Feb 28 03:48:13 np0005634017 kernel: Linux version 5.14.0-686.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026
Feb 28 03:48:13 np0005634017 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 28 03:48:13 np0005634017 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 28 03:48:13 np0005634017 kernel: BIOS-provided physical RAM map:
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 28 03:48:13 np0005634017 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 28 03:48:13 np0005634017 kernel: NX (Execute Disable) protection: active
Feb 28 03:48:13 np0005634017 kernel: APIC: Static calls initialized
Feb 28 03:48:13 np0005634017 kernel: SMBIOS 2.8 present.
Feb 28 03:48:13 np0005634017 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 28 03:48:13 np0005634017 kernel: Hypervisor detected: KVM
Feb 28 03:48:13 np0005634017 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 28 03:48:13 np0005634017 kernel: kvm-clock: using sched offset of 9303144279 cycles
Feb 28 03:48:13 np0005634017 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 28 03:48:13 np0005634017 kernel: tsc: Detected 2800.000 MHz processor
Feb 28 03:48:13 np0005634017 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 28 03:48:13 np0005634017 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 28 03:48:13 np0005634017 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 28 03:48:13 np0005634017 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 28 03:48:13 np0005634017 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 28 03:48:13 np0005634017 kernel: Using GB pages for direct mapping
Feb 28 03:48:13 np0005634017 kernel: RAMDISK: [mem 0x1b6ca000-0x29b5cfff]
Feb 28 03:48:13 np0005634017 kernel: ACPI: Early table checksum verification disabled
Feb 28 03:48:13 np0005634017 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 28 03:48:13 np0005634017 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 03:48:13 np0005634017 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 03:48:13 np0005634017 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 03:48:13 np0005634017 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 28 03:48:13 np0005634017 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 03:48:13 np0005634017 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 03:48:13 np0005634017 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 28 03:48:13 np0005634017 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 28 03:48:13 np0005634017 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 28 03:48:13 np0005634017 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 28 03:48:13 np0005634017 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 28 03:48:13 np0005634017 kernel: No NUMA configuration found
Feb 28 03:48:13 np0005634017 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 28 03:48:13 np0005634017 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 28 03:48:13 np0005634017 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 28 03:48:13 np0005634017 kernel: Zone ranges:
Feb 28 03:48:13 np0005634017 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 28 03:48:13 np0005634017 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 28 03:48:13 np0005634017 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 28 03:48:13 np0005634017 kernel:  Device   empty
Feb 28 03:48:13 np0005634017 kernel: Movable zone start for each node
Feb 28 03:48:13 np0005634017 kernel: Early memory node ranges
Feb 28 03:48:13 np0005634017 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 28 03:48:13 np0005634017 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 28 03:48:13 np0005634017 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 28 03:48:13 np0005634017 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 28 03:48:13 np0005634017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 28 03:48:13 np0005634017 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 28 03:48:13 np0005634017 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 28 03:48:13 np0005634017 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 28 03:48:13 np0005634017 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 28 03:48:13 np0005634017 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 28 03:48:13 np0005634017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 28 03:48:13 np0005634017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 28 03:48:13 np0005634017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 28 03:48:13 np0005634017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 28 03:48:13 np0005634017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 28 03:48:13 np0005634017 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 28 03:48:13 np0005634017 kernel: TSC deadline timer available
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Max. logical packages:   8
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Max. logical dies:       8
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Max. dies per package:   1
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Max. threads per core:   1
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Num. cores per package:     1
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Num. threads per package:   1
Feb 28 03:48:13 np0005634017 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 28 03:48:13 np0005634017 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 28 03:48:13 np0005634017 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 28 03:48:13 np0005634017 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 28 03:48:13 np0005634017 kernel: Booting paravirtualized kernel on KVM
Feb 28 03:48:13 np0005634017 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 28 03:48:13 np0005634017 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 28 03:48:13 np0005634017 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 28 03:48:13 np0005634017 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 28 03:48:13 np0005634017 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 28 03:48:13 np0005634017 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64", will be passed to user space.
Feb 28 03:48:13 np0005634017 kernel: random: crng init done
Feb 28 03:48:13 np0005634017 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: Fallback order for Node 0: 0 
Feb 28 03:48:13 np0005634017 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 28 03:48:13 np0005634017 kernel: Policy zone: Normal
Feb 28 03:48:13 np0005634017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 28 03:48:13 np0005634017 kernel: software IO TLB: area num 8.
Feb 28 03:48:13 np0005634017 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 28 03:48:13 np0005634017 kernel: ftrace: allocating 49605 entries in 194 pages
Feb 28 03:48:13 np0005634017 kernel: ftrace: allocated 194 pages with 3 groups
Feb 28 03:48:13 np0005634017 kernel: Dynamic Preempt: voluntary
Feb 28 03:48:13 np0005634017 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 28 03:48:13 np0005634017 kernel: rcu: 	RCU event tracing is enabled.
Feb 28 03:48:13 np0005634017 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 28 03:48:13 np0005634017 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb 28 03:48:13 np0005634017 kernel: 	Rude variant of Tasks RCU enabled.
Feb 28 03:48:13 np0005634017 kernel: 	Tracing variant of Tasks RCU enabled.
Feb 28 03:48:13 np0005634017 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 28 03:48:13 np0005634017 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 28 03:48:13 np0005634017 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 28 03:48:13 np0005634017 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 28 03:48:13 np0005634017 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 28 03:48:13 np0005634017 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 28 03:48:13 np0005634017 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 28 03:48:13 np0005634017 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 28 03:48:13 np0005634017 kernel: Console: colour VGA+ 80x25
Feb 28 03:48:13 np0005634017 kernel: printk: console [ttyS0] enabled
Feb 28 03:48:13 np0005634017 kernel: ACPI: Core revision 20230331
Feb 28 03:48:13 np0005634017 kernel: APIC: Switch to symmetric I/O mode setup
Feb 28 03:48:13 np0005634017 kernel: x2apic enabled
Feb 28 03:48:13 np0005634017 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 28 03:48:13 np0005634017 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 28 03:48:13 np0005634017 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 28 03:48:13 np0005634017 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 28 03:48:13 np0005634017 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 28 03:48:13 np0005634017 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 28 03:48:13 np0005634017 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 28 03:48:13 np0005634017 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 28 03:48:13 np0005634017 kernel: Spectre V2 : Mitigation: Retpolines
Feb 28 03:48:13 np0005634017 kernel: RETBleed: Mitigation: untrained return thunk
Feb 28 03:48:13 np0005634017 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 28 03:48:13 np0005634017 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 28 03:48:13 np0005634017 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 28 03:48:13 np0005634017 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 28 03:48:13 np0005634017 kernel: active return thunk: retbleed_return_thunk
Feb 28 03:48:13 np0005634017 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 28 03:48:13 np0005634017 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 28 03:48:13 np0005634017 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 28 03:48:13 np0005634017 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 28 03:48:13 np0005634017 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 28 03:48:13 np0005634017 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 28 03:48:13 np0005634017 kernel: Freeing SMP alternatives memory: 40K
Feb 28 03:48:13 np0005634017 kernel: pid_max: default: 32768 minimum: 301
Feb 28 03:48:13 np0005634017 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 28 03:48:13 np0005634017 kernel: landlock: Up and running.
Feb 28 03:48:13 np0005634017 kernel: Yama: becoming mindful.
Feb 28 03:48:13 np0005634017 kernel: SELinux:  Initializing.
Feb 28 03:48:13 np0005634017 kernel: LSM support for eBPF active
Feb 28 03:48:13 np0005634017 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 28 03:48:13 np0005634017 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 28 03:48:13 np0005634017 kernel: ... version:                0
Feb 28 03:48:13 np0005634017 kernel: ... bit width:              48
Feb 28 03:48:13 np0005634017 kernel: ... generic registers:      6
Feb 28 03:48:13 np0005634017 kernel: ... value mask:             0000ffffffffffff
Feb 28 03:48:13 np0005634017 kernel: ... max period:             00007fffffffffff
Feb 28 03:48:13 np0005634017 kernel: ... fixed-purpose events:   0
Feb 28 03:48:13 np0005634017 kernel: ... event mask:             000000000000003f
Feb 28 03:48:13 np0005634017 kernel: signal: max sigframe size: 1776
Feb 28 03:48:13 np0005634017 kernel: rcu: Hierarchical SRCU implementation.
Feb 28 03:48:13 np0005634017 kernel: rcu: 	Max phase no-delay instances is 400.
Feb 28 03:48:13 np0005634017 kernel: smp: Bringing up secondary CPUs ...
Feb 28 03:48:13 np0005634017 kernel: smpboot: x86: Booting SMP configuration:
Feb 28 03:48:13 np0005634017 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 28 03:48:13 np0005634017 kernel: smp: Brought up 1 node, 8 CPUs
Feb 28 03:48:13 np0005634017 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 28 03:48:13 np0005634017 kernel: node 0 deferred pages initialised in 9ms
Feb 28 03:48:13 np0005634017 kernel: Memory: 7617716K/8388068K available (16384K kernel code, 5797K rwdata, 13956K rodata, 4204K init, 7172K bss, 764460K reserved, 0K cma-reserved)
Feb 28 03:48:13 np0005634017 kernel: devtmpfs: initialized
Feb 28 03:48:13 np0005634017 kernel: x86/mm: Memory block size: 128MB
Feb 28 03:48:13 np0005634017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 28 03:48:13 np0005634017 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 28 03:48:13 np0005634017 kernel: pinctrl core: initialized pinctrl subsystem
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 28 03:48:13 np0005634017 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 28 03:48:13 np0005634017 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 28 03:48:13 np0005634017 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 28 03:48:13 np0005634017 kernel: audit: initializing netlink subsys (disabled)
Feb 28 03:48:13 np0005634017 kernel: audit: type=2000 audit(1772268492.122:1): state=initialized audit_enabled=0 res=1
Feb 28 03:48:13 np0005634017 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 28 03:48:13 np0005634017 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 28 03:48:13 np0005634017 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 28 03:48:13 np0005634017 kernel: cpuidle: using governor menu
Feb 28 03:48:13 np0005634017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 28 03:48:13 np0005634017 kernel: PCI: Using configuration type 1 for base access
Feb 28 03:48:13 np0005634017 kernel: PCI: Using configuration type 1 for extended access
Feb 28 03:48:13 np0005634017 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 28 03:48:13 np0005634017 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 28 03:48:13 np0005634017 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 28 03:48:13 np0005634017 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 28 03:48:13 np0005634017 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 28 03:48:13 np0005634017 kernel: Demotion targets for Node 0: null
Feb 28 03:48:13 np0005634017 kernel: cryptd: max_cpu_qlen set to 1000
Feb 28 03:48:13 np0005634017 kernel: ACPI: Added _OSI(Module Device)
Feb 28 03:48:13 np0005634017 kernel: ACPI: Added _OSI(Processor Device)
Feb 28 03:48:13 np0005634017 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 28 03:48:13 np0005634017 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 28 03:48:13 np0005634017 kernel: ACPI: Interpreter enabled
Feb 28 03:48:13 np0005634017 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 28 03:48:13 np0005634017 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 28 03:48:13 np0005634017 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 28 03:48:13 np0005634017 kernel: PCI: Using E820 reservations for host bridge windows
Feb 28 03:48:13 np0005634017 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 28 03:48:13 np0005634017 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 28 03:48:13 np0005634017 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [3] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [4] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [5] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [6] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [7] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [8] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [9] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [10] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [11] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [12] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [13] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [14] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [15] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [16] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [17] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [18] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [19] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [20] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [21] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [22] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [23] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [24] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [25] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [26] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [27] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [28] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [29] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [30] registered
Feb 28 03:48:13 np0005634017 kernel: acpiphp: Slot [31] registered
Feb 28 03:48:13 np0005634017 kernel: PCI host bridge to bus 0000:00
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 28 03:48:13 np0005634017 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 28 03:48:13 np0005634017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 28 03:48:13 np0005634017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 28 03:48:13 np0005634017 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 28 03:48:13 np0005634017 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 28 03:48:13 np0005634017 kernel: iommu: Default domain type: Translated
Feb 28 03:48:13 np0005634017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 28 03:48:13 np0005634017 kernel: SCSI subsystem initialized
Feb 28 03:48:13 np0005634017 kernel: ACPI: bus type USB registered
Feb 28 03:48:13 np0005634017 kernel: usbcore: registered new interface driver usbfs
Feb 28 03:48:13 np0005634017 kernel: usbcore: registered new interface driver hub
Feb 28 03:48:13 np0005634017 kernel: usbcore: registered new device driver usb
Feb 28 03:48:13 np0005634017 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 28 03:48:13 np0005634017 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 28 03:48:13 np0005634017 kernel: PTP clock support registered
Feb 28 03:48:13 np0005634017 kernel: EDAC MC: Ver: 3.0.0
Feb 28 03:48:13 np0005634017 kernel: NetLabel: Initializing
Feb 28 03:48:13 np0005634017 kernel: NetLabel:  domain hash size = 128
Feb 28 03:48:13 np0005634017 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 28 03:48:13 np0005634017 kernel: NetLabel:  unlabeled traffic allowed by default
Feb 28 03:48:13 np0005634017 kernel: PCI: Using ACPI for IRQ routing
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 28 03:48:13 np0005634017 kernel: vgaarb: loaded
Feb 28 03:48:13 np0005634017 kernel: clocksource: Switched to clocksource kvm-clock
Feb 28 03:48:13 np0005634017 kernel: VFS: Disk quotas dquot_6.6.0
Feb 28 03:48:13 np0005634017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 28 03:48:13 np0005634017 kernel: pnp: PnP ACPI init
Feb 28 03:48:13 np0005634017 kernel: pnp: PnP ACPI: found 5 devices
Feb 28 03:48:13 np0005634017 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_INET protocol family
Feb 28 03:48:13 np0005634017 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 28 03:48:13 np0005634017 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_XDP protocol family
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 28 03:48:13 np0005634017 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 28 03:48:13 np0005634017 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 28 03:48:13 np0005634017 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 35311 usecs
Feb 28 03:48:13 np0005634017 kernel: PCI: CLS 0 bytes, default 64
Feb 28 03:48:13 np0005634017 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 28 03:48:13 np0005634017 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 28 03:48:13 np0005634017 kernel: ACPI: bus type thunderbolt registered
Feb 28 03:48:13 np0005634017 kernel: Trying to unpack rootfs image as initramfs...
Feb 28 03:48:13 np0005634017 kernel: Initialise system trusted keyrings
Feb 28 03:48:13 np0005634017 kernel: Key type blacklist registered
Feb 28 03:48:13 np0005634017 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 28 03:48:13 np0005634017 kernel: zbud: loaded
Feb 28 03:48:13 np0005634017 kernel: integrity: Platform Keyring initialized
Feb 28 03:48:13 np0005634017 kernel: integrity: Machine keyring initialized
Feb 28 03:48:13 np0005634017 kernel: Freeing initrd memory: 234060K
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_ALG protocol family
Feb 28 03:48:13 np0005634017 kernel: xor: automatically using best checksumming function   avx       
Feb 28 03:48:13 np0005634017 kernel: Key type asymmetric registered
Feb 28 03:48:13 np0005634017 kernel: Asymmetric key parser 'x509' registered
Feb 28 03:48:13 np0005634017 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 28 03:48:13 np0005634017 kernel: io scheduler mq-deadline registered
Feb 28 03:48:13 np0005634017 kernel: io scheduler kyber registered
Feb 28 03:48:13 np0005634017 kernel: io scheduler bfq registered
Feb 28 03:48:13 np0005634017 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 28 03:48:13 np0005634017 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 28 03:48:13 np0005634017 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 28 03:48:13 np0005634017 kernel: ACPI: button: Power Button [PWRF]
Feb 28 03:48:13 np0005634017 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 28 03:48:13 np0005634017 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 28 03:48:13 np0005634017 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 28 03:48:13 np0005634017 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 28 03:48:13 np0005634017 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 28 03:48:13 np0005634017 kernel: Non-volatile memory driver v1.3
Feb 28 03:48:13 np0005634017 kernel: rdac: device handler registered
Feb 28 03:48:13 np0005634017 kernel: hp_sw: device handler registered
Feb 28 03:48:13 np0005634017 kernel: emc: device handler registered
Feb 28 03:48:13 np0005634017 kernel: alua: device handler registered
Feb 28 03:48:13 np0005634017 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 28 03:48:13 np0005634017 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 28 03:48:13 np0005634017 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 28 03:48:13 np0005634017 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 28 03:48:13 np0005634017 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 28 03:48:13 np0005634017 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 28 03:48:13 np0005634017 kernel: usb usb1: Product: UHCI Host Controller
Feb 28 03:48:13 np0005634017 kernel: usb usb1: Manufacturer: Linux 5.14.0-686.el9.x86_64 uhci_hcd
Feb 28 03:48:13 np0005634017 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 28 03:48:13 np0005634017 kernel: hub 1-0:1.0: USB hub found
Feb 28 03:48:13 np0005634017 kernel: hub 1-0:1.0: 2 ports detected
Feb 28 03:48:13 np0005634017 kernel: usbcore: registered new interface driver usbserial_generic
Feb 28 03:48:13 np0005634017 kernel: usbserial: USB Serial support registered for generic
Feb 28 03:48:13 np0005634017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 28 03:48:13 np0005634017 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 28 03:48:13 np0005634017 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 28 03:48:13 np0005634017 kernel: mousedev: PS/2 mouse device common for all mice
Feb 28 03:48:13 np0005634017 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 28 03:48:13 np0005634017 kernel: rtc_cmos 00:04: registered as rtc0
Feb 28 03:48:13 np0005634017 kernel: rtc_cmos 00:04: setting system clock to 2026-02-28T08:48:12 UTC (1772268492)
Feb 28 03:48:13 np0005634017 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 28 03:48:13 np0005634017 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 28 03:48:13 np0005634017 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 28 03:48:13 np0005634017 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 28 03:48:13 np0005634017 kernel: usbcore: registered new interface driver usbhid
Feb 28 03:48:13 np0005634017 kernel: usbhid: USB HID core driver
Feb 28 03:48:13 np0005634017 kernel: drop_monitor: Initializing network drop monitor service
Feb 28 03:48:13 np0005634017 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 28 03:48:13 np0005634017 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 28 03:48:13 np0005634017 kernel: Initializing XFRM netlink socket
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_INET6 protocol family
Feb 28 03:48:13 np0005634017 kernel: Segment Routing with IPv6
Feb 28 03:48:13 np0005634017 kernel: NET: Registered PF_PACKET protocol family
Feb 28 03:48:13 np0005634017 kernel: mpls_gso: MPLS GSO support
Feb 28 03:48:13 np0005634017 kernel: IPI shorthand broadcast: enabled
Feb 28 03:48:13 np0005634017 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 28 03:48:13 np0005634017 kernel: AES CTR mode by8 optimization enabled
Feb 28 03:48:13 np0005634017 kernel: sched_clock: Marking stable (1064002120, 143548630)->(1313523170, -105972420)
Feb 28 03:48:13 np0005634017 kernel: registered taskstats version 1
Feb 28 03:48:13 np0005634017 kernel: Loading compiled-in X.509 certificates
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 28 03:48:13 np0005634017 kernel: Demotion targets for Node 0: null
Feb 28 03:48:13 np0005634017 kernel: page_owner is disabled
Feb 28 03:48:13 np0005634017 kernel: Key type .fscrypt registered
Feb 28 03:48:13 np0005634017 kernel: Key type fscrypt-provisioning registered
Feb 28 03:48:13 np0005634017 kernel: Key type big_key registered
Feb 28 03:48:13 np0005634017 kernel: Key type encrypted registered
Feb 28 03:48:13 np0005634017 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 28 03:48:13 np0005634017 kernel: Loading compiled-in module X.509 certificates
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 28 03:48:13 np0005634017 kernel: ima: Allocated hash algorithm: sha256
Feb 28 03:48:13 np0005634017 kernel: ima: No architecture policies found
Feb 28 03:48:13 np0005634017 kernel: evm: Initialising EVM extended attributes:
Feb 28 03:48:13 np0005634017 kernel: evm: security.selinux
Feb 28 03:48:13 np0005634017 kernel: evm: security.SMACK64 (disabled)
Feb 28 03:48:13 np0005634017 kernel: evm: security.SMACK64EXEC (disabled)
Feb 28 03:48:13 np0005634017 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 28 03:48:13 np0005634017 kernel: evm: security.SMACK64MMAP (disabled)
Feb 28 03:48:13 np0005634017 kernel: evm: security.apparmor (disabled)
Feb 28 03:48:13 np0005634017 kernel: evm: security.ima
Feb 28 03:48:13 np0005634017 kernel: evm: security.capability
Feb 28 03:48:13 np0005634017 kernel: evm: HMAC attrs: 0x1
Feb 28 03:48:13 np0005634017 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 28 03:48:13 np0005634017 kernel: Running certificate verification RSA selftest
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 28 03:48:13 np0005634017 kernel: Running certificate verification ECDSA selftest
Feb 28 03:48:13 np0005634017 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 28 03:48:13 np0005634017 kernel: clk: Disabling unused clocks
Feb 28 03:48:13 np0005634017 kernel: Freeing unused decrypted memory: 2028K
Feb 28 03:48:13 np0005634017 kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 28 03:48:13 np0005634017 kernel: Write protecting the kernel read-only data: 30720k
Feb 28 03:48:13 np0005634017 kernel: Freeing unused kernel image (rodata/data gap) memory: 380K
Feb 28 03:48:13 np0005634017 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 28 03:48:13 np0005634017 kernel: Run /init as init process
Feb 28 03:48:13 np0005634017 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 28 03:48:13 np0005634017 systemd: Detected virtualization kvm.
Feb 28 03:48:13 np0005634017 systemd: Detected architecture x86-64.
Feb 28 03:48:13 np0005634017 systemd: Running in initrd.
Feb 28 03:48:13 np0005634017 systemd: No hostname configured, using default hostname.
Feb 28 03:48:13 np0005634017 systemd: Hostname set to <localhost>.
Feb 28 03:48:13 np0005634017 systemd: Initializing machine ID from VM UUID.
Feb 28 03:48:13 np0005634017 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 28 03:48:13 np0005634017 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 28 03:48:13 np0005634017 kernel: usb 1-1: Product: QEMU USB Tablet
Feb 28 03:48:13 np0005634017 kernel: usb 1-1: Manufacturer: QEMU
Feb 28 03:48:13 np0005634017 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 28 03:48:13 np0005634017 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 28 03:48:13 np0005634017 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 28 03:48:13 np0005634017 systemd: Queued start job for default target Initrd Default Target.
Feb 28 03:48:13 np0005634017 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb 28 03:48:13 np0005634017 systemd: Reached target Local Encrypted Volumes.
Feb 28 03:48:13 np0005634017 systemd: Reached target Initrd /usr File System.
Feb 28 03:48:13 np0005634017 systemd: Reached target Local File Systems.
Feb 28 03:48:13 np0005634017 systemd: Reached target Path Units.
Feb 28 03:48:13 np0005634017 systemd: Reached target Slice Units.
Feb 28 03:48:13 np0005634017 systemd: Reached target Swaps.
Feb 28 03:48:13 np0005634017 systemd: Reached target Timer Units.
Feb 28 03:48:13 np0005634017 systemd: Listening on D-Bus System Message Bus Socket.
Feb 28 03:48:13 np0005634017 systemd: Listening on Journal Socket (/dev/log).
Feb 28 03:48:13 np0005634017 systemd: Listening on Journal Socket.
Feb 28 03:48:13 np0005634017 systemd: Listening on udev Control Socket.
Feb 28 03:48:13 np0005634017 systemd: Listening on udev Kernel Socket.
Feb 28 03:48:13 np0005634017 systemd: Reached target Socket Units.
Feb 28 03:48:13 np0005634017 systemd: Starting Create List of Static Device Nodes...
Feb 28 03:48:13 np0005634017 systemd: Starting Journal Service...
Feb 28 03:48:13 np0005634017 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 28 03:48:13 np0005634017 systemd: Starting Apply Kernel Variables...
Feb 28 03:48:13 np0005634017 systemd: Starting Create System Users...
Feb 28 03:48:13 np0005634017 systemd: Starting Setup Virtual Console...
Feb 28 03:48:13 np0005634017 systemd: Finished Create List of Static Device Nodes.
Feb 28 03:48:13 np0005634017 systemd: Finished Apply Kernel Variables.
Feb 28 03:48:13 np0005634017 systemd: Finished Create System Users.
Feb 28 03:48:13 np0005634017 systemd-journald[307]: Journal started
Feb 28 03:48:13 np0005634017 systemd-journald[307]: Runtime Journal (/run/log/journal/df54d9c918244fc9835d7a3299a4aad4) is 8.0M, max 153.6M, 145.6M free.
Feb 28 03:48:13 np0005634017 systemd-sysusers[312]: Creating group 'users' with GID 100.
Feb 28 03:48:13 np0005634017 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Feb 28 03:48:13 np0005634017 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 28 03:48:13 np0005634017 systemd: Started Journal Service.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 28 03:48:13 np0005634017 systemd[1]: Starting Create Volatile Files and Directories...
Feb 28 03:48:13 np0005634017 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 28 03:48:13 np0005634017 systemd[1]: Finished Create Volatile Files and Directories.
Feb 28 03:48:13 np0005634017 systemd[1]: Finished Setup Virtual Console.
Feb 28 03:48:13 np0005634017 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting dracut cmdline hook...
Feb 28 03:48:13 np0005634017 dracut-cmdline[326]: dracut-9 dracut-057-110.git20260130.el9
Feb 28 03:48:13 np0005634017 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 28 03:48:13 np0005634017 systemd[1]: Finished dracut cmdline hook.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting dracut pre-udev hook...
Feb 28 03:48:13 np0005634017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 28 03:48:13 np0005634017 kernel: device-mapper: uevent: version 1.0.3
Feb 28 03:48:13 np0005634017 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 28 03:48:13 np0005634017 kernel: RPC: Registered named UNIX socket transport module.
Feb 28 03:48:13 np0005634017 kernel: RPC: Registered udp transport module.
Feb 28 03:48:13 np0005634017 kernel: RPC: Registered tcp transport module.
Feb 28 03:48:13 np0005634017 kernel: RPC: Registered tcp-with-tls transport module.
Feb 28 03:48:13 np0005634017 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 28 03:48:13 np0005634017 rpc.statd[441]: Version 2.5.4 starting
Feb 28 03:48:13 np0005634017 rpc.statd[441]: Initializing NSM state
Feb 28 03:48:13 np0005634017 rpc.idmapd[446]: Setting log level to 0
Feb 28 03:48:13 np0005634017 systemd[1]: Finished dracut pre-udev hook.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 28 03:48:13 np0005634017 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Feb 28 03:48:13 np0005634017 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting dracut pre-trigger hook...
Feb 28 03:48:13 np0005634017 systemd[1]: Finished dracut pre-trigger hook.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting Coldplug All udev Devices...
Feb 28 03:48:13 np0005634017 systemd[1]: Created slice Slice /system/modprobe.
Feb 28 03:48:13 np0005634017 systemd[1]: Starting Load Kernel Module configfs...
Feb 28 03:48:13 np0005634017 systemd[1]: Finished Coldplug All udev Devices.
Feb 28 03:48:13 np0005634017 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 28 03:48:13 np0005634017 systemd[1]: Finished Load Kernel Module configfs.
Feb 28 03:48:13 np0005634017 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 28 03:48:13 np0005634017 systemd[1]: Reached target Network.
Feb 28 03:48:13 np0005634017 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 28 03:48:13 np0005634017 systemd[1]: Starting dracut initqueue hook...
Feb 28 03:48:13 np0005634017 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 28 03:48:13 np0005634017 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 28 03:48:13 np0005634017 kernel: vda: vda1
Feb 28 03:48:13 np0005634017 systemd-udevd[476]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 03:48:13 np0005634017 systemd[1]: Found device /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 28 03:48:13 np0005634017 kernel: ACPI: bus type drm_connector registered
Feb 28 03:48:13 np0005634017 kernel: scsi host0: ata_piix
Feb 28 03:48:13 np0005634017 kernel: scsi host1: ata_piix
Feb 28 03:48:13 np0005634017 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 28 03:48:13 np0005634017 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 28 03:48:13 np0005634017 systemd[1]: Reached target Initrd Root Device.
Feb 28 03:48:14 np0005634017 kernel: ata1: found unknown device (class 0)
Feb 28 03:48:14 np0005634017 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 28 03:48:14 np0005634017 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 28 03:48:14 np0005634017 systemd[1]: Mounting Kernel Configuration File System...
Feb 28 03:48:14 np0005634017 systemd[1]: Mounted Kernel Configuration File System.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target System Initialization.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Basic System.
Feb 28 03:48:14 np0005634017 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 28 03:48:14 np0005634017 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 28 03:48:14 np0005634017 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 28 03:48:14 np0005634017 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 28 03:48:14 np0005634017 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 28 03:48:14 np0005634017 kernel: Console: switching to colour dummy device 80x25
Feb 28 03:48:14 np0005634017 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 28 03:48:14 np0005634017 kernel: [drm] features: -context_init
Feb 28 03:48:14 np0005634017 kernel: [drm] number of scanouts: 1
Feb 28 03:48:14 np0005634017 kernel: [drm] number of cap sets: 0
Feb 28 03:48:14 np0005634017 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 28 03:48:14 np0005634017 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 28 03:48:14 np0005634017 kernel: Console: switching to colour frame buffer device 128x48
Feb 28 03:48:14 np0005634017 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 28 03:48:14 np0005634017 systemd[1]: Finished dracut initqueue hook.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Remote Encrypted Volumes.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Remote File Systems.
Feb 28 03:48:14 np0005634017 systemd[1]: Starting dracut pre-mount hook...
Feb 28 03:48:14 np0005634017 systemd[1]: Finished dracut pre-mount hook.
Feb 28 03:48:14 np0005634017 systemd[1]: Starting File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b...
Feb 28 03:48:14 np0005634017 systemd-fsck[567]: /usr/sbin/fsck.xfs: XFS file system.
Feb 28 03:48:14 np0005634017 systemd[1]: Finished File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 28 03:48:14 np0005634017 systemd[1]: Mounting /sysroot...
Feb 28 03:48:14 np0005634017 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 28 03:48:14 np0005634017 kernel: XFS (vda1): Mounting V5 Filesystem 37391a25-080d-4723-8b0c-cb88a559875b
Feb 28 03:48:14 np0005634017 kernel: XFS (vda1): Ending clean mount
Feb 28 03:48:14 np0005634017 systemd[1]: Mounted /sysroot.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Initrd Root File System.
Feb 28 03:48:14 np0005634017 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 28 03:48:14 np0005634017 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 28 03:48:14 np0005634017 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Initrd File Systems.
Feb 28 03:48:14 np0005634017 systemd[1]: Reached target Initrd Default Target.
Feb 28 03:48:14 np0005634017 systemd[1]: Starting dracut mount hook...
Feb 28 03:48:14 np0005634017 systemd[1]: Finished dracut mount hook.
Feb 28 03:48:14 np0005634017 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 28 03:48:14 np0005634017 rpc.idmapd[446]: exiting on signal 15
Feb 28 03:48:14 np0005634017 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 28 03:48:14 np0005634017 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 28 03:48:15 np0005634017 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Network.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Timer Units.
Feb 28 03:48:15 np0005634017 systemd[1]: dbus.socket: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Initrd Default Target.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Basic System.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Initrd Root Device.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Initrd /usr File System.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Path Units.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Remote File Systems.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Slice Units.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Socket Units.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target System Initialization.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Local File Systems.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Swaps.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut mount hook.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut pre-mount hook.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped target Local Encrypted Volumes.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut initqueue hook.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Apply Kernel Variables.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Create Volatile Files and Directories.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Coldplug All udev Devices.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut pre-trigger hook.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Setup Virtual Console.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-udevd.service: Consumed 1.039s CPU time.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Closed udev Control Socket.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Closed udev Kernel Socket.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut pre-udev hook.
Feb 28 03:48:15 np0005634017 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped dracut cmdline hook.
Feb 28 03:48:15 np0005634017 systemd[1]: Starting Cleanup udev Database...
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 28 03:48:15 np0005634017 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Create List of Static Device Nodes.
Feb 28 03:48:15 np0005634017 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Stopped Create System Users.
Feb 28 03:48:15 np0005634017 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 28 03:48:15 np0005634017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 28 03:48:15 np0005634017 systemd[1]: Finished Cleanup udev Database.
Feb 28 03:48:15 np0005634017 systemd[1]: Reached target Switch Root.
Feb 28 03:48:15 np0005634017 systemd[1]: Starting Switch Root...
Feb 28 03:48:15 np0005634017 systemd[1]: Switching root.
Feb 28 03:48:15 np0005634017 systemd-journald[307]: Journal stopped
Feb 28 03:48:16 np0005634017 systemd-journald: Received SIGTERM from PID 1 (systemd).
Feb 28 03:48:16 np0005634017 kernel: audit: type=1404 audit(1772268495.328:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 03:48:16 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 03:48:16 np0005634017 kernel: audit: type=1403 audit(1772268495.432:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 28 03:48:16 np0005634017 systemd: Successfully loaded SELinux policy in 108.127ms.
Feb 28 03:48:16 np0005634017 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.758ms.
Feb 28 03:48:16 np0005634017 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 28 03:48:16 np0005634017 systemd: Detected virtualization kvm.
Feb 28 03:48:16 np0005634017 systemd: Detected architecture x86-64.
Feb 28 03:48:16 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 03:48:16 np0005634017 systemd: initrd-switch-root.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd: Stopped Switch Root.
Feb 28 03:48:16 np0005634017 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 28 03:48:16 np0005634017 systemd: Created slice Slice /system/getty.
Feb 28 03:48:16 np0005634017 systemd: Created slice Slice /system/serial-getty.
Feb 28 03:48:16 np0005634017 systemd: Created slice Slice /system/sshd-keygen.
Feb 28 03:48:16 np0005634017 systemd: Created slice User and Session Slice.
Feb 28 03:48:16 np0005634017 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb 28 03:48:16 np0005634017 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb 28 03:48:16 np0005634017 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 28 03:48:16 np0005634017 systemd: Reached target Local Encrypted Volumes.
Feb 28 03:48:16 np0005634017 systemd: Stopped target Switch Root.
Feb 28 03:48:16 np0005634017 systemd: Stopped target Initrd File Systems.
Feb 28 03:48:16 np0005634017 systemd: Stopped target Initrd Root File System.
Feb 28 03:48:16 np0005634017 systemd: Reached target Local Integrity Protected Volumes.
Feb 28 03:48:16 np0005634017 systemd: Reached target Path Units.
Feb 28 03:48:16 np0005634017 systemd: Reached target rpc_pipefs.target.
Feb 28 03:48:16 np0005634017 systemd: Reached target Slice Units.
Feb 28 03:48:16 np0005634017 systemd: Reached target Swaps.
Feb 28 03:48:16 np0005634017 systemd: Reached target Local Verity Protected Volumes.
Feb 28 03:48:16 np0005634017 systemd: Listening on RPCbind Server Activation Socket.
Feb 28 03:48:16 np0005634017 systemd: Reached target RPC Port Mapper.
Feb 28 03:48:16 np0005634017 systemd: Listening on Process Core Dump Socket.
Feb 28 03:48:16 np0005634017 systemd: Listening on initctl Compatibility Named Pipe.
Feb 28 03:48:16 np0005634017 systemd: Listening on udev Control Socket.
Feb 28 03:48:16 np0005634017 systemd: Listening on udev Kernel Socket.
Feb 28 03:48:16 np0005634017 systemd: Mounting Huge Pages File System...
Feb 28 03:48:16 np0005634017 systemd: Mounting POSIX Message Queue File System...
Feb 28 03:48:16 np0005634017 systemd: Mounting Kernel Debug File System...
Feb 28 03:48:16 np0005634017 systemd: Mounting Kernel Trace File System...
Feb 28 03:48:16 np0005634017 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 28 03:48:16 np0005634017 systemd: Starting Create List of Static Device Nodes...
Feb 28 03:48:16 np0005634017 systemd: Starting Load Kernel Module configfs...
Feb 28 03:48:16 np0005634017 systemd: Starting Load Kernel Module drm...
Feb 28 03:48:16 np0005634017 systemd: Starting Load Kernel Module efi_pstore...
Feb 28 03:48:16 np0005634017 systemd: Starting Load Kernel Module fuse...
Feb 28 03:48:16 np0005634017 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 28 03:48:16 np0005634017 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd: Stopped File System Check on Root Device.
Feb 28 03:48:16 np0005634017 systemd: Stopped Journal Service.
Feb 28 03:48:16 np0005634017 kernel: fuse: init (API version 7.37)
Feb 28 03:48:16 np0005634017 systemd: Starting Journal Service...
Feb 28 03:48:16 np0005634017 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 28 03:48:16 np0005634017 systemd: Starting Generate network units from Kernel command line...
Feb 28 03:48:16 np0005634017 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 28 03:48:16 np0005634017 systemd: Starting Remount Root and Kernel File Systems...
Feb 28 03:48:16 np0005634017 systemd-journald[694]: Journal started
Feb 28 03:48:16 np0005634017 systemd-journald[694]: Runtime Journal (/run/log/journal/45af4031c1bdc072f1f045c25038675f) is 8.0M, max 153.6M, 145.6M free.
Feb 28 03:48:16 np0005634017 systemd[1]: Queued start job for default target Multi-User System.
Feb 28 03:48:16 np0005634017 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 28 03:48:16 np0005634017 systemd: Starting Apply Kernel Variables...
Feb 28 03:48:16 np0005634017 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 28 03:48:16 np0005634017 systemd: Starting Coldplug All udev Devices...
Feb 28 03:48:16 np0005634017 systemd: Started Journal Service.
Feb 28 03:48:16 np0005634017 systemd[1]: Mounted Huge Pages File System.
Feb 28 03:48:16 np0005634017 systemd[1]: Mounted POSIX Message Queue File System.
Feb 28 03:48:16 np0005634017 systemd[1]: Mounted Kernel Debug File System.
Feb 28 03:48:16 np0005634017 systemd[1]: Mounted Kernel Trace File System.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Create List of Static Device Nodes.
Feb 28 03:48:16 np0005634017 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Load Kernel Module configfs.
Feb 28 03:48:16 np0005634017 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Load Kernel Module drm.
Feb 28 03:48:16 np0005634017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 28 03:48:16 np0005634017 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Load Kernel Module fuse.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Generate network units from Kernel command line.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Apply Kernel Variables.
Feb 28 03:48:16 np0005634017 systemd[1]: Mounting FUSE Control File System...
Feb 28 03:48:16 np0005634017 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Rebuild Hardware Database...
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 28 03:48:16 np0005634017 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Load/Save OS Random Seed...
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Create System Users...
Feb 28 03:48:16 np0005634017 systemd[1]: Mounted FUSE Control File System.
Feb 28 03:48:16 np0005634017 systemd-journald[694]: Runtime Journal (/run/log/journal/45af4031c1bdc072f1f045c25038675f) is 8.0M, max 153.6M, 145.6M free.
Feb 28 03:48:16 np0005634017 systemd-journald[694]: Received client request to flush runtime journal.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Coldplug All udev Devices.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Load/Save OS Random Seed.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Create System Users.
Feb 28 03:48:16 np0005634017 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 28 03:48:16 np0005634017 systemd[1]: Reached target Preparation for Local File Systems.
Feb 28 03:48:16 np0005634017 systemd[1]: Reached target Local File Systems.
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 28 03:48:16 np0005634017 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 28 03:48:16 np0005634017 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 28 03:48:16 np0005634017 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Automatic Boot Loader Update...
Feb 28 03:48:16 np0005634017 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Create Volatile Files and Directories...
Feb 28 03:48:16 np0005634017 bootctl[712]: Couldn't find EFI system partition, skipping.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Automatic Boot Loader Update.
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Create Volatile Files and Directories.
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Security Auditing Service...
Feb 28 03:48:16 np0005634017 systemd[1]: Starting RPC Bind...
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Rebuild Journal Catalog...
Feb 28 03:48:16 np0005634017 auditd[718]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 28 03:48:16 np0005634017 auditd[718]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Rebuild Journal Catalog.
Feb 28 03:48:16 np0005634017 systemd[1]: Started RPC Bind.
Feb 28 03:48:16 np0005634017 augenrules[723]: /sbin/augenrules: No change
Feb 28 03:48:16 np0005634017 augenrules[738]: No rules
Feb 28 03:48:16 np0005634017 augenrules[738]: enabled 1
Feb 28 03:48:16 np0005634017 augenrules[738]: failure 1
Feb 28 03:48:16 np0005634017 augenrules[738]: pid 718
Feb 28 03:48:16 np0005634017 augenrules[738]: rate_limit 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_limit 8192
Feb 28 03:48:16 np0005634017 augenrules[738]: lost 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog 3
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_wait_time 60000
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_wait_time_actual 0
Feb 28 03:48:16 np0005634017 augenrules[738]: enabled 1
Feb 28 03:48:16 np0005634017 augenrules[738]: failure 1
Feb 28 03:48:16 np0005634017 augenrules[738]: pid 718
Feb 28 03:48:16 np0005634017 augenrules[738]: rate_limit 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_limit 8192
Feb 28 03:48:16 np0005634017 augenrules[738]: lost 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_wait_time 60000
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_wait_time_actual 0
Feb 28 03:48:16 np0005634017 augenrules[738]: enabled 1
Feb 28 03:48:16 np0005634017 augenrules[738]: failure 1
Feb 28 03:48:16 np0005634017 augenrules[738]: pid 718
Feb 28 03:48:16 np0005634017 augenrules[738]: rate_limit 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_limit 8192
Feb 28 03:48:16 np0005634017 augenrules[738]: lost 0
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog 3
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_wait_time 60000
Feb 28 03:48:16 np0005634017 augenrules[738]: backlog_wait_time_actual 0
Feb 28 03:48:16 np0005634017 systemd[1]: Started Security Auditing Service.
Feb 28 03:48:16 np0005634017 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 28 03:48:16 np0005634017 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 28 03:48:17 np0005634017 systemd[1]: Finished Rebuild Hardware Database.
Feb 28 03:48:17 np0005634017 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 28 03:48:17 np0005634017 systemd-udevd[746]: Using default interface naming scheme 'rhel-9.0'.
Feb 28 03:48:17 np0005634017 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 28 03:48:17 np0005634017 systemd[1]: Starting Load Kernel Module configfs...
Feb 28 03:48:17 np0005634017 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 28 03:48:17 np0005634017 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 28 03:48:17 np0005634017 systemd[1]: Finished Load Kernel Module configfs.
Feb 28 03:48:17 np0005634017 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 28 03:48:17 np0005634017 systemd-udevd[757]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 03:48:17 np0005634017 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 28 03:48:17 np0005634017 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 28 03:48:17 np0005634017 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 28 03:48:17 np0005634017 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 28 03:48:17 np0005634017 systemd[1]: Starting Update is Completed...
Feb 28 03:48:17 np0005634017 systemd[1]: Finished Update is Completed.
Feb 28 03:48:17 np0005634017 systemd[1]: Reached target System Initialization.
Feb 28 03:48:17 np0005634017 systemd[1]: Started dnf makecache --timer.
Feb 28 03:48:17 np0005634017 systemd[1]: Started Daily rotation of log files.
Feb 28 03:48:17 np0005634017 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 28 03:48:17 np0005634017 systemd[1]: Reached target Timer Units.
Feb 28 03:48:17 np0005634017 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 28 03:48:17 np0005634017 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 28 03:48:17 np0005634017 systemd[1]: Reached target Socket Units.
Feb 28 03:48:17 np0005634017 systemd[1]: Starting D-Bus System Message Bus...
Feb 28 03:48:17 np0005634017 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 28 03:48:17 np0005634017 systemd[1]: Started D-Bus System Message Bus.
Feb 28 03:48:17 np0005634017 systemd[1]: Reached target Basic System.
Feb 28 03:48:17 np0005634017 kernel: kvm_amd: TSC scaling supported
Feb 28 03:48:17 np0005634017 kernel: kvm_amd: Nested Virtualization enabled
Feb 28 03:48:17 np0005634017 kernel: kvm_amd: Nested Paging enabled
Feb 28 03:48:17 np0005634017 kernel: kvm_amd: LBR virtualization supported
Feb 28 03:48:17 np0005634017 dbus-broker-lau[803]: Ready
Feb 28 03:48:17 np0005634017 systemd[1]: Starting NTP client/server...
Feb 28 03:48:17 np0005634017 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 28 03:48:17 np0005634017 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 28 03:48:17 np0005634017 systemd[1]: Starting IPv4 firewall with iptables...
Feb 28 03:48:17 np0005634017 systemd[1]: Started irqbalance daemon.
Feb 28 03:48:17 np0005634017 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 28 03:48:17 np0005634017 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 28 03:48:17 np0005634017 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 28 03:48:17 np0005634017 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 28 03:48:17 np0005634017 systemd[1]: Reached target sshd-keygen.target.
Feb 28 03:48:17 np0005634017 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 28 03:48:17 np0005634017 systemd[1]: Reached target User and Group Name Lookups.
Feb 28 03:48:17 np0005634017 systemd[1]: Starting User Login Management...
Feb 28 03:48:17 np0005634017 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 28 03:48:17 np0005634017 systemd-logind[815]: New seat seat0.
Feb 28 03:48:17 np0005634017 systemd-logind[815]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 28 03:48:17 np0005634017 systemd-logind[815]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 28 03:48:17 np0005634017 systemd[1]: Started User Login Management.
Feb 28 03:48:17 np0005634017 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 28 03:48:17 np0005634017 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 28 03:48:17 np0005634017 chronyd[840]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 28 03:48:17 np0005634017 chronyd[840]: Loaded 0 symmetric keys
Feb 28 03:48:17 np0005634017 chronyd[840]: Using right/UTC timezone to obtain leap second data
Feb 28 03:48:17 np0005634017 chronyd[840]: Loaded seccomp filter (level 2)
Feb 28 03:48:17 np0005634017 systemd[1]: Started NTP client/server.
Feb 28 03:48:17 np0005634017 iptables.init[810]: iptables: Applying firewall rules: [  OK  ]
Feb 28 03:48:17 np0005634017 systemd[1]: Finished IPv4 firewall with iptables.
Feb 28 03:48:19 np0005634017 cloud-init[849]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 28 Feb 2026 08:48:19 +0000. Up 7.97 seconds.
Feb 28 03:48:19 np0005634017 systemd[1]: run-cloud\x2dinit-tmp-tmpqsscg15k.mount: Deactivated successfully.
Feb 28 03:48:19 np0005634017 systemd[1]: Starting Hostname Service...
Feb 28 03:48:19 np0005634017 systemd[1]: Started Hostname Service.
Feb 28 03:48:19 np0005634017 systemd-hostnamed[863]: Hostname set to <np0005634017.novalocal> (static)
Feb 28 03:48:20 np0005634017 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 28 03:48:20 np0005634017 systemd[1]: Reached target Preparation for Network.
Feb 28 03:48:20 np0005634017 systemd[1]: Starting Network Manager...
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.2022] NetworkManager (version 1.54.3-2.el9) is starting... (boot:04d13732-50b9-406b-a503-39e846989e82)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.2027] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.2925] manager[0x555b2d8c3000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.2978] hostname: hostname: using hostnamed
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.2979] hostname: static hostname changed from (none) to "np0005634017.novalocal"
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.2989] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3103] manager[0x555b2d8c3000]: rfkill: Wi-Fi hardware radio set enabled
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3104] manager[0x555b2d8c3000]: rfkill: WWAN hardware radio set enabled
Feb 28 03:48:20 np0005634017 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3424] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3424] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3425] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3425] manager: Networking is enabled by state file
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3427] settings: Loaded settings plugin: keyfile (internal)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3612] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3671] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3686] dhcp: init: Using DHCP client 'internal'
Feb 28 03:48:20 np0005634017 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3692] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3708] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3743] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3758] device (lo): Activation: starting connection 'lo' (22e8b1f5-c9dc-4bbf-8b00-9c310aff1f7b)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3766] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3768] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 03:48:20 np0005634017 systemd[1]: Started Network Manager.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3801] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3805] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3807] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3809] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3810] device (eth0): carrier: link connected
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3813] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3820] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3827] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3831] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3832] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3834] manager: NetworkManager state is now CONNECTING
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3835] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 03:48:20 np0005634017 systemd[1]: Reached target Network.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3845] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3848] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3884] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3892] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 28 03:48:20 np0005634017 systemd[1]: Starting Network Manager Wait Online...
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.3916] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 03:48:20 np0005634017 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 28 03:48:20 np0005634017 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4075] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4078] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4084] device (lo): Activation: successful, device activated.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4094] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4095] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4098] manager: NetworkManager state is now CONNECTED_SITE
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4101] device (eth0): Activation: successful, device activated.
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4107] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 28 03:48:20 np0005634017 NetworkManager[867]: <info>  [1772268500.4111] manager: startup complete
Feb 28 03:48:20 np0005634017 systemd[1]: Started GSSAPI Proxy Daemon.
Feb 28 03:48:20 np0005634017 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 28 03:48:20 np0005634017 systemd[1]: Reached target NFS client services.
Feb 28 03:48:20 np0005634017 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 28 03:48:20 np0005634017 systemd[1]: Reached target Remote File Systems.
Feb 28 03:48:20 np0005634017 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 28 03:48:20 np0005634017 systemd[1]: Finished Network Manager Wait Online.
Feb 28 03:48:20 np0005634017 systemd[1]: Starting Cloud-init: Network Stage...
Feb 28 03:48:20 np0005634017 cloud-init[934]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 28 Feb 2026 08:48:20 +0000. Up 9.29 seconds.
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |  eth0  | True |        38.102.83.158         | 255.255.255.0 | global | fa:16:3e:e2:67:7b |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |  eth0  | True | fe80::f816:3eff:fee2:677b/64 |       .       |  link  | fa:16:3e:e2:67:7b |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 28 03:48:20 np0005634017 cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 28 03:48:22 np0005634017 cloud-init[934]: Generating public/private rsa key pair.
Feb 28 03:48:22 np0005634017 cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 28 03:48:22 np0005634017 cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 28 03:48:22 np0005634017 cloud-init[934]: The key fingerprint is:
Feb 28 03:48:22 np0005634017 cloud-init[934]: SHA256:2mUaIn1G/wtteKirEP4+Tj4tejZgD53IfLUO2/iJfws root@np0005634017.novalocal
Feb 28 03:48:22 np0005634017 cloud-init[934]: The key's randomart image is:
Feb 28 03:48:22 np0005634017 cloud-init[934]: +---[RSA 3072]----+
Feb 28 03:48:22 np0005634017 cloud-init[934]: |                 |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |                 |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |        .        |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |     . ...       |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   oooooS.+      |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   .Bo=*.= =     |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   .o=oBE + =    |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |     =@.++.+ .   |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |    .*BO*o...    |
Feb 28 03:48:22 np0005634017 cloud-init[934]: +----[SHA256]-----+
Feb 28 03:48:22 np0005634017 cloud-init[934]: Generating public/private ecdsa key pair.
Feb 28 03:48:22 np0005634017 cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 28 03:48:22 np0005634017 cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 28 03:48:22 np0005634017 cloud-init[934]: The key fingerprint is:
Feb 28 03:48:22 np0005634017 cloud-init[934]: SHA256:Lzm7nmktFqlR2Tig5doSsye0n66yzJN20niWQxRUcYk root@np0005634017.novalocal
Feb 28 03:48:22 np0005634017 cloud-init[934]: The key's randomart image is:
Feb 28 03:48:22 np0005634017 cloud-init[934]: +---[ECDSA 256]---+
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   ...oo..       |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |    . E..        |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |     = . +       |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |    * . = .      |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   o B .So       |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |    B + oo       |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   = * ++o.      |
Feb 28 03:48:22 np0005634017 cloud-init[934]: | oB B + +*.      |
Feb 28 03:48:22 np0005634017 cloud-init[934]: | .+O.+.+*o       |
Feb 28 03:48:22 np0005634017 cloud-init[934]: +----[SHA256]-----+
Feb 28 03:48:22 np0005634017 cloud-init[934]: Generating public/private ed25519 key pair.
Feb 28 03:48:22 np0005634017 cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 28 03:48:22 np0005634017 cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 28 03:48:22 np0005634017 cloud-init[934]: The key fingerprint is:
Feb 28 03:48:22 np0005634017 cloud-init[934]: SHA256:aPROQv6QXotKin0aE1JVpruF3z8G31aL6OL6B/bZGc8 root@np0005634017.novalocal
Feb 28 03:48:22 np0005634017 cloud-init[934]: The key's randomart image is:
Feb 28 03:48:22 np0005634017 cloud-init[934]: +--[ED25519 256]--+
Feb 28 03:48:22 np0005634017 cloud-init[934]: |    ..o          |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |   . o           |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |  . . o          |
Feb 28 03:48:22 np0005634017 cloud-init[934]: | .   * +         |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |. . o O S        |
Feb 28 03:48:22 np0005634017 cloud-init[934]: | . . * X+.  . .  |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |  o o +.== = B . |
Feb 28 03:48:22 np0005634017 cloud-init[934]: | o =..  ..O * E  |
Feb 28 03:48:22 np0005634017 cloud-init[934]: |. +o. .+o=oo     |
Feb 28 03:48:22 np0005634017 cloud-init[934]: +----[SHA256]-----+
Feb 28 03:48:22 np0005634017 sm-notify[1016]: Version 2.5.4 starting
Feb 28 03:48:22 np0005634017 systemd[1]: Finished Cloud-init: Network Stage.
Feb 28 03:48:22 np0005634017 systemd[1]: Reached target Cloud-config availability.
Feb 28 03:48:22 np0005634017 systemd[1]: Reached target Network is Online.
Feb 28 03:48:22 np0005634017 systemd[1]: Starting Cloud-init: Config Stage...
Feb 28 03:48:22 np0005634017 systemd[1]: Starting Crash recovery kernel arming...
Feb 28 03:48:22 np0005634017 systemd[1]: Starting Notify NFS peers of a restart...
Feb 28 03:48:22 np0005634017 systemd[1]: Starting System Logging Service...
Feb 28 03:48:22 np0005634017 systemd[1]: Starting OpenSSH server daemon...
Feb 28 03:48:22 np0005634017 systemd[1]: Starting Permit User Sessions...
Feb 28 03:48:22 np0005634017 systemd[1]: Started Notify NFS peers of a restart.
Feb 28 03:48:22 np0005634017 systemd[1]: Finished Permit User Sessions.
Feb 28 03:48:22 np0005634017 systemd[1]: Started OpenSSH server daemon.
Feb 28 03:48:22 np0005634017 systemd[1]: Started Command Scheduler.
Feb 28 03:48:22 np0005634017 systemd[1]: Started Getty on tty1.
Feb 28 03:48:22 np0005634017 systemd[1]: Started Serial Getty on ttyS0.
Feb 28 03:48:22 np0005634017 systemd[1]: Reached target Login Prompts.
Feb 28 03:48:22 np0005634017 rsyslogd[1017]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1017" x-info="https://www.rsyslog.com"] start
Feb 28 03:48:22 np0005634017 systemd[1]: Started System Logging Service.
Feb 28 03:48:22 np0005634017 rsyslogd[1017]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 28 03:48:22 np0005634017 systemd[1]: Reached target Multi-User System.
Feb 28 03:48:22 np0005634017 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 28 03:48:22 np0005634017 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 28 03:48:22 np0005634017 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 28 03:48:22 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 03:48:22 np0005634017 cloud-init[1146]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 28 Feb 2026 08:48:22 +0000. Up 11.04 seconds.
Feb 28 03:48:22 np0005634017 kdumpctl[1031]: kdump: No kdump initial ramdisk found.
Feb 28 03:48:22 np0005634017 kdumpctl[1031]: kdump: Rebuilding /boot/initramfs-5.14.0-686.el9.x86_64kdump.img
Feb 28 03:48:22 np0005634017 systemd[1]: Finished Cloud-init: Config Stage.
Feb 28 03:48:22 np0005634017 systemd[1]: Starting Cloud-init: Final Stage...
Feb 28 03:48:22 np0005634017 cloud-init[1464]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 28 Feb 2026 08:48:22 +0000. Up 11.47 seconds.
Feb 28 03:48:23 np0005634017 cloud-init[1497]: #############################################################
Feb 28 03:48:23 np0005634017 cloud-init[1500]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 28 03:48:23 np0005634017 cloud-init[1507]: 256 SHA256:Lzm7nmktFqlR2Tig5doSsye0n66yzJN20niWQxRUcYk root@np0005634017.novalocal (ECDSA)
Feb 28 03:48:23 np0005634017 cloud-init[1511]: 256 SHA256:aPROQv6QXotKin0aE1JVpruF3z8G31aL6OL6B/bZGc8 root@np0005634017.novalocal (ED25519)
Feb 28 03:48:23 np0005634017 cloud-init[1515]: 3072 SHA256:2mUaIn1G/wtteKirEP4+Tj4tejZgD53IfLUO2/iJfws root@np0005634017.novalocal (RSA)
Feb 28 03:48:23 np0005634017 cloud-init[1516]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 28 03:48:23 np0005634017 cloud-init[1517]: #############################################################
Feb 28 03:48:23 np0005634017 cloud-init[1464]: Cloud-init v. 24.4-8.el9 finished at Sat, 28 Feb 2026 08:48:23 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.62 seconds
Feb 28 03:48:23 np0005634017 dracut[1524]: dracut-057-110.git20260130.el9
Feb 28 03:48:23 np0005634017 systemd[1]: Finished Cloud-init: Final Stage.
Feb 28 03:48:23 np0005634017 systemd[1]: Reached target Cloud-init target.
Feb 28 03:48:23 np0005634017 dracut[1526]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-686.el9.x86_64kdump.img 5.14.0-686.el9.x86_64
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 28 03:48:23 np0005634017 dracut[1526]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: memstrack is not available
Feb 28 03:48:24 np0005634017 dracut[1526]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 28 03:48:24 np0005634017 dracut[1526]: memstrack is not available
Feb 28 03:48:24 np0005634017 dracut[1526]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 28 03:48:24 np0005634017 dracut[1526]: *** Including module: systemd ***
Feb 28 03:48:24 np0005634017 dracut[1526]: *** Including module: fips ***
Feb 28 03:48:25 np0005634017 dracut[1526]: *** Including module: systemd-initrd ***
Feb 28 03:48:25 np0005634017 dracut[1526]: *** Including module: i18n ***
Feb 28 03:48:25 np0005634017 dracut[1526]: *** Including module: drm ***
Feb 28 03:48:25 np0005634017 chronyd[840]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Feb 28 03:48:26 np0005634017 chronyd[840]: System clock wrong by 1.184725 seconds
Feb 28 03:48:26 np0005634017 chronyd[840]: System clock was stepped by 1.184725 seconds
Feb 28 03:48:26 np0005634017 chronyd[840]: System clock TAI offset set to 37 seconds
Feb 28 03:48:26 np0005634017 dracut[1526]: *** Including module: prefixdevname ***
Feb 28 03:48:26 np0005634017 dracut[1526]: *** Including module: kernel-modules ***
Feb 28 03:48:26 np0005634017 kernel: block vda: the capability attribute has been deprecated.
Feb 28 03:48:27 np0005634017 dracut[1526]: *** Including module: kernel-modules-extra ***
Feb 28 03:48:27 np0005634017 dracut[1526]: *** Including module: qemu ***
Feb 28 03:48:27 np0005634017 dracut[1526]: *** Including module: fstab-sys ***
Feb 28 03:48:27 np0005634017 dracut[1526]: *** Including module: rootfs-block ***
Feb 28 03:48:27 np0005634017 dracut[1526]: *** Including module: terminfo ***
Feb 28 03:48:27 np0005634017 dracut[1526]: *** Including module: udev-rules ***
Feb 28 03:48:28 np0005634017 dracut[1526]: Skipping udev rule: 91-permissions.rules
Feb 28 03:48:28 np0005634017 dracut[1526]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: virtiofs ***
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: dracut-systemd ***
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: usrmount ***
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: base ***
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: fs-lib ***
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: kdumpbase ***
Feb 28 03:48:28 np0005634017 dracut[1526]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 28 03:48:28 np0005634017 dracut[1526]:  microcode_ctl module: mangling fw_dir
Feb 28 03:48:28 np0005634017 dracut[1526]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 28 03:48:28 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 28 03:48:28 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel" is ignored
Feb 28 03:48:28 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 28 03:48:29 np0005634017 dracut[1526]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 28 03:48:29 np0005634017 dracut[1526]: *** Including module: openssl ***
Feb 28 03:48:29 np0005634017 dracut[1526]: *** Including module: shutdown ***
Feb 28 03:48:29 np0005634017 dracut[1526]: *** Including module: squash ***
Feb 28 03:48:29 np0005634017 dracut[1526]: *** Including modules done ***
Feb 28 03:48:29 np0005634017 dracut[1526]: *** Installing kernel module dependencies ***
Feb 28 03:48:29 np0005634017 irqbalance[811]: Cannot change IRQ 25 affinity: Operation not permitted
Feb 28 03:48:29 np0005634017 irqbalance[811]: IRQ 25 affinity is now unmanaged
Feb 28 03:48:29 np0005634017 irqbalance[811]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 28 03:48:29 np0005634017 irqbalance[811]: IRQ 31 affinity is now unmanaged
Feb 28 03:48:29 np0005634017 irqbalance[811]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 28 03:48:29 np0005634017 irqbalance[811]: IRQ 28 affinity is now unmanaged
Feb 28 03:48:29 np0005634017 irqbalance[811]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 28 03:48:29 np0005634017 irqbalance[811]: IRQ 32 affinity is now unmanaged
Feb 28 03:48:29 np0005634017 irqbalance[811]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 28 03:48:29 np0005634017 irqbalance[811]: IRQ 30 affinity is now unmanaged
Feb 28 03:48:29 np0005634017 irqbalance[811]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 28 03:48:29 np0005634017 irqbalance[811]: IRQ 29 affinity is now unmanaged
Feb 28 03:48:30 np0005634017 dracut[1526]: *** Installing kernel module dependencies done ***
Feb 28 03:48:30 np0005634017 dracut[1526]: *** Resolving executable dependencies ***
Feb 28 03:48:31 np0005634017 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 28 03:48:31 np0005634017 dracut[1526]: *** Resolving executable dependencies done ***
Feb 28 03:48:31 np0005634017 dracut[1526]: *** Generating early-microcode cpio image ***
Feb 28 03:48:31 np0005634017 dracut[1526]: *** Store current command line parameters ***
Feb 28 03:48:31 np0005634017 dracut[1526]: Stored kernel commandline:
Feb 28 03:48:31 np0005634017 dracut[1526]: No dracut internal kernel commandline stored in the initramfs
Feb 28 03:48:32 np0005634017 dracut[1526]: *** Install squash loader ***
Feb 28 03:48:32 np0005634017 dracut[1526]: *** Squashing the files inside the initramfs ***
Feb 28 03:48:33 np0005634017 dracut[1526]: *** Squashing the files inside the initramfs done ***
Feb 28 03:48:33 np0005634017 dracut[1526]: *** Creating image file '/boot/initramfs-5.14.0-686.el9.x86_64kdump.img' ***
Feb 28 03:48:33 np0005634017 dracut[1526]: *** Hardlinking files ***
Feb 28 03:48:33 np0005634017 dracut[1526]: *** Hardlinking files done ***
Feb 28 03:48:34 np0005634017 dracut[1526]: *** Creating initramfs image file '/boot/initramfs-5.14.0-686.el9.x86_64kdump.img' done ***
Feb 28 03:48:35 np0005634017 kdumpctl[1031]: kdump: kexec: loaded kdump kernel
Feb 28 03:48:35 np0005634017 kdumpctl[1031]: kdump: Starting kdump: [OK]
Feb 28 03:48:35 np0005634017 systemd[1]: Finished Crash recovery kernel arming.
Feb 28 03:48:35 np0005634017 systemd[1]: Startup finished in 1.414s (kernel) + 2.428s (initrd) + 18.521s (userspace) = 22.364s.
Feb 28 03:48:51 np0005634017 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 28 03:56:14 np0005634017 systemd-logind[815]: New session 1 of user zuul.
Feb 28 03:56:14 np0005634017 systemd[1]: Created slice User Slice of UID 1000.
Feb 28 03:56:14 np0005634017 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 28 03:56:14 np0005634017 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 28 03:56:14 np0005634017 systemd[1]: Starting User Manager for UID 1000...
Feb 28 03:56:14 np0005634017 systemd[4802]: Queued start job for default target Main User Target.
Feb 28 03:56:14 np0005634017 systemd[4802]: Created slice User Application Slice.
Feb 28 03:56:14 np0005634017 systemd[4802]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 28 03:56:14 np0005634017 systemd[4802]: Started Daily Cleanup of User's Temporary Directories.
Feb 28 03:56:14 np0005634017 systemd[4802]: Reached target Paths.
Feb 28 03:56:14 np0005634017 systemd[4802]: Reached target Timers.
Feb 28 03:56:14 np0005634017 systemd[4802]: Starting D-Bus User Message Bus Socket...
Feb 28 03:56:14 np0005634017 systemd[4802]: Starting Create User's Volatile Files and Directories...
Feb 28 03:56:14 np0005634017 systemd[4802]: Finished Create User's Volatile Files and Directories.
Feb 28 03:56:14 np0005634017 systemd[4802]: Listening on D-Bus User Message Bus Socket.
Feb 28 03:56:14 np0005634017 systemd[4802]: Reached target Sockets.
Feb 28 03:56:14 np0005634017 systemd[4802]: Reached target Basic System.
Feb 28 03:56:14 np0005634017 systemd[4802]: Reached target Main User Target.
Feb 28 03:56:14 np0005634017 systemd[4802]: Startup finished in 117ms.
Feb 28 03:56:14 np0005634017 systemd[1]: Started User Manager for UID 1000.
Feb 28 03:56:14 np0005634017 systemd[1]: Started Session 1 of User zuul.
Feb 28 03:56:14 np0005634017 python3[4886]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 03:56:17 np0005634017 python3[4914]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 03:56:25 np0005634017 python3[4973]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 03:56:25 np0005634017 python3[5013]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 28 03:56:27 np0005634017 python3[5039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDS27HQgXYtPKnD5JCzP6dMq3FdZL0QbpQRTafjd1D/Oy8wQ5oXZeD90nUPnWHzbdqpe/7pQCibvm1U1vdfZcmSY9I11/7Tv5Dv4x455MtKMzja4X6q6eA/mcv/KD1OhDwl+CsACoLXzCHtgeoBF39CgAJBMoxIdzLRVc4uU0JkkFPb4vNDCaD+O3pse9EkYC6ZChhBCmZOYn+FbjToQVKIJ3VVM0rtIyIaJKu4kQWyEfCxQ+eHPgt9SVWmxDYR0fOfjg23LpuZnYQI0d1X4nRzy1+B1nN0ScAZQ7aHHoMv16Z4OD+J6NTImTR9XssSX4A2uxmA7eKo1VLLkb8Ijqj3aI9Ytp4GB8zqcisyZVTiJcwlgBZ8Ox9IGVRfUQKJlR4l6ccZg+yhRd/65VRQhDhSHIw7jCgpunDGW9sNyrCfZ55ZScQZVccriojb6lT7+E/LY9aRg+gilGTdrhE/0PA+Qan76XqrZC5CRKbcD4u876o3LdEASEKdVzi7KJmopMM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:28 np0005634017 python3[5063]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:28 np0005634017 python3[5162]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:29 np0005634017 python3[5233]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772268988.4687095-207-213081805987634/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a03658c4a3564828bc76d83cf72c40a0_id_rsa follow=False checksum=91928fd8febc4c7da85cec5c1114232130c2ef2c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:29 np0005634017 python3[5356]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:30 np0005634017 python3[5427]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772268989.4360273-240-163159325976404/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a03658c4a3564828bc76d83cf72c40a0_id_rsa.pub follow=False checksum=b43e615b37ccb83581e4ebbe64a502b08413175b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:31 np0005634017 python3[5475]: ansible-ping Invoked with data=pong
Feb 28 03:56:32 np0005634017 python3[5499]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 03:56:34 np0005634017 python3[5557]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 28 03:56:35 np0005634017 python3[5589]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:35 np0005634017 python3[5613]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:36 np0005634017 python3[5637]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:36 np0005634017 python3[5661]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:36 np0005634017 python3[5685]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:36 np0005634017 python3[5709]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:38 np0005634017 python3[5735]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:39 np0005634017 python3[5813]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:40 np0005634017 python3[5886]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1772268999.0611944-21-198137154862003/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:40 np0005634017 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:40 np0005634017 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:41 np0005634017 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:41 np0005634017 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:41 np0005634017 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:42 np0005634017 python3[6054]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:42 np0005634017 python3[6078]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:42 np0005634017 python3[6102]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:42 np0005634017 python3[6126]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:43 np0005634017 python3[6150]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:43 np0005634017 python3[6174]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:43 np0005634017 python3[6198]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:43 np0005634017 python3[6222]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:44 np0005634017 python3[6246]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:44 np0005634017 python3[6270]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:44 np0005634017 python3[6294]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:44 np0005634017 python3[6318]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:45 np0005634017 python3[6342]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:45 np0005634017 python3[6366]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:45 np0005634017 python3[6390]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:45 np0005634017 python3[6414]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:46 np0005634017 python3[6438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:46 np0005634017 python3[6462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:46 np0005634017 python3[6486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:46 np0005634017 python3[6510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:47 np0005634017 python3[6534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 03:56:51 np0005634017 python3[6560]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 28 03:56:51 np0005634017 systemd[1]: Starting Time & Date Service...
Feb 28 03:56:51 np0005634017 systemd[1]: Started Time & Date Service.
Feb 28 03:56:51 np0005634017 systemd-timedated[6562]: Changed time zone to 'UTC' (UTC).
Feb 28 03:56:51 np0005634017 python3[6591]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:51 np0005634017 python3[6667]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:52 np0005634017 python3[6738]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1772269011.729286-153-14995172548913/source _original_basename=tmphfn62613 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:52 np0005634017 python3[6838]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:53 np0005634017 python3[6909]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1772269012.6285522-183-269483236874825/source _original_basename=tmpt12woo05 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:54 np0005634017 python3[7011]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:54 np0005634017 python3[7084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1772269013.80091-231-148280664294721/source _original_basename=tmpppgjq6yo follow=False checksum=5af11a2484d4a32bfd779dd7279c8c1bc46ad659 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:55 np0005634017 python3[7132]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 03:56:55 np0005634017 python3[7158]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 03:56:55 np0005634017 python3[7238]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:56:56 np0005634017 python3[7311]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1772269015.5188522-273-263611995835714/source _original_basename=tmp7krc1_le follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:56:56 np0005634017 python3[7362]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-389e-7a79-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 03:56:57 np0005634017 python3[7390]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-389e-7a79-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 28 03:56:58 np0005634017 python3[7419]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:57:15 np0005634017 python3[7445]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:57:21 np0005634017 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 28 03:57:48 np0005634017 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 28 03:57:48 np0005634017 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7149] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 28 03:57:48 np0005634017 systemd-udevd[7448]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7389] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7414] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7418] device (eth1): carrier: link connected
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7419] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7425] policy: auto-activating connection 'Wired connection 1' (64ae6edf-590f-3630-9095-1a6a00a971a6)
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7429] device (eth1): Activation: starting connection 'Wired connection 1' (64ae6edf-590f-3630-9095-1a6a00a971a6)
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7431] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7434] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7438] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 03:57:48 np0005634017 NetworkManager[867]: <info>  [1772269068.7442] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 28 03:57:49 np0005634017 python3[7475]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-44b0-4c63-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 03:57:59 np0005634017 python3[7555]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:58:00 np0005634017 python3[7628]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772269079.527654-102-117913176290005/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7a7dca897a4c3c0ce77aaded7f34b543f150da18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:58:00 np0005634017 python3[7678]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 03:58:01 np0005634017 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 28 03:58:01 np0005634017 systemd[1]: Stopped Network Manager Wait Online.
Feb 28 03:58:01 np0005634017 systemd[1]: Stopping Network Manager Wait Online...
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0202] caught SIGTERM, shutting down normally.
Feb 28 03:58:01 np0005634017 systemd[1]: Stopping Network Manager...
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0207] dhcp4 (eth0): canceled DHCP transaction
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0208] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0208] dhcp4 (eth0): state changed no lease
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0210] manager: NetworkManager state is now CONNECTING
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0381] dhcp4 (eth1): canceled DHCP transaction
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0381] dhcp4 (eth1): state changed no lease
Feb 28 03:58:01 np0005634017 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 28 03:58:01 np0005634017 NetworkManager[867]: <info>  [1772269081.0425] exiting (success)
Feb 28 03:58:01 np0005634017 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 28 03:58:01 np0005634017 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 28 03:58:01 np0005634017 systemd[1]: Stopped Network Manager.
Feb 28 03:58:01 np0005634017 systemd[1]: NetworkManager.service: Consumed 3.501s CPU time, 10.0M memory peak.
Feb 28 03:58:01 np0005634017 systemd[1]: Starting Network Manager...
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.0826] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:04d13732-50b9-406b-a503-39e846989e82)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.0831] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.0883] manager[0x55ea2c494000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 28 03:58:01 np0005634017 systemd[1]: Starting Hostname Service...
Feb 28 03:58:01 np0005634017 systemd[1]: Started Hostname Service.
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1731] hostname: hostname: using hostnamed
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1732] hostname: static hostname changed from (none) to "np0005634017.novalocal"
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1740] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1747] manager[0x55ea2c494000]: rfkill: Wi-Fi hardware radio set enabled
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1748] manager[0x55ea2c494000]: rfkill: WWAN hardware radio set enabled
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1798] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1798] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1799] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1800] manager: Networking is enabled by state file
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1805] settings: Loaded settings plugin: keyfile (internal)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1812] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1854] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1869] dhcp: init: Using DHCP client 'internal'
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1875] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1883] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1891] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1906] device (lo): Activation: starting connection 'lo' (22e8b1f5-c9dc-4bbf-8b00-9c310aff1f7b)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1916] device (eth0): carrier: link connected
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1922] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1931] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1931] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1944] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1957] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1965] device (eth1): carrier: link connected
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1971] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1981] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (64ae6edf-590f-3630-9095-1a6a00a971a6) (indicated)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1982] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.1991] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2003] device (eth1): Activation: starting connection 'Wired connection 1' (64ae6edf-590f-3630-9095-1a6a00a971a6)
Feb 28 03:58:01 np0005634017 systemd[1]: Started Network Manager.
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2014] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2027] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2031] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2033] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2036] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2039] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2043] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2048] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2053] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2066] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2070] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2081] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2085] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2108] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2119] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2128] device (lo): Activation: successful, device activated.
Feb 28 03:58:01 np0005634017 systemd[1]: Starting Network Manager Wait Online...
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2141] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2152] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2225] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2252] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2255] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2260] manager: NetworkManager state is now CONNECTED_SITE
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2264] device (eth0): Activation: successful, device activated.
Feb 28 03:58:01 np0005634017 NetworkManager[7689]: <info>  [1772269081.2275] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 28 03:58:01 np0005634017 python3[7762]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-44b0-4c63-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 03:58:09 np0005634017 irqbalance[811]: Cannot change IRQ 26 affinity: Operation not permitted
Feb 28 03:58:09 np0005634017 irqbalance[811]: IRQ 26 affinity is now unmanaged
Feb 28 03:58:11 np0005634017 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 28 03:58:31 np0005634017 systemd[4802]: Starting Mark boot as successful...
Feb 28 03:58:31 np0005634017 systemd[4802]: Finished Mark boot as successful.
Feb 28 03:58:31 np0005634017 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6587] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 28 03:58:46 np0005634017 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 28 03:58:46 np0005634017 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6877] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6879] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6885] device (eth1): Activation: successful, device activated.
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6891] manager: startup complete
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6895] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <warn>  [1772269126.6900] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.6909] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 systemd[1]: Finished Network Manager Wait Online.
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7081] dhcp4 (eth1): canceled DHCP transaction
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7083] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7083] dhcp4 (eth1): state changed no lease
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7105] policy: auto-activating connection 'ci-private-network' (d1a8d015-6336-5169-989a-3781f3c5fc10)
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7113] device (eth1): Activation: starting connection 'ci-private-network' (d1a8d015-6336-5169-989a-3781f3c5fc10)
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7115] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7120] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7127] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7136] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7250] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7254] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 03:58:46 np0005634017 NetworkManager[7689]: <info>  [1772269126.7260] device (eth1): Activation: successful, device activated.
Feb 28 03:58:56 np0005634017 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 28 03:58:58 np0005634017 python3[7871]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 03:58:59 np0005634017 python3[7944]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772269138.4661179-267-21209967193380/source _original_basename=tmp0n0hwmlp follow=False checksum=85e6a753e711efe521de5223ff170cbe782f2bf6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 03:59:59 np0005634017 systemd-logind[815]: Session 1 logged out. Waiting for processes to exit.
Feb 28 04:01:31 np0005634017 systemd[4802]: Created slice User Background Tasks Slice.
Feb 28 04:01:31 np0005634017 systemd[4802]: Starting Cleanup of User's Temporary Files and Directories...
Feb 28 04:01:31 np0005634017 systemd[4802]: Finished Cleanup of User's Temporary Files and Directories.
Feb 28 04:03:31 np0005634017 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 28 04:03:31 np0005634017 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 28 04:03:31 np0005634017 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 28 04:03:31 np0005634017 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 28 04:08:41 np0005634017 systemd-logind[815]: New session 3 of user zuul.
Feb 28 04:08:41 np0005634017 systemd[1]: Started Session 3 of User zuul.
Feb 28 04:08:41 np0005634017 python3[8021]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-0f63-a032-0000000021be-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:08:42 np0005634017 python3[8050]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:08:42 np0005634017 python3[8076]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:08:42 np0005634017 python3[8102]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:08:42 np0005634017 python3[8128]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:08:43 np0005634017 python3[8154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:08:43 np0005634017 python3[8232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:08:44 np0005634017 python3[8305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772269723.6241927-530-34233765589144/source _original_basename=tmpmdpn84t8 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:08:45 np0005634017 python3[8355]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:08:45 np0005634017 systemd[1]: Reloading.
Feb 28 04:08:45 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:08:46 np0005634017 python3[8418]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 28 04:08:47 np0005634017 python3[8444]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:08:47 np0005634017 python3[8472]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:08:48 np0005634017 python3[8501]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:08:48 np0005634017 python3[8529]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:08:48 np0005634017 python3[8556]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-0f63-a032-0000000021c5-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:08:49 np0005634017 python3[8586]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:08:51 np0005634017 systemd[1]: session-3.scope: Deactivated successfully.
Feb 28 04:08:51 np0005634017 systemd[1]: session-3.scope: Consumed 4.080s CPU time.
Feb 28 04:08:51 np0005634017 systemd-logind[815]: Session 3 logged out. Waiting for processes to exit.
Feb 28 04:08:51 np0005634017 systemd-logind[815]: Removed session 3.
Feb 28 04:08:53 np0005634017 systemd-logind[815]: New session 4 of user zuul.
Feb 28 04:08:53 np0005634017 systemd[1]: Started Session 4 of User zuul.
Feb 28 04:08:53 np0005634017 python3[8620]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:09:00 np0005634017 setsebool[8659]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 28 04:09:00 np0005634017 setsebool[8659]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 28 04:09:11 np0005634017 kernel: SELinux:  Converting 386 SID table entries...
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:09:11 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:09:20 np0005634017 kernel: SELinux:  Converting 389 SID table entries...
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:09:20 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:09:38 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 28 04:09:39 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:09:39 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:09:39 np0005634017 systemd[1]: Reloading.
Feb 28 04:09:39 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:09:39 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:09:40 np0005634017 python3[11275]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-fd80-68e8-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:09:41 np0005634017 kernel: evm: overlay not supported
Feb 28 04:09:41 np0005634017 systemd[4802]: Starting D-Bus User Message Bus...
Feb 28 04:09:41 np0005634017 dbus-broker-launch[12758]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 28 04:09:41 np0005634017 dbus-broker-launch[12758]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 28 04:09:41 np0005634017 systemd[4802]: Started D-Bus User Message Bus.
Feb 28 04:09:41 np0005634017 dbus-broker-lau[12758]: Ready
Feb 28 04:09:41 np0005634017 systemd[4802]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 28 04:09:41 np0005634017 systemd[4802]: Created slice Slice /user.
Feb 28 04:09:41 np0005634017 systemd[4802]: podman-12613.scope: unit configures an IP firewall, but not running as root.
Feb 28 04:09:41 np0005634017 systemd[4802]: (This warning is only shown for the first unit using IP firewalling.)
Feb 28 04:09:41 np0005634017 systemd[4802]: Started podman-12613.scope.
Feb 28 04:09:41 np0005634017 systemd[4802]: Started podman-pause-0018cb66.scope.
Feb 28 04:09:42 np0005634017 python3[13699]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.103:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.103:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:09:42 np0005634017 python3[13699]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 28 04:09:43 np0005634017 systemd[1]: session-4.scope: Deactivated successfully.
Feb 28 04:09:43 np0005634017 systemd[1]: session-4.scope: Consumed 42.523s CPU time.
Feb 28 04:09:43 np0005634017 systemd-logind[815]: Session 4 logged out. Waiting for processes to exit.
Feb 28 04:09:43 np0005634017 systemd-logind[815]: Removed session 4.
Feb 28 04:09:50 np0005634017 irqbalance[811]: Cannot change IRQ 27 affinity: Operation not permitted
Feb 28 04:09:50 np0005634017 irqbalance[811]: IRQ 27 affinity is now unmanaged
Feb 28 04:10:08 np0005634017 systemd-logind[815]: New session 5 of user zuul.
Feb 28 04:10:08 np0005634017 systemd[1]: Started Session 5 of User zuul.
Feb 28 04:10:09 np0005634017 python3[27884]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWj5PCPtlP+MkPD6j0giFxRRN/QZ4pZaeWI85EgPgnAL1MBdHer30p4E8GyiNkHDpxYAgJx1tYyZ4MNoPksnN8= zuul@np0005634016.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 04:10:09 np0005634017 python3[28073]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWj5PCPtlP+MkPD6j0giFxRRN/QZ4pZaeWI85EgPgnAL1MBdHer30p4E8GyiNkHDpxYAgJx1tYyZ4MNoPksnN8= zuul@np0005634016.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 04:10:10 np0005634017 python3[28503]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005634017.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 28 04:10:10 np0005634017 python3[28718]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWj5PCPtlP+MkPD6j0giFxRRN/QZ4pZaeWI85EgPgnAL1MBdHer30p4E8GyiNkHDpxYAgJx1tYyZ4MNoPksnN8= zuul@np0005634016.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 28 04:10:11 np0005634017 python3[29009]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:10:11 np0005634017 python3[29323]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1772269810.9793844-139-272976715779526/source _original_basename=tmp0heiaovl follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:10:12 np0005634017 python3[29700]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 28 04:10:12 np0005634017 systemd[1]: Starting Hostname Service...
Feb 28 04:10:12 np0005634017 systemd[1]: Started Hostname Service.
Feb 28 04:10:12 np0005634017 systemd-hostnamed[29794]: Changed pretty hostname to 'compute-0'
Feb 28 04:10:12 np0005634017 systemd-hostnamed[29794]: Hostname set to <compute-0> (static)
Feb 28 04:10:12 np0005634017 NetworkManager[7689]: <info>  [1772269812.7268] hostname: static hostname changed from "np0005634017.novalocal" to "compute-0"
Feb 28 04:10:12 np0005634017 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 28 04:10:12 np0005634017 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 28 04:10:13 np0005634017 systemd[1]: session-5.scope: Deactivated successfully.
Feb 28 04:10:13 np0005634017 systemd[1]: session-5.scope: Consumed 2.192s CPU time.
Feb 28 04:10:13 np0005634017 systemd-logind[815]: Session 5 logged out. Waiting for processes to exit.
Feb 28 04:10:13 np0005634017 systemd-logind[815]: Removed session 5.
Feb 28 04:10:14 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:10:14 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:10:14 np0005634017 systemd[1]: man-db-cache-update.service: Consumed 38.635s CPU time.
Feb 28 04:10:14 np0005634017 systemd[1]: run-rb45b4e1bc03d4fd395438e4edc2463d2.service: Deactivated successfully.
Feb 28 04:10:22 np0005634017 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 28 04:10:42 np0005634017 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 28 04:13:43 np0005634017 systemd-logind[815]: New session 6 of user zuul.
Feb 28 04:13:43 np0005634017 systemd[1]: Started Session 6 of User zuul.
Feb 28 04:13:43 np0005634017 python3[30641]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:13:45 np0005634017 python3[30757]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:45 np0005634017 python3[30830]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=delorean.repo follow=False checksum=c7624fe5e858d4139de1ac159778eb6fd097c2ca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:13:45 np0005634017 python3[30856]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:46 np0005634017 python3[30929]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:13:46 np0005634017 python3[30955]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:46 np0005634017 python3[31028]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:13:46 np0005634017 python3[31054]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:47 np0005634017 python3[31127]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:13:47 np0005634017 python3[31153]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:47 np0005634017 python3[31226]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:13:48 np0005634017 python3[31252]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:48 np0005634017 python3[31325]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:13:48 np0005634017 python3[31351]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:13:48 np0005634017 python3[31424]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772270025.0261135-34405-263251614309174/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=06a0a916cb7cbc51b08d6616a672f1322305cccf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:14:01 np0005634017 python3[31482]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:19:01 np0005634017 systemd[1]: session-6.scope: Deactivated successfully.
Feb 28 04:19:01 np0005634017 systemd[1]: session-6.scope: Consumed 4.296s CPU time.
Feb 28 04:19:01 np0005634017 systemd-logind[815]: Session 6 logged out. Waiting for processes to exit.
Feb 28 04:19:01 np0005634017 systemd-logind[815]: Removed session 6.
Feb 28 04:25:09 np0005634017 systemd-logind[815]: New session 7 of user zuul.
Feb 28 04:25:09 np0005634017 systemd[1]: Started Session 7 of User zuul.
Feb 28 04:25:10 np0005634017 python3.9[31671]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:25:11 np0005634017 python3.9[31853]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:25:18 np0005634017 systemd[1]: session-7.scope: Deactivated successfully.
Feb 28 04:25:18 np0005634017 systemd[1]: session-7.scope: Consumed 7.149s CPU time.
Feb 28 04:25:18 np0005634017 systemd-logind[815]: Session 7 logged out. Waiting for processes to exit.
Feb 28 04:25:18 np0005634017 systemd-logind[815]: Removed session 7.
Feb 28 04:25:34 np0005634017 systemd-logind[815]: New session 8 of user zuul.
Feb 28 04:25:34 np0005634017 systemd[1]: Started Session 8 of User zuul.
Feb 28 04:25:35 np0005634017 python3.9[32066]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 28 04:25:36 np0005634017 python3.9[32240]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:25:36 np0005634017 python3.9[32393]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:25:37 np0005634017 python3.9[32547]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:25:38 np0005634017 python3.9[32700]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:25:39 np0005634017 python3.9[32853]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:25:39 np0005634017 python3.9[32977]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1772270738.7525902-68-145671960012718/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:25:40 np0005634017 python3.9[33130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:25:41 np0005634017 python3.9[33287]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:25:42 np0005634017 python3.9[33440]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:25:42 np0005634017 python3.9[33590]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:25:45 np0005634017 python3.9[33844]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:25:46 np0005634017 python3.9[33994]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:25:47 np0005634017 python3.9[34148]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:25:48 np0005634017 python3.9[34307]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:25:49 np0005634017 python3.9[34392]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:26:34 np0005634017 systemd[1]: Reloading.
Feb 28 04:26:34 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:26:35 np0005634017 systemd[1]: Starting dnf makecache...
Feb 28 04:26:35 np0005634017 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 28 04:26:35 np0005634017 dnf[34611]: Failed determining last makecache time.
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-barbican-42b4c41831408a8e323 141 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-glean-642fffe0203a8ffcc2443db52 186 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-cinder-e95a374f4f00ef02d562d 188 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-stevedore-c4acc5639fd2329372142 193 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-cloudkitty-tests-tempest-ef9563 191 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-diskimage-builder-cbb4478c143869181ba9 190 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-nova-5cfeecbf22fca58822607dd 186 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 systemd[1]: Reloading.
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-designate-tests-tempest-347fdbc 179 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-glance-1fd12c29b339f30fe823e 177 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 194 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-manila-8fa2b5793100022b4d0f6 168 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-whitebox-neutron-tests-tempest- 183 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-octavia-76dfc1e35cf7f4dd6102 164 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-watcher-c014f81a8647287f6dcc 187 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-tcib-b403f1051724db0286e1418f59 180 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 190 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-swift-dc98a8463506ac520c469a 184 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-python-tempestconf-8e33668cda707818ee1 180 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 dnf[34611]: delorean-openstack-heat-ui-013accbfd179753bc3f0 159 kB/s | 3.0 kB     00:00
Feb 28 04:26:35 np0005634017 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 28 04:26:35 np0005634017 systemd[1]: Reloading.
Feb 28 04:26:35 np0005634017 dnf[34611]: CentOS Stream 9 - BaseOS                         65 kB/s | 7.0 kB     00:00
Feb 28 04:26:35 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:26:35 np0005634017 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 28 04:26:35 np0005634017 dnf[34611]: CentOS Stream 9 - AppStream                      65 kB/s | 7.1 kB     00:00
Feb 28 04:26:36 np0005634017 dbus-broker-launch[803]: Noticed file-system modification, trigger reload.
Feb 28 04:26:36 np0005634017 dnf[34611]: CentOS Stream 9 - CRB                            62 kB/s | 6.9 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: CentOS Stream 9 - Extras packages                78 kB/s | 7.6 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: dlrn-antelope-testing                            93 kB/s | 3.0 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: dlrn-antelope-build-deps                        152 kB/s | 3.0 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: centos9-rabbitmq                                 98 kB/s | 3.0 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: centos9-storage                                 115 kB/s | 3.0 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: centos9-opstools                                119 kB/s | 3.0 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: NFV SIG OpenvSwitch                             111 kB/s | 3.0 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: repo-setup-centos-appstream                     160 kB/s | 4.4 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: repo-setup-centos-baseos                        160 kB/s | 3.9 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: repo-setup-centos-highavailability              166 kB/s | 3.9 kB     00:00
Feb 28 04:26:36 np0005634017 dnf[34611]: repo-setup-centos-powertools                    183 kB/s | 4.3 kB     00:00
Feb 28 04:26:37 np0005634017 dnf[34611]: Extra Packages for Enterprise Linux 9 - x86_64  127 kB/s |  24 kB     00:00
Feb 28 04:26:37 np0005634017 dnf[34611]: Metadata cache created.
Feb 28 04:26:37 np0005634017 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 28 04:26:37 np0005634017 systemd[1]: Finished dnf makecache.
Feb 28 04:26:37 np0005634017 systemd[1]: dnf-makecache.service: Consumed 1.852s CPU time.
Feb 28 04:27:33 np0005634017 kernel: SELinux:  Converting 2728 SID table entries...
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:27:33 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:27:33 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 28 04:27:33 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:27:33 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:27:33 np0005634017 systemd[1]: Reloading.
Feb 28 04:27:33 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:27:34 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:27:34 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:27:34 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:27:34 np0005634017 systemd[1]: run-r53369e97428e48c69e4a4bb6062f0e8d.service: Deactivated successfully.
Feb 28 04:27:35 np0005634017 python3.9[35983]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:27:36 np0005634017 python3.9[36265]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 28 04:27:37 np0005634017 python3.9[36418]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 28 04:27:39 np0005634017 python3.9[36573]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:27:40 np0005634017 python3.9[36726]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 28 04:27:41 np0005634017 python3.9[36879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:27:42 np0005634017 python3.9[37032]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:27:43 np0005634017 python3.9[37156]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772270862.113202-231-55859887312461/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:27:44 np0005634017 python3.9[37309]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:27:44 np0005634017 python3.9[37462]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:27:48 np0005634017 python3.9[37616]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:27:49 np0005634017 python3.9[37770]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 28 04:27:49 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 04:27:50 np0005634017 python3.9[37925]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 28 04:27:51 np0005634017 python3.9[38084]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 28 04:27:52 np0005634017 python3.9[38246]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 28 04:27:53 np0005634017 python3.9[38400]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 28 04:27:53 np0005634017 python3.9[38559]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 28 04:27:54 np0005634017 python3.9[38712]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:27:56 np0005634017 python3.9[38869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:27:57 np0005634017 python3.9[39022]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:27:58 np0005634017 python3.9[39146]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772270877.1045182-350-27537963437393/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:27:59 np0005634017 python3.9[39299]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:27:59 np0005634017 systemd[1]: Starting Load Kernel Modules...
Feb 28 04:27:59 np0005634017 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 28 04:27:59 np0005634017 kernel: Bridge firewalling registered
Feb 28 04:27:59 np0005634017 systemd-modules-load[39303]: Inserted module 'br_netfilter'
Feb 28 04:27:59 np0005634017 systemd[1]: Finished Load Kernel Modules.
Feb 28 04:27:59 np0005634017 python3.9[39459]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:28:00 np0005634017 python3.9[39583]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772270879.4382515-373-254182855042314/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:28:01 np0005634017 python3.9[39736]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:28:04 np0005634017 dbus-broker-launch[803]: Noticed file-system modification, trigger reload.
Feb 28 04:28:04 np0005634017 dbus-broker-launch[803]: Noticed file-system modification, trigger reload.
Feb 28 04:28:04 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:28:04 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:28:04 np0005634017 systemd[1]: Reloading.
Feb 28 04:28:04 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:28:04 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:28:05 np0005634017 python3.9[41190]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:28:06 np0005634017 python3.9[42248]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 28 04:28:07 np0005634017 python3.9[43112]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:28:07 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:28:07 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:28:07 np0005634017 systemd[1]: man-db-cache-update.service: Consumed 4.335s CPU time.
Feb 28 04:28:07 np0005634017 systemd[1]: run-r0fc807bba89f4f80a0f2e15b549fbaec.service: Deactivated successfully.
Feb 28 04:28:08 np0005634017 python3.9[43998]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:28:08 np0005634017 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 28 04:28:08 np0005634017 systemd[1]: Starting Authorization Manager...
Feb 28 04:28:08 np0005634017 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 28 04:28:08 np0005634017 polkitd[44218]: Started polkitd version 0.117
Feb 28 04:28:08 np0005634017 systemd[1]: Started Authorization Manager.
Feb 28 04:28:09 np0005634017 python3.9[44389]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:28:09 np0005634017 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 28 04:28:09 np0005634017 systemd[1]: tuned.service: Deactivated successfully.
Feb 28 04:28:09 np0005634017 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 28 04:28:09 np0005634017 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 28 04:28:09 np0005634017 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 28 04:28:10 np0005634017 python3.9[44551]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 28 04:28:12 np0005634017 python3.9[44704]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:28:12 np0005634017 systemd[1]: Reloading.
Feb 28 04:28:12 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:28:13 np0005634017 python3.9[44902]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:28:13 np0005634017 systemd[1]: Reloading.
Feb 28 04:28:13 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:28:14 np0005634017 python3.9[45098]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:28:15 np0005634017 python3.9[45252]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:28:15 np0005634017 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 28 04:28:15 np0005634017 python3.9[45406]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:28:17 np0005634017 python3.9[45569]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:28:18 np0005634017 python3.9[45723]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:28:18 np0005634017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 28 04:28:18 np0005634017 systemd[1]: Stopped Apply Kernel Variables.
Feb 28 04:28:18 np0005634017 systemd[1]: Stopping Apply Kernel Variables...
Feb 28 04:28:18 np0005634017 systemd[1]: Starting Apply Kernel Variables...
Feb 28 04:28:18 np0005634017 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 28 04:28:18 np0005634017 systemd[1]: Finished Apply Kernel Variables.
Feb 28 04:28:19 np0005634017 systemd[1]: session-8.scope: Deactivated successfully.
Feb 28 04:28:19 np0005634017 systemd[1]: session-8.scope: Consumed 2min 10.428s CPU time.
Feb 28 04:28:19 np0005634017 systemd-logind[815]: Session 8 logged out. Waiting for processes to exit.
Feb 28 04:28:19 np0005634017 systemd-logind[815]: Removed session 8.
Feb 28 04:28:24 np0005634017 systemd-logind[815]: New session 9 of user zuul.
Feb 28 04:28:24 np0005634017 systemd[1]: Started Session 9 of User zuul.
Feb 28 04:28:25 np0005634017 python3.9[45906]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:28:27 np0005634017 python3.9[46063]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 28 04:28:28 np0005634017 python3.9[46217]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 28 04:28:29 np0005634017 python3.9[46376]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 28 04:28:29 np0005634017 python3.9[46537]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:28:30 np0005634017 python3.9[46622]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 28 04:28:33 np0005634017 python3.9[46786]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:28:44 np0005634017 kernel: SELinux:  Converting 2740 SID table entries...
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:28:44 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:28:45 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 28 04:28:45 np0005634017 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 28 04:28:46 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:28:46 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:28:46 np0005634017 systemd[1]: Reloading.
Feb 28 04:28:46 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:28:46 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:28:46 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:28:47 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:28:47 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:28:47 np0005634017 systemd[1]: run-r8d4ba6c9e7fb4874bf666ce767d236af.service: Deactivated successfully.
Feb 28 04:28:48 np0005634017 python3.9[47912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:28:48 np0005634017 systemd[1]: Reloading.
Feb 28 04:28:48 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:28:48 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:28:48 np0005634017 systemd[1]: Starting Open vSwitch Database Unit...
Feb 28 04:28:48 np0005634017 chown[47960]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 28 04:28:48 np0005634017 ovs-ctl[47965]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 28 04:28:48 np0005634017 ovs-ctl[47965]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 28 04:28:48 np0005634017 ovs-ctl[47965]: Starting ovsdb-server [  OK  ]
Feb 28 04:28:48 np0005634017 ovs-vsctl[48014]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 28 04:28:48 np0005634017 ovs-vsctl[48034]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"cdfa85e0-fba9-4bed-b591-423dd0221b71\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 28 04:28:48 np0005634017 ovs-ctl[47965]: Configuring Open vSwitch system IDs [  OK  ]
Feb 28 04:28:48 np0005634017 ovs-ctl[47965]: Enabling remote OVSDB managers [  OK  ]
Feb 28 04:28:48 np0005634017 systemd[1]: Started Open vSwitch Database Unit.
Feb 28 04:28:48 np0005634017 ovs-vsctl[48040]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 28 04:28:48 np0005634017 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 28 04:28:48 np0005634017 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 28 04:28:48 np0005634017 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 28 04:28:48 np0005634017 kernel: openvswitch: Open vSwitch switching datapath
Feb 28 04:28:48 np0005634017 ovs-ctl[48085]: Inserting openvswitch module [  OK  ]
Feb 28 04:28:48 np0005634017 ovs-ctl[48054]: Starting ovs-vswitchd [  OK  ]
Feb 28 04:28:48 np0005634017 ovs-vsctl[48102]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 28 04:28:48 np0005634017 ovs-ctl[48054]: Enabling remote OVSDB managers [  OK  ]
Feb 28 04:28:48 np0005634017 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 28 04:28:48 np0005634017 systemd[1]: Starting Open vSwitch...
Feb 28 04:28:48 np0005634017 systemd[1]: Finished Open vSwitch.
Feb 28 04:28:49 np0005634017 python3.9[48254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:28:50 np0005634017 python3.9[48407]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 28 04:28:51 np0005634017 kernel: SELinux:  Converting 2754 SID table entries...
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:28:51 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:28:52 np0005634017 python3.9[48562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:28:53 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 28 04:28:53 np0005634017 python3.9[48721]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:28:55 np0005634017 python3.9[48875]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:28:57 np0005634017 python3.9[49163]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 28 04:28:58 np0005634017 python3.9[49313]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:28:58 np0005634017 python3.9[49468]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:29:00 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:29:00 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:29:00 np0005634017 systemd[1]: Reloading.
Feb 28 04:29:00 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:29:00 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:29:01 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:29:01 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:29:01 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:29:01 np0005634017 systemd[1]: run-r32987cb1aff1453b8734d3d52a06983b.service: Deactivated successfully.
Feb 28 04:29:02 np0005634017 python3.9[49792]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:29:02 np0005634017 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 28 04:29:02 np0005634017 systemd[1]: Stopped Network Manager Wait Online.
Feb 28 04:29:02 np0005634017 systemd[1]: Stopping Network Manager Wait Online...
Feb 28 04:29:02 np0005634017 systemd[1]: Stopping Network Manager...
Feb 28 04:29:02 np0005634017 NetworkManager[7689]: <info>  [1772270942.3658] caught SIGTERM, shutting down normally.
Feb 28 04:29:02 np0005634017 NetworkManager[7689]: <info>  [1772270942.3677] dhcp4 (eth0): canceled DHCP transaction
Feb 28 04:29:02 np0005634017 NetworkManager[7689]: <info>  [1772270942.3677] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 28 04:29:02 np0005634017 NetworkManager[7689]: <info>  [1772270942.3677] dhcp4 (eth0): state changed no lease
Feb 28 04:29:02 np0005634017 NetworkManager[7689]: <info>  [1772270942.3680] manager: NetworkManager state is now CONNECTED_SITE
Feb 28 04:29:02 np0005634017 NetworkManager[7689]: <info>  [1772270942.3739] exiting (success)
Feb 28 04:29:02 np0005634017 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 28 04:29:02 np0005634017 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 28 04:29:02 np0005634017 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 28 04:29:02 np0005634017 systemd[1]: Stopped Network Manager.
Feb 28 04:29:02 np0005634017 systemd[1]: NetworkManager.service: Consumed 10.914s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Feb 28 04:29:02 np0005634017 systemd[1]: Starting Network Manager...
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.4567] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:04d13732-50b9-406b-a503-39e846989e82)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.4568] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.4635] manager[0x559318e9a000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 28 04:29:02 np0005634017 systemd[1]: Starting Hostname Service...
Feb 28 04:29:02 np0005634017 systemd[1]: Started Hostname Service.
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5327] hostname: hostname: using hostnamed
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5328] hostname: static hostname changed from (none) to "compute-0"
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5337] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5345] manager[0x559318e9a000]: rfkill: Wi-Fi hardware radio set enabled
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5345] manager[0x559318e9a000]: rfkill: WWAN hardware radio set enabled
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5381] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5395] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5396] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5397] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5398] manager: Networking is enabled by state file
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5401] settings: Loaded settings plugin: keyfile (internal)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5406] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5452] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5465] dhcp: init: Using DHCP client 'internal'
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5469] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5477] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5485] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5498] device (lo): Activation: starting connection 'lo' (22e8b1f5-c9dc-4bbf-8b00-9c310aff1f7b)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5507] device (eth0): carrier: link connected
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5513] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5521] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5522] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5531] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5542] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5550] device (eth1): carrier: link connected
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5555] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5564] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d1a8d015-6336-5169-989a-3781f3c5fc10) (indicated)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5565] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5573] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5583] device (eth1): Activation: starting connection 'ci-private-network' (d1a8d015-6336-5169-989a-3781f3c5fc10)
Feb 28 04:29:02 np0005634017 systemd[1]: Started Network Manager.
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5591] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5602] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5608] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5611] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5615] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5621] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5626] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5632] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5638] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5649] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5656] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5693] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5716] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5726] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5728] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5734] device (lo): Activation: successful, device activated.
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5743] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Feb 28 04:29:02 np0005634017 systemd[1]: Starting Network Manager Wait Online...
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5753] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5830] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5837] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5845] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5849] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5853] device (eth1): Activation: successful, device activated.
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5873] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5875] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5878] manager: NetworkManager state is now CONNECTED_SITE
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5881] device (eth0): Activation: successful, device activated.
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5888] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 28 04:29:02 np0005634017 NetworkManager[49805]: <info>  [1772270942.5892] manager: startup complete
Feb 28 04:29:02 np0005634017 systemd[1]: Finished Network Manager Wait Online.
Feb 28 04:29:03 np0005634017 python3.9[50020]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:29:08 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:29:08 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:29:08 np0005634017 systemd[1]: Reloading.
Feb 28 04:29:08 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:29:08 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:29:08 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:29:08 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:29:08 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:29:08 np0005634017 systemd[1]: run-r22d9442d92c94f77ac4f4c6d87fa0b7f.service: Deactivated successfully.
Feb 28 04:29:09 np0005634017 python3.9[50499]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:29:10 np0005634017 python3.9[50652]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:11 np0005634017 python3.9[50807]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:12 np0005634017 python3.9[50960]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:12 np0005634017 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 28 04:29:12 np0005634017 python3.9[51113]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:13 np0005634017 python3.9[51266]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:14 np0005634017 python3.9[51419]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:29:14 np0005634017 python3.9[51543]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1772270953.750543-226-276213382771763/.source _original_basename=.w936ihhm follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:15 np0005634017 python3.9[51696]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:16 np0005634017 python3.9[51849]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 28 04:29:17 np0005634017 python3.9[52002]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:19 np0005634017 python3.9[52432]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 28 04:29:20 np0005634017 ansible-async_wrapper.py[52608]: Invoked with j444025069689 300 /home/zuul/.ansible/tmp/ansible-tmp-1772270959.6226926-292-159163033939173/AnsiballZ_edpm_os_net_config.py _
Feb 28 04:29:20 np0005634017 ansible-async_wrapper.py[52611]: Starting module and watcher
Feb 28 04:29:20 np0005634017 ansible-async_wrapper.py[52611]: Start watching 52612 (300)
Feb 28 04:29:20 np0005634017 ansible-async_wrapper.py[52612]: Start module (52612)
Feb 28 04:29:20 np0005634017 ansible-async_wrapper.py[52608]: Return async_wrapper task started.
Feb 28 04:29:20 np0005634017 python3.9[52613]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=True purge_provider=
Feb 28 04:29:21 np0005634017 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 28 04:29:21 np0005634017 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 28 04:29:21 np0005634017 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 28 04:29:21 np0005634017 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 28 04:29:21 np0005634017 kernel: cfg80211: failed to load regulatory.db
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5408] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5429] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5940] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5944] audit: op="connection-add" uuid="3f334df6-fcdd-4064-a6f1-5b41ca1f0131" name="br-ex-br" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5961] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5962] audit: op="connection-add" uuid="fc2b51ad-5088-4837-8ac7-8a2a748d67ec" name="br-ex-port" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5974] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5976] audit: op="connection-add" uuid="9df69c20-1710-4825-9c74-f4f30d959d78" name="eth1-port" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5988] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.5991] audit: op="connection-add" uuid="4808c826-29e7-4506-b9ca-39f1b067409f" name="vlan20-port" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6003] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6005] audit: op="connection-add" uuid="a19f6ba1-f148-430e-8a4e-47ee1b8fa04b" name="vlan21-port" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6017] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6020] audit: op="connection-add" uuid="6bd7d613-a44a-4993-af27-6395e5d34145" name="vlan22-port" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6032] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6035] audit: op="connection-add" uuid="c8fbc5b0-871c-44f5-9aec-5db26a39a747" name="vlan23-port" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6056] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6074] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6077] audit: op="connection-add" uuid="b60762fc-0a3a-4333-84d3-a0c9523e2ffe" name="br-ex-if" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6115] audit: op="connection-update" uuid="d1a8d015-6336-5169-989a-3781f3c5fc10" name="ci-private-network" args="ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.routes,ipv4.never-default,ipv4.method,ovs-interface.type,connection.controller,connection.master,connection.slave-type,connection.timestamp,connection.port-type,ipv6.dns,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.method,ovs-external-ids.data" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6134] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6137] audit: op="connection-add" uuid="1ebf9e1b-2ea3-442a-ad9f-c0b83c44a66d" name="vlan20-if" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6155] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6157] audit: op="connection-add" uuid="5a313b38-b235-4607-8da2-7570009e63ae" name="vlan21-if" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6175] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6177] audit: op="connection-add" uuid="5a6eed71-06db-4940-b9bf-e6cbaadba8f4" name="vlan22-if" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6195] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6197] audit: op="connection-add" uuid="94750748-28f2-452a-999c-c8412439f35d" name="vlan23-if" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6212] audit: op="connection-delete" uuid="64ae6edf-590f-3630-9095-1a6a00a971a6" name="Wired connection 1" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6226] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6230] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6238] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6243] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (3f334df6-fcdd-4064-a6f1-5b41ca1f0131)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6244] audit: op="connection-activate" uuid="3f334df6-fcdd-4064-a6f1-5b41ca1f0131" name="br-ex-br" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6246] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6248] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6254] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6260] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (fc2b51ad-5088-4837-8ac7-8a2a748d67ec)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6263] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6264] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6270] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6275] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9df69c20-1710-4825-9c74-f4f30d959d78)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6278] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6280] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6286] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6291] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (4808c826-29e7-4506-b9ca-39f1b067409f)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6294] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6295] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6302] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6307] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (a19f6ba1-f148-430e-8a4e-47ee1b8fa04b)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6310] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6312] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6318] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6323] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (6bd7d613-a44a-4993-af27-6395e5d34145)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6326] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6327] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6333] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6338] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c8fbc5b0-871c-44f5-9aec-5db26a39a747)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6340] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6343] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6346] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6353] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6355] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6358] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6364] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b60762fc-0a3a-4333-84d3-a0c9523e2ffe)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6366] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6370] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6373] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6375] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6377] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6389] device (eth1): disconnecting for new activation request.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6406] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6426] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6430] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6431] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6436] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6437] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6443] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6448] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1ebf9e1b-2ea3-442a-ad9f-c0b83c44a66d)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6450] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6455] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6457] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6459] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6463] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6464] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6468] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6474] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (5a313b38-b235-4607-8da2-7570009e63ae)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6476] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6480] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6483] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6485] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6488] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6489] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6494] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6500] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (5a6eed71-06db-4940-b9bf-e6cbaadba8f4)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6501] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6506] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6508] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6510] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6515] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <warn>  [1772270962.6516] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6521] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6527] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (94750748-28f2-452a-999c-c8412439f35d)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6528] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6532] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6534] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6536] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6538] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6553] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6556] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6559] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6561] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6575] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6579] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6584] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6589] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: ovs-system: entered promiscuous mode
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6591] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6597] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6601] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6605] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6607] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: Timeout policy base is empty
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6613] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6618] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 systemd-udevd[52620]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6624] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6628] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6634] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6639] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6642] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6643] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6648] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6653] dhcp4 (eth0): canceled DHCP transaction
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6653] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6653] dhcp4 (eth0): state changed no lease
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6655] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6667] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6677] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52614 uid=0 result="fail" reason="Device is not activated"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6681] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6688] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6696] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6703] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6705] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6712] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6819] device (eth1): Activation: starting connection 'ci-private-network' (d1a8d015-6336-5169-989a-3781f3c5fc10)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6824] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6863] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6891] device (eth1): disconnecting for new activation request.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6891] audit: op="connection-activate" uuid="d1a8d015-6336-5169-989a-3781f3c5fc10" name="ci-private-network" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6912] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6922] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6927] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6934] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6935] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6937] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6938] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: br-ex: entered promiscuous mode
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6939] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6941] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6948] device (eth1): Activation: starting connection 'ci-private-network' (d1a8d015-6336-5169-989a-3781f3c5fc10)
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6971] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6976] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6981] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6986] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6991] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.6996] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7000] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7005] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7014] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7021] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7028] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7037] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7057] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: vlan22: entered promiscuous mode
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7065] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7078] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52614 uid=0 result="success"
Feb 28 04:29:22 np0005634017 systemd-udevd[52618]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7095] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: vlan20: entered promiscuous mode
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7140] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7145] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7156] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7166] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: vlan23: entered promiscuous mode
Feb 28 04:29:22 np0005634017 systemd-udevd[52619]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7229] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: vlan21: entered promiscuous mode
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7234] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7238] device (eth1): Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7246] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7249] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7261] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 systemd-udevd[52730]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7275] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7286] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7287] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7303] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7321] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7342] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7345] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7356] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7361] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7371] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7372] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7375] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7381] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7401] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7404] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7412] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7416] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7435] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7437] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 28 04:29:22 np0005634017 NetworkManager[49805]: <info>  [1772270962.7442] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 28 04:29:23 np0005634017 NetworkManager[49805]: <info>  [1772270963.9035] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.0972] checkpoint[0x559318e6f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.0975] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 python3.9[52979]: ansible-ansible.legacy.async_status Invoked with jid=j444025069689.52608 mode=status _async_dir=/root/.ansible_async
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.4139] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.4153] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.6628] audit: op="networking-control" arg="global-dns-configuration" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.6673] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.6708] audit: op="networking-control" arg="global-dns-configuration" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.6732] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.8436] checkpoint[0x559318e6fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 28 04:29:24 np0005634017 NetworkManager[49805]: <info>  [1772270964.8442] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52614 uid=0 result="success"
Feb 28 04:29:24 np0005634017 ansible-async_wrapper.py[52612]: Module complete (52612)
Feb 28 04:29:25 np0005634017 ansible-async_wrapper.py[52611]: Done in kid B.
Feb 28 04:29:27 np0005634017 python3.9[53085]: ansible-ansible.legacy.async_status Invoked with jid=j444025069689.52608 mode=status _async_dir=/root/.ansible_async
Feb 28 04:29:28 np0005634017 python3.9[53186]: ansible-ansible.legacy.async_status Invoked with jid=j444025069689.52608 mode=cleanup _async_dir=/root/.ansible_async
Feb 28 04:29:29 np0005634017 python3.9[53339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:29:29 np0005634017 python3.9[53463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772270968.6171412-319-250618296616608/.source.returncode _original_basename=.smuazt8g follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:30 np0005634017 python3.9[53616]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:29:30 np0005634017 python3.9[53740]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772270969.8596425-335-81651964253577/.source.cfg _original_basename=.zaarmmn_ follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:31 np0005634017 python3.9[53894]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:29:31 np0005634017 systemd[1]: Reloading Network Manager...
Feb 28 04:29:31 np0005634017 NetworkManager[49805]: <info>  [1772270971.7346] audit: op="reload" arg="0" pid=53898 uid=0 result="success"
Feb 28 04:29:31 np0005634017 NetworkManager[49805]: <info>  [1772270971.7353] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 28 04:29:31 np0005634017 systemd[1]: Reloaded Network Manager.
Feb 28 04:29:32 np0005634017 systemd[1]: session-9.scope: Deactivated successfully.
Feb 28 04:29:32 np0005634017 systemd[1]: session-9.scope: Consumed 47.583s CPU time.
Feb 28 04:29:32 np0005634017 systemd-logind[815]: Session 9 logged out. Waiting for processes to exit.
Feb 28 04:29:32 np0005634017 systemd-logind[815]: Removed session 9.
Feb 28 04:29:32 np0005634017 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 28 04:29:37 np0005634017 systemd-logind[815]: New session 10 of user zuul.
Feb 28 04:29:37 np0005634017 systemd[1]: Started Session 10 of User zuul.
Feb 28 04:29:38 np0005634017 python3.9[54084]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:29:39 np0005634017 python3.9[54238]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:29:40 np0005634017 python3.9[54432]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:29:41 np0005634017 systemd[1]: session-10.scope: Deactivated successfully.
Feb 28 04:29:41 np0005634017 systemd[1]: session-10.scope: Consumed 2.402s CPU time.
Feb 28 04:29:41 np0005634017 systemd-logind[815]: Session 10 logged out. Waiting for processes to exit.
Feb 28 04:29:41 np0005634017 systemd-logind[815]: Removed session 10.
Feb 28 04:29:41 np0005634017 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 28 04:29:46 np0005634017 systemd-logind[815]: New session 11 of user zuul.
Feb 28 04:29:46 np0005634017 systemd[1]: Started Session 11 of User zuul.
Feb 28 04:29:47 np0005634017 python3.9[54616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:29:48 np0005634017 python3.9[54770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:29:49 np0005634017 python3.9[54927]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:29:51 np0005634017 python3.9[55013]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:29:53 np0005634017 python3.9[55167]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:29:54 np0005634017 python3.9[55364]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:55 np0005634017 python3.9[55518]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:29:55 np0005634017 podman[55519]: 2026-02-28 09:29:55.462429133 +0000 UTC m=+0.059136316 system refresh
Feb 28 04:29:56 np0005634017 python3.9[55682]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:29:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:29:57 np0005634017 python3.9[55806]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772270995.7225065-74-194405247228566/.source.json follow=False _original_basename=podman_network_config.j2 checksum=03a0b15be646160fc501f51bd2ad66a0a4051d93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:29:57 np0005634017 python3.9[55959]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:29:58 np0005634017 python3.9[56083]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772270997.3424113-89-150119162401722/.source.conf follow=False _original_basename=registries.conf.j2 checksum=f95551851a3aad1fadf39ba40ad5808b10502fe1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:29:59 np0005634017 python3.9[56236]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:00 np0005634017 python3.9[56389]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:00 np0005634017 python3.9[56542]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:01 np0005634017 python3.9[56695]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:02 np0005634017 python3.9[56848]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:30:04 np0005634017 python3.9[57002]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:30:05 np0005634017 python3.9[57157]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:30:05 np0005634017 python3.9[57310]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:30:06 np0005634017 python3.9[57463]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:30:07 np0005634017 python3.9[57617]: ansible-service_facts Invoked
Feb 28 04:30:07 np0005634017 network[57634]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:30:07 np0005634017 network[57635]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:30:07 np0005634017 network[57636]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:30:11 np0005634017 python3.9[58091]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:30:14 np0005634017 python3.9[58245]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 28 04:30:15 np0005634017 python3.9[58398]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:15 np0005634017 python3.9[58524]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271014.7900457-233-236483872145990/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:16 np0005634017 python3.9[58679]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:17 np0005634017 python3.9[58805]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271016.146999-248-231323042391029/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:18 np0005634017 python3.9[58960]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:19 np0005634017 python3.9[59115]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:30:20 np0005634017 python3.9[59200]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:30:21 np0005634017 python3.9[59355]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:30:22 np0005634017 python3.9[59440]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:30:22 np0005634017 chronyd[840]: chronyd exiting
Feb 28 04:30:22 np0005634017 systemd[1]: Stopping NTP client/server...
Feb 28 04:30:22 np0005634017 systemd[1]: chronyd.service: Deactivated successfully.
Feb 28 04:30:22 np0005634017 systemd[1]: Stopped NTP client/server.
Feb 28 04:30:22 np0005634017 systemd[1]: Starting NTP client/server...
Feb 28 04:30:22 np0005634017 chronyd[59448]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 28 04:30:22 np0005634017 chronyd[59448]: Frequency -28.309 +/- 0.416 ppm read from /var/lib/chrony/drift
Feb 28 04:30:22 np0005634017 chronyd[59448]: Loaded seccomp filter (level 2)
Feb 28 04:30:22 np0005634017 systemd[1]: Started NTP client/server.
Feb 28 04:30:23 np0005634017 systemd[1]: session-11.scope: Deactivated successfully.
Feb 28 04:30:23 np0005634017 systemd[1]: session-11.scope: Consumed 25.355s CPU time.
Feb 28 04:30:23 np0005634017 systemd-logind[815]: Session 11 logged out. Waiting for processes to exit.
Feb 28 04:30:23 np0005634017 systemd-logind[815]: Removed session 11.
Feb 28 04:30:28 np0005634017 systemd-logind[815]: New session 12 of user zuul.
Feb 28 04:30:28 np0005634017 systemd[1]: Started Session 12 of User zuul.
Feb 28 04:30:29 np0005634017 python3.9[59632]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:30 np0005634017 python3.9[59785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:30 np0005634017 python3.9[59909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271029.4912806-29-168461572525781/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:31 np0005634017 systemd[1]: session-12.scope: Deactivated successfully.
Feb 28 04:30:31 np0005634017 systemd[1]: session-12.scope: Consumed 1.537s CPU time.
Feb 28 04:30:31 np0005634017 systemd-logind[815]: Session 12 logged out. Waiting for processes to exit.
Feb 28 04:30:31 np0005634017 systemd-logind[815]: Removed session 12.
Feb 28 04:30:36 np0005634017 systemd-logind[815]: New session 13 of user zuul.
Feb 28 04:30:36 np0005634017 systemd[1]: Started Session 13 of User zuul.
Feb 28 04:30:37 np0005634017 python3.9[60087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:30:38 np0005634017 python3.9[60244]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:39 np0005634017 python3.9[60420]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:40 np0005634017 python3.9[60544]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1772271038.7294056-36-66365176689799/.source.json _original_basename=.ny2hn04x follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:40 np0005634017 python3.9[60697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:41 np0005634017 python3.9[60821]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271040.3922784-59-87643268992441/.source _original_basename=.uqosvlht follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:42 np0005634017 python3.9[60974]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:42 np0005634017 python3.9[61127]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:43 np0005634017 python3.9[61251]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271042.2481663-83-127053181002520/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:43 np0005634017 python3.9[61404]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:44 np0005634017 python3.9[61528]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271043.4412155-83-150400440490645/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:30:45 np0005634017 python3.9[61681]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:45 np0005634017 python3.9[61834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:46 np0005634017 python3.9[61958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271045.3061247-120-251443363584027/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:47 np0005634017 python3.9[62111]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:47 np0005634017 python3.9[62235]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271046.559049-135-175791288455584/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:48 np0005634017 python3.9[62388]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:30:48 np0005634017 systemd[1]: Reloading.
Feb 28 04:30:48 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:30:48 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:30:48 np0005634017 systemd[1]: Reloading.
Feb 28 04:30:48 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:30:48 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:30:49 np0005634017 systemd[1]: Starting EDPM Container Shutdown...
Feb 28 04:30:49 np0005634017 systemd[1]: Finished EDPM Container Shutdown.
Feb 28 04:30:49 np0005634017 python3.9[62631]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:50 np0005634017 python3.9[62755]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271049.218021-158-190646112844635/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:50 np0005634017 python3.9[62908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:30:51 np0005634017 python3.9[63032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271050.3031898-173-70719552667038/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:30:52 np0005634017 python3.9[63185]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:30:52 np0005634017 systemd[1]: Reloading.
Feb 28 04:30:52 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:30:52 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:30:52 np0005634017 systemd[1]: Reloading.
Feb 28 04:30:52 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:30:52 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:30:53 np0005634017 systemd[1]: Starting Create netns directory...
Feb 28 04:30:53 np0005634017 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 28 04:30:53 np0005634017 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 28 04:30:53 np0005634017 systemd[1]: Finished Create netns directory.
Feb 28 04:30:53 np0005634017 python3.9[63425]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:30:53 np0005634017 network[63442]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:30:53 np0005634017 network[63443]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:30:53 np0005634017 network[63444]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:30:58 np0005634017 python3.9[63708]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:30:59 np0005634017 systemd[1]: Reloading.
Feb 28 04:30:59 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:30:59 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:30:59 np0005634017 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 28 04:30:59 np0005634017 iptables.init[63755]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 28 04:30:59 np0005634017 iptables.init[63755]: iptables: Flushing firewall rules: [  OK  ]
Feb 28 04:30:59 np0005634017 systemd[1]: iptables.service: Deactivated successfully.
Feb 28 04:30:59 np0005634017 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 28 04:31:00 np0005634017 python3.9[63953]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:31:01 np0005634017 python3.9[64108]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:31:01 np0005634017 systemd[1]: Reloading.
Feb 28 04:31:01 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:31:01 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:31:01 np0005634017 systemd[1]: Starting Netfilter Tables...
Feb 28 04:31:01 np0005634017 systemd[1]: Finished Netfilter Tables.
Feb 28 04:31:02 np0005634017 python3.9[64308]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:03 np0005634017 python3.9[64462]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:03 np0005634017 python3.9[64588]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271062.8883321-242-136833649662963/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:04 np0005634017 python3.9[64742]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:31:04 np0005634017 systemd[1]: Reloading OpenSSH server daemon...
Feb 28 04:31:04 np0005634017 systemd[1]: Reloaded OpenSSH server daemon.
Feb 28 04:31:05 np0005634017 python3.9[64899]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:06 np0005634017 python3.9[65052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:06 np0005634017 python3.9[65176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271065.6379976-273-218710508819821/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:07 np0005634017 python3.9[65329]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 28 04:31:07 np0005634017 systemd[1]: Starting Time & Date Service...
Feb 28 04:31:07 np0005634017 systemd[1]: Started Time & Date Service.
Feb 28 04:31:08 np0005634017 python3.9[65486]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:09 np0005634017 python3.9[65639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:09 np0005634017 python3.9[65763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271068.6940784-308-185645551709566/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:10 np0005634017 python3.9[65916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:10 np0005634017 python3.9[66040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271069.9377959-323-177491155615644/.source.yaml _original_basename=.okeau49k follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:11 np0005634017 python3.9[66193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:12 np0005634017 python3.9[66317]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271071.0841773-338-83103128952506/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:12 np0005634017 python3.9[66470]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:13 np0005634017 python3.9[66624]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:14 np0005634017 python3[66778]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 28 04:31:14 np0005634017 python3.9[66931]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:15 np0005634017 python3.9[67055]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271074.4603896-377-130212113775067/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:16 np0005634017 python3.9[67208]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:16 np0005634017 python3.9[67332]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271075.5445995-392-107759893653729/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:17 np0005634017 python3.9[67485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:17 np0005634017 python3.9[67609]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271076.764916-407-78165235037043/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:18 np0005634017 python3.9[67762]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:18 np0005634017 python3.9[67886]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271077.877817-422-138641161321216/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:19 np0005634017 python3.9[68039]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:31:20 np0005634017 python3.9[68163]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271079.0486138-437-168528614282022/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:20 np0005634017 python3.9[68316]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:21 np0005634017 python3.9[68469]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:22 np0005634017 python3.9[68629]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:22 np0005634017 python3.9[68783]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:23 np0005634017 python3.9[68938]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:24 np0005634017 python3.9[69091]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 28 04:31:25 np0005634017 python3.9[69245]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 28 04:31:25 np0005634017 systemd-logind[815]: Session 13 logged out. Waiting for processes to exit.
Feb 28 04:31:25 np0005634017 systemd[1]: session-13.scope: Deactivated successfully.
Feb 28 04:31:25 np0005634017 systemd[1]: session-13.scope: Consumed 32.862s CPU time.
Feb 28 04:31:25 np0005634017 systemd-logind[815]: Removed session 13.
Feb 28 04:31:30 np0005634017 systemd-logind[815]: New session 14 of user zuul.
Feb 28 04:31:30 np0005634017 systemd[1]: Started Session 14 of User zuul.
Feb 28 04:31:31 np0005634017 python3.9[69429]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 28 04:31:32 np0005634017 python3.9[69582]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:31:33 np0005634017 python3.9[69735]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:31:34 np0005634017 python3.9[69888]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5URvrNFl9MhZBO2s62M/ln92UfkblaawhnJ9KezBXOb7Hx/bDKfNRPA3W5DSqIrJ4/o9TyYOzuD/9/jVe2uFHTrv+299xa2/InQ9VToKnL3ZnV0l2Dar8UyCAmumucJnnDzmSuxKtmh1r1Gi86xNXC7iEl5FXqXBCpJ+hh4d5tO4k9PEkVFDKX1tJ7l3NA2C42PXXgmKVkUUEN99DDu3ZM2BowCA16/UoFKdN3dZOU06w7syOjcgiq07QMFVtfdy3daBaaCMBy4kh4DOAILTp0GVVJ10vyxpxmHlSCErE5OofOjmIPt2pcWXYXHwFS0F0vK8L3LHyxQrrWm/RLuXkLkrKNpWvE4Xx0Cdz1vqJ1+8CYT6PunGFtNxfZxMOTzHNLNtHXT/M2b+yX3q/WPNfXkiELpx4pj0Nz/ktCqpQ9fjBPX+9s8TvqsT3OYyq48gg7oxQ7EOweyuNBpSVRNAs+j18kt+39ECqzrr9rKyz2wNQw7sHnEZirjSwolL1qKk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPhhP9YEEfXzk0Ffz95IfaT7BJRnxVJ55LFCpYqclnB5#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOPc9R9av/qZ1K7mt9e0JZiyAzatp1ty2WUYoo+vXgranh+Y+IO49K8NUIIDuz3QS5vA8TOp/Roo0BoP/4nbGSw=#012 create=True mode=0644 path=/tmp/ansible.f6_0a5bl state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:35 np0005634017 python3.9[70041]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.f6_0a5bl' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:36 np0005634017 python3.9[70196]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.f6_0a5bl state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:36 np0005634017 systemd[1]: session-14.scope: Deactivated successfully.
Feb 28 04:31:36 np0005634017 systemd[1]: session-14.scope: Consumed 3.074s CPU time.
Feb 28 04:31:36 np0005634017 systemd-logind[815]: Session 14 logged out. Waiting for processes to exit.
Feb 28 04:31:36 np0005634017 systemd-logind[815]: Removed session 14.
Feb 28 04:31:37 np0005634017 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 28 04:31:41 np0005634017 systemd-logind[815]: New session 15 of user zuul.
Feb 28 04:31:41 np0005634017 systemd[1]: Started Session 15 of User zuul.
Feb 28 04:31:43 np0005634017 python3.9[70376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:31:44 np0005634017 python3.9[70533]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 28 04:31:45 np0005634017 python3.9[70688]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:31:46 np0005634017 python3.9[70842]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:46 np0005634017 python3.9[70996]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:31:47 np0005634017 python3.9[71151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:31:48 np0005634017 python3.9[71307]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:31:48 np0005634017 systemd[1]: session-15.scope: Deactivated successfully.
Feb 28 04:31:48 np0005634017 systemd[1]: session-15.scope: Consumed 4.274s CPU time.
Feb 28 04:31:48 np0005634017 systemd-logind[815]: Session 15 logged out. Waiting for processes to exit.
Feb 28 04:31:48 np0005634017 systemd-logind[815]: Removed session 15.
Feb 28 04:31:54 np0005634017 systemd-logind[815]: New session 16 of user zuul.
Feb 28 04:31:54 np0005634017 systemd[1]: Started Session 16 of User zuul.
Feb 28 04:31:55 np0005634017 python3.9[71485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:31:56 np0005634017 python3.9[71642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:31:57 np0005634017 python3.9[71727]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 28 04:31:59 np0005634017 python3.9[71878]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:00 np0005634017 python3.9[72029]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 28 04:32:01 np0005634017 python3.9[72179]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:32:01 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 04:32:01 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 04:32:02 np0005634017 python3.9[72330]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/nova follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:32:02 np0005634017 systemd[1]: session-16.scope: Deactivated successfully.
Feb 28 04:32:02 np0005634017 systemd[1]: session-16.scope: Consumed 5.605s CPU time.
Feb 28 04:32:02 np0005634017 systemd-logind[815]: Session 16 logged out. Waiting for processes to exit.
Feb 28 04:32:02 np0005634017 systemd-logind[815]: Removed session 16.
Feb 28 04:32:09 np0005634017 systemd-logind[815]: New session 17 of user zuul.
Feb 28 04:32:09 np0005634017 systemd[1]: Started Session 17 of User zuul.
Feb 28 04:32:15 np0005634017 python3[73096]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:32:15 np0005634017 python3[73130]: ansible-ansible.legacy.dnf Invoked with name=['jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:32:17 np0005634017 python3[73157]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:32:19 np0005634017 python3[73214]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:32:23 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:32:23 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:32:24 np0005634017 python3[73336]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:24 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:32:24 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:32:24 np0005634017 systemd[1]: run-r36dda033e8ca482e9abb43869f900c0d.service: Deactivated successfully.
Feb 28 04:32:24 np0005634017 python3[73365]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:32:26 np0005634017 python3[73460]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:32:27 np0005634017 python3[73487]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:28 np0005634017 python3[73513]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:28 np0005634017 kernel: loop: module loaded
Feb 28 04:32:28 np0005634017 kernel: loop3: detected capacity change from 0 to 41943040
Feb 28 04:32:28 np0005634017 python3[73548]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:28 np0005634017 lvm[73551]: PV /dev/loop3 not used.
Feb 28 04:32:28 np0005634017 lvm[73553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:32:28 np0005634017 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb 28 04:32:28 np0005634017 lvm[73556]:  1 logical volume(s) in volume group "ceph_vg0" now active
Feb 28 04:32:28 np0005634017 lvm[73563]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:32:28 np0005634017 lvm[73563]: VG ceph_vg0 finished
Feb 28 04:32:28 np0005634017 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 28 04:32:29 np0005634017 python3[73641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:32:29 np0005634017 python3[73714]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271148.8818007-37158-258079480721126/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:30 np0005634017 python3[73764]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:32:30 np0005634017 systemd[1]: Reloading.
Feb 28 04:32:30 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:32:30 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:32:30 np0005634017 systemd[1]: Starting Ceph OSD losetup...
Feb 28 04:32:30 np0005634017 bash[73811]: /dev/loop3: [64513]:4329463 (/var/lib/ceph-osd-0.img)
Feb 28 04:32:30 np0005634017 systemd[1]: Finished Ceph OSD losetup.
Feb 28 04:32:30 np0005634017 lvm[73812]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:32:30 np0005634017 lvm[73812]: VG ceph_vg0 finished
Feb 28 04:32:31 np0005634017 python3[73838]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:32:31 np0005634017 chronyd[59448]: Selected source 216.232.132.102 (pool.ntp.org)
Feb 28 04:32:32 np0005634017 python3[73865]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:33 np0005634017 python3[73891]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:33 np0005634017 kernel: loop4: detected capacity change from 0 to 41943040
Feb 28 04:32:33 np0005634017 python3[73923]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:33 np0005634017 lvm[73926]: PV /dev/loop4 not used.
Feb 28 04:32:33 np0005634017 lvm[73928]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:32:33 np0005634017 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb 28 04:32:33 np0005634017 lvm[73936]:  1 logical volume(s) in volume group "ceph_vg1" now active
Feb 28 04:32:33 np0005634017 lvm[73939]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:32:33 np0005634017 lvm[73939]: VG ceph_vg1 finished
Feb 28 04:32:33 np0005634017 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb 28 04:32:34 np0005634017 python3[74017]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:32:34 np0005634017 python3[74090]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271153.8653417-37185-270542076799851/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:35 np0005634017 python3[74140]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:32:35 np0005634017 systemd[1]: Reloading.
Feb 28 04:32:35 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:32:35 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:32:35 np0005634017 systemd[1]: Starting Ceph OSD losetup...
Feb 28 04:32:35 np0005634017 bash[74189]: /dev/loop4: [64513]:5001544 (/var/lib/ceph-osd-1.img)
Feb 28 04:32:35 np0005634017 systemd[1]: Finished Ceph OSD losetup.
Feb 28 04:32:35 np0005634017 lvm[74190]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:32:35 np0005634017 lvm[74190]: VG ceph_vg1 finished
Feb 28 04:32:35 np0005634017 python3[74216]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 28 04:32:37 np0005634017 python3[74245]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:37 np0005634017 python3[74271]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:37 np0005634017 kernel: loop5: detected capacity change from 0 to 41943040
Feb 28 04:32:38 np0005634017 python3[74303]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:38 np0005634017 lvm[74306]: PV /dev/loop5 not used.
Feb 28 04:32:38 np0005634017 lvm[74308]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:32:38 np0005634017 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb 28 04:32:38 np0005634017 lvm[74313]:  1 logical volume(s) in volume group "ceph_vg2" now active
Feb 28 04:32:38 np0005634017 lvm[74319]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:32:38 np0005634017 lvm[74319]: VG ceph_vg2 finished
Feb 28 04:32:38 np0005634017 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb 28 04:32:38 np0005634017 python3[74397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:32:39 np0005634017 python3[74470]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271158.458289-37212-113978522695433/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:39 np0005634017 python3[74520]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:32:39 np0005634017 systemd[1]: Reloading.
Feb 28 04:32:39 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:32:39 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:32:39 np0005634017 systemd[1]: Starting Ceph OSD losetup...
Feb 28 04:32:39 np0005634017 bash[74566]: /dev/loop5: [64513]:5018996 (/var/lib/ceph-osd-2.img)
Feb 28 04:32:39 np0005634017 systemd[1]: Finished Ceph OSD losetup.
Feb 28 04:32:39 np0005634017 lvm[74567]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:32:39 np0005634017 lvm[74567]: VG ceph_vg2 finished
Feb 28 04:32:41 np0005634017 python3[74591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:32:43 np0005634017 python3[74684]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:32:44 np0005634017 python3[74723]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:44 np0005634017 python3[74749]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:45 np0005634017 python3[74827]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:32:45 np0005634017 python3[74900]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271165.2543113-37347-96374127532784/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:46 np0005634017 python3[75002]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:32:47 np0005634017 python3[75075]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271166.3933277-37367-10920399104885/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:32:47 np0005634017 python3[75125]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:47 np0005634017 python3[75153]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:48 np0005634017 python3[75181]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:48 np0005634017 python3[75207]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:32:48 np0005634017 python3[75233]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:32:49 np0005634017 systemd[1]: Created slice User Slice of UID 42477.
Feb 28 04:32:49 np0005634017 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 28 04:32:49 np0005634017 systemd-logind[815]: New session 18 of user ceph-admin.
Feb 28 04:32:49 np0005634017 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 28 04:32:49 np0005634017 systemd[1]: Starting User Manager for UID 42477...
Feb 28 04:32:49 np0005634017 systemd[75241]: Queued start job for default target Main User Target.
Feb 28 04:32:49 np0005634017 systemd[75241]: Created slice User Application Slice.
Feb 28 04:32:49 np0005634017 systemd[75241]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 28 04:32:49 np0005634017 systemd[75241]: Started Daily Cleanup of User's Temporary Directories.
Feb 28 04:32:49 np0005634017 systemd[75241]: Reached target Paths.
Feb 28 04:32:49 np0005634017 systemd[75241]: Reached target Timers.
Feb 28 04:32:49 np0005634017 systemd[75241]: Starting D-Bus User Message Bus Socket...
Feb 28 04:32:49 np0005634017 systemd[75241]: Starting Create User's Volatile Files and Directories...
Feb 28 04:32:49 np0005634017 systemd[75241]: Listening on D-Bus User Message Bus Socket.
Feb 28 04:32:49 np0005634017 systemd[75241]: Reached target Sockets.
Feb 28 04:32:49 np0005634017 systemd[75241]: Finished Create User's Volatile Files and Directories.
Feb 28 04:32:49 np0005634017 systemd[75241]: Reached target Basic System.
Feb 28 04:32:49 np0005634017 systemd[75241]: Reached target Main User Target.
Feb 28 04:32:49 np0005634017 systemd[75241]: Startup finished in 119ms.
Feb 28 04:32:49 np0005634017 systemd[1]: Started User Manager for UID 42477.
Feb 28 04:32:49 np0005634017 systemd[1]: Started Session 18 of User ceph-admin.
Feb 28 04:32:49 np0005634017 systemd[1]: session-18.scope: Deactivated successfully.
Feb 28 04:32:49 np0005634017 systemd-logind[815]: Session 18 logged out. Waiting for processes to exit.
Feb 28 04:32:49 np0005634017 systemd-logind[815]: Removed session 18.
Feb 28 04:32:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:32:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:32:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-compat945079854-lower\x2dmapped.mount: Deactivated successfully.
Feb 28 04:32:59 np0005634017 systemd[1]: Stopping User Manager for UID 42477...
Feb 28 04:32:59 np0005634017 systemd[75241]: Activating special unit Exit the Session...
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped target Main User Target.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped target Basic System.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped target Paths.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped target Sockets.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped target Timers.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 28 04:32:59 np0005634017 systemd[75241]: Closed D-Bus User Message Bus Socket.
Feb 28 04:32:59 np0005634017 systemd[75241]: Stopped Create User's Volatile Files and Directories.
Feb 28 04:32:59 np0005634017 systemd[75241]: Removed slice User Application Slice.
Feb 28 04:32:59 np0005634017 systemd[75241]: Reached target Shutdown.
Feb 28 04:32:59 np0005634017 systemd[75241]: Finished Exit the Session.
Feb 28 04:32:59 np0005634017 systemd[75241]: Reached target Exit the Session.
Feb 28 04:32:59 np0005634017 systemd[1]: user@42477.service: Deactivated successfully.
Feb 28 04:32:59 np0005634017 systemd[1]: Stopped User Manager for UID 42477.
Feb 28 04:32:59 np0005634017 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb 28 04:32:59 np0005634017 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb 28 04:32:59 np0005634017 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb 28 04:32:59 np0005634017 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb 28 04:32:59 np0005634017 systemd[1]: Removed slice User Slice of UID 42477.
Feb 28 04:33:05 np0005634017 podman[75335]: 2026-02-28 09:33:05.500387065 +0000 UTC m=+15.796398786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.563293915 +0000 UTC m=+0.038940433 container create 4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35 (image=quay.io/ceph/ceph:v20, name=distracted_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:05 np0005634017 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 28 04:33:05 np0005634017 systemd[1]: Started libpod-conmon-4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35.scope.
Feb 28 04:33:05 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.54583665 +0000 UTC m=+0.021483188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.65181236 +0000 UTC m=+0.127458878 container init 4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35 (image=quay.io/ceph/ceph:v20, name=distracted_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.659094347 +0000 UTC m=+0.134740855 container start 4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35 (image=quay.io/ceph/ceph:v20, name=distracted_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.663559193 +0000 UTC m=+0.139205711 container attach 4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35 (image=quay.io/ceph/ceph:v20, name=distracted_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 04:33:05 np0005634017 distracted_swartz[75411]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 28 04:33:05 np0005634017 systemd[1]: libpod-4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35.scope: Deactivated successfully.
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.761925108 +0000 UTC m=+0.237571626 container died 4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35 (image=quay.io/ceph/ceph:v20, name=distracted_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1c4b1a90f0b5a848984303266e73dd243b8c7db171872af7e89a9f84a5b57907-merged.mount: Deactivated successfully.
Feb 28 04:33:05 np0005634017 podman[75394]: 2026-02-28 09:33:05.790880027 +0000 UTC m=+0.266526535 container remove 4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35 (image=quay.io/ceph/ceph:v20, name=distracted_swartz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:33:05 np0005634017 systemd[1]: libpod-conmon-4b0daf3dd2361715c0a5549a8ba99239e26f32da1625222592d3d76c9be2af35.scope: Deactivated successfully.
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.839118623 +0000 UTC m=+0.031961396 container create 7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2 (image=quay.io/ceph/ceph:v20, name=charming_chaplygin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 04:33:05 np0005634017 systemd[1]: Started libpod-conmon-7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2.scope.
Feb 28 04:33:05 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.894162371 +0000 UTC m=+0.087005134 container init 7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2 (image=quay.io/ceph/ceph:v20, name=charming_chaplygin, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.89872482 +0000 UTC m=+0.091567593 container start 7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2 (image=quay.io/ceph/ceph:v20, name=charming_chaplygin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:05 np0005634017 charming_chaplygin[75444]: 167 167
Feb 28 04:33:05 np0005634017 systemd[1]: libpod-7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2.scope: Deactivated successfully.
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.902323332 +0000 UTC m=+0.095166105 container attach 7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2 (image=quay.io/ceph/ceph:v20, name=charming_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.90261833 +0000 UTC m=+0.095461103 container died 7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2 (image=quay.io/ceph/ceph:v20, name=charming_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.823716717 +0000 UTC m=+0.016559510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:05 np0005634017 podman[75428]: 2026-02-28 09:33:05.935350647 +0000 UTC m=+0.128193420 container remove 7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2 (image=quay.io/ceph/ceph:v20, name=charming_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:33:05 np0005634017 systemd[1]: libpod-conmon-7a1c2466458e48cfe44368d42d758a6b2a2fb983afb0c8051e8abbbf17bf76b2.scope: Deactivated successfully.
Feb 28 04:33:05 np0005634017 podman[75461]: 2026-02-28 09:33:05.98633714 +0000 UTC m=+0.034865318 container create c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906 (image=quay.io/ceph/ceph:v20, name=brave_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:33:06 np0005634017 systemd[1]: Started libpod-conmon-c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906.scope.
Feb 28 04:33:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:06 np0005634017 podman[75461]: 2026-02-28 09:33:06.031776237 +0000 UTC m=+0.080304395 container init c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906 (image=quay.io/ceph/ceph:v20, name=brave_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:06 np0005634017 podman[75461]: 2026-02-28 09:33:06.034644318 +0000 UTC m=+0.083172476 container start c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906 (image=quay.io/ceph/ceph:v20, name=brave_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:33:06 np0005634017 podman[75461]: 2026-02-28 09:33:06.036866911 +0000 UTC m=+0.085395099 container attach c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906 (image=quay.io/ceph/ceph:v20, name=brave_black, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 04:33:06 np0005634017 brave_black[75478]: AQBStqJpcH33AhAAjYt2hkEa9YR+UqqcKHwiTg==
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75461]: 2026-02-28 09:33:06.051790323 +0000 UTC m=+0.100318481 container died c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906 (image=quay.io/ceph/ceph:v20, name=brave_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:06 np0005634017 podman[75461]: 2026-02-28 09:33:05.972240011 +0000 UTC m=+0.020768199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:06 np0005634017 podman[75461]: 2026-02-28 09:33:06.077282435 +0000 UTC m=+0.125810613 container remove c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906 (image=quay.io/ceph/ceph:v20, name=brave_black, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-conmon-c64f45da49f024c64056d108e4e022c2243a27bb5d69502e8ab3ddf6e7434906.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.127502826 +0000 UTC m=+0.036633328 container create 0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25 (image=quay.io/ceph/ceph:v20, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:33:06 np0005634017 systemd[1]: Started libpod-conmon-0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25.scope.
Feb 28 04:33:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.190591622 +0000 UTC m=+0.099722104 container init 0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25 (image=quay.io/ceph/ceph:v20, name=practical_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.195157672 +0000 UTC m=+0.104288144 container start 0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25 (image=quay.io/ceph/ceph:v20, name=practical_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.198477316 +0000 UTC m=+0.107607828 container attach 0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25 (image=quay.io/ceph/ceph:v20, name=practical_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.108023125 +0000 UTC m=+0.017153647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:06 np0005634017 practical_rosalind[75514]: AQBStqJptKxJDRAAhRZEoYFdIP2EQFtoanZ7JQ==
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.226116518 +0000 UTC m=+0.135247000 container died 0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25 (image=quay.io/ceph/ceph:v20, name=practical_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:33:06 np0005634017 podman[75497]: 2026-02-28 09:33:06.263205838 +0000 UTC m=+0.172336310 container remove 0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25 (image=quay.io/ceph/ceph:v20, name=practical_rosalind, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-conmon-0ba1fdd942d4afeee87307cb24c47ba86d71db9ebbef64a440f24357f2fd7d25.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.343623354 +0000 UTC m=+0.057152299 container create b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405 (image=quay.io/ceph/ceph:v20, name=sweet_sinoussi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.317835144 +0000 UTC m=+0.031364159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:06 np0005634017 systemd[1]: Started libpod-conmon-b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405.scope.
Feb 28 04:33:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.521096368 +0000 UTC m=+0.234625283 container init b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405 (image=quay.io/ceph/ceph:v20, name=sweet_sinoussi, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.525588415 +0000 UTC m=+0.239117330 container start b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405 (image=quay.io/ceph/ceph:v20, name=sweet_sinoussi, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.529747213 +0000 UTC m=+0.243276128 container attach b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405 (image=quay.io/ceph/ceph:v20, name=sweet_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:33:06 np0005634017 sweet_sinoussi[75547]: AQBStqJpZ8Y9IBAAkACAfgfzgSebhx+5Mf8elA==
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.543536553 +0000 UTC m=+0.257065468 container died b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405 (image=quay.io/ceph/ceph:v20, name=sweet_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:06 np0005634017 systemd[1]: var-lib-containers-storage-overlay-50b787cfe943b2e019201759ebbd4078e9186a7ea5cb09b02be88ce83134f2d1-merged.mount: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75530]: 2026-02-28 09:33:06.580893551 +0000 UTC m=+0.294422456 container remove b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405 (image=quay.io/ceph/ceph:v20, name=sweet_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:06 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-conmon-b87fbf3055d978fdfcea2933dce8af7c07ba02276a761eb031f7fbd6a3866405.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.647806865 +0000 UTC m=+0.044769098 container create 2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8 (image=quay.io/ceph/ceph:v20, name=agitated_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:06 np0005634017 systemd[1]: Started libpod-conmon-2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8.scope.
Feb 28 04:33:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb13a9effe489e80c0ca02a74ced7a5e36ce9ce0ad5496c3e64048d4851f08d1/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.699532199 +0000 UTC m=+0.096494432 container init 2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8 (image=quay.io/ceph/ceph:v20, name=agitated_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.704183081 +0000 UTC m=+0.101145344 container start 2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8 (image=quay.io/ceph/ceph:v20, name=agitated_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.707722631 +0000 UTC m=+0.104684894 container attach 2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8 (image=quay.io/ceph/ceph:v20, name=agitated_herschel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.632304036 +0000 UTC m=+0.029266289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:06 np0005634017 agitated_herschel[75583]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb 28 04:33:06 np0005634017 agitated_herschel[75583]: setting min_mon_release = tentacle
Feb 28 04:33:06 np0005634017 agitated_herschel[75583]: /usr/bin/monmaptool: set fsid to 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:06 np0005634017 agitated_herschel[75583]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.733947554 +0000 UTC m=+0.130909787 container died 2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8 (image=quay.io/ceph/ceph:v20, name=agitated_herschel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:06 np0005634017 podman[75567]: 2026-02-28 09:33:06.765787755 +0000 UTC m=+0.162749988 container remove 2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8 (image=quay.io/ceph/ceph:v20, name=agitated_herschel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:06 np0005634017 systemd[1]: libpod-conmon-2e4d7f1a20af1eb2608313cab6448206d95a281daa6f467f801c52f7627d32a8.scope: Deactivated successfully.
Feb 28 04:33:06 np0005634017 podman[75602]: 2026-02-28 09:33:06.814419782 +0000 UTC m=+0.034469627 container create 40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c (image=quay.io/ceph/ceph:v20, name=tender_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:06 np0005634017 systemd[1]: Started libpod-conmon-40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c.scope.
Feb 28 04:33:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deebcf8ae320e44004f40fc6dc4e9bdea93c031f29f7944b1bae470083c202bc/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deebcf8ae320e44004f40fc6dc4e9bdea93c031f29f7944b1bae470083c202bc/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deebcf8ae320e44004f40fc6dc4e9bdea93c031f29f7944b1bae470083c202bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deebcf8ae320e44004f40fc6dc4e9bdea93c031f29f7944b1bae470083c202bc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:06 np0005634017 podman[75602]: 2026-02-28 09:33:06.878224888 +0000 UTC m=+0.098274753 container init 40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c (image=quay.io/ceph/ceph:v20, name=tender_mcclintock, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 04:33:06 np0005634017 podman[75602]: 2026-02-28 09:33:06.881926223 +0000 UTC m=+0.101976068 container start 40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c (image=quay.io/ceph/ceph:v20, name=tender_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:06 np0005634017 podman[75602]: 2026-02-28 09:33:06.886303457 +0000 UTC m=+0.106353342 container attach 40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c (image=quay.io/ceph/ceph:v20, name=tender_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 04:33:06 np0005634017 podman[75602]: 2026-02-28 09:33:06.797612096 +0000 UTC m=+0.017662031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:07 np0005634017 systemd[1]: libpod-40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c.scope: Deactivated successfully.
Feb 28 04:33:07 np0005634017 podman[75602]: 2026-02-28 09:33:07.018298913 +0000 UTC m=+0.238348788 container died 40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c (image=quay.io/ceph/ceph:v20, name=tender_mcclintock, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb 28 04:33:07 np0005634017 podman[75602]: 2026-02-28 09:33:07.061552047 +0000 UTC m=+0.281601922 container remove 40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c (image=quay.io/ceph/ceph:v20, name=tender_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 04:33:07 np0005634017 systemd[1]: libpod-conmon-40cb52ad43a125d2b37e428065a549e98c44d5cdbf6359effd0ef00722e0848c.scope: Deactivated successfully.
Feb 28 04:33:07 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:07 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:07 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:07 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:07 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:07 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:07 np0005634017 systemd[1]: Reached target All Ceph clusters and services.
Feb 28 04:33:07 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:07 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:07 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:07 np0005634017 systemd[1]: Reached target Ceph cluster 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:07 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:07 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:07 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:08 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:08 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:08 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:08 np0005634017 systemd[1]: Created slice Slice /system/ceph-8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:08 np0005634017 systemd[1]: Reached target System Time Set.
Feb 28 04:33:08 np0005634017 systemd[1]: Reached target System Time Synchronized.
Feb 28 04:33:08 np0005634017 systemd[1]: Starting Ceph mon.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:33:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:33:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:33:08 np0005634017 podman[75932]: 2026-02-28 09:33:08.533583118 +0000 UTC m=+0.030136874 container create f7bdb7cb2719a85dedff21d9512b1167f0b54f3a08352a8159da1f21e7d9a1b0 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415fc1dbf781b071d231045a24de38304bdd1706ecb2c0508be8c7d8a30cd42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415fc1dbf781b071d231045a24de38304bdd1706ecb2c0508be8c7d8a30cd42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415fc1dbf781b071d231045a24de38304bdd1706ecb2c0508be8c7d8a30cd42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415fc1dbf781b071d231045a24de38304bdd1706ecb2c0508be8c7d8a30cd42/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 podman[75932]: 2026-02-28 09:33:08.583198202 +0000 UTC m=+0.079751978 container init f7bdb7cb2719a85dedff21d9512b1167f0b54f3a08352a8159da1f21e7d9a1b0 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:08 np0005634017 podman[75932]: 2026-02-28 09:33:08.587832454 +0000 UTC m=+0.084386210 container start f7bdb7cb2719a85dedff21d9512b1167f0b54f3a08352a8159da1f21e7d9a1b0 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:33:08 np0005634017 bash[75932]: f7bdb7cb2719a85dedff21d9512b1167f0b54f3a08352a8159da1f21e7d9a1b0
Feb 28 04:33:08 np0005634017 podman[75932]: 2026-02-28 09:33:08.520204509 +0000 UTC m=+0.016758285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:08 np0005634017 systemd[1]: Started Ceph mon.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: pidfile_write: ignore empty --pid-file
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: load: jerasure load: lrc 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Git sha 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: DB SUMMARY
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: DB Session ID:  5TX5JARMD0FBP3TTHR7T
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                                     Options.env: 0x55a270127440
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                                Options.info_log: 0x55a270f313e0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                                 Options.wal_dir: 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                    Options.write_buffer_manager: 0x55a270eb0140
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                               Options.row_cache: None
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                              Options.wal_filter: None
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.wal_compression: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.max_background_jobs: 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.max_total_wal_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:       Options.compaction_readahead_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Compression algorithms supported:
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kZSTD supported: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kXpressCompression supported: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kBZip2Compression supported: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kLZ4Compression supported: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kZlibCompression supported: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: #011kSnappyCompression supported: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:           Options.merge_operator: 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:        Options.compaction_filter: None
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a270ebc600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a270ea18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:        Options.write_buffer_size: 33554432
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:  Options.max_write_buffer_number: 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.compression: NoCompression
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.num_levels: 7
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 291b26fc-4235-4453-9c96-97f924605147
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271188633099, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271188635035, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "5TX5JARMD0FBP3TTHR7T", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271188635129, "job": 1, "event": "recovery_finished"}
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a270ecee00
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: DB pointer 0x55a27101a000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a270ea18d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@-1(???) e0 preinit fsid 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(probing) e0 win_standalone_election
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : fsid 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : last_changed 2026-02-28T09:33:06.729878+0000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : created 2026-02-28T09:33:06.729878+0000
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-28T09:33:06.911398Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026,kernel_version=5.14.0-686.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864280,os=Linux}
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).mds e1 new map
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-02-28T09:33:08:670336+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : fsmap 
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mkfs 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 28 04:33:08 np0005634017 podman[75953]: 2026-02-28 09:33:08.700360659 +0000 UTC m=+0.065422023 container create 974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836 (image=quay.io/ceph/ceph:v20, name=trusting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:08 np0005634017 systemd[1]: Started libpod-conmon-974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836.scope.
Feb 28 04:33:08 np0005634017 podman[75953]: 2026-02-28 09:33:08.670655018 +0000 UTC m=+0.035716432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117ec5cf1757f10dc36fa055724caa7bb016ede23830417f2715583b387bd44d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117ec5cf1757f10dc36fa055724caa7bb016ede23830417f2715583b387bd44d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117ec5cf1757f10dc36fa055724caa7bb016ede23830417f2715583b387bd44d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:08 np0005634017 podman[75953]: 2026-02-28 09:33:08.795689088 +0000 UTC m=+0.160750462 container init 974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836 (image=quay.io/ceph/ceph:v20, name=trusting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:08 np0005634017 podman[75953]: 2026-02-28 09:33:08.801869393 +0000 UTC m=+0.166930757 container start 974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836 (image=quay.io/ceph/ceph:v20, name=trusting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:08 np0005634017 podman[75953]: 2026-02-28 09:33:08.805603398 +0000 UTC m=+0.170664822 container attach 974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836 (image=quay.io/ceph/ceph:v20, name=trusting_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 04:33:08 np0005634017 ceph-mon[75952]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301208021' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:  cluster:
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    id:     8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    health: HEALTH_OK
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]: 
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:  services:
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    mon: 1 daemons, quorum compute-0 (age 0.319624s) [leader: compute-0]
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    mgr: no daemons active
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    osd: 0 osds: 0 up, 0 in
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]: 
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:  data:
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    pools:   0 pools, 0 pgs
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    objects: 0 objects, 0 B
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    usage:   0 B used, 0 B / 0 B avail
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]:    pgs:     
Feb 28 04:33:08 np0005634017 trusting_babbage[76008]: 
Feb 28 04:33:09 np0005634017 systemd[1]: libpod-974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836.scope: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[75953]: 2026-02-28 09:33:09.005535188 +0000 UTC m=+0.370596552 container died 974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836 (image=quay.io/ceph/ceph:v20, name=trusting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-117ec5cf1757f10dc36fa055724caa7bb016ede23830417f2715583b387bd44d-merged.mount: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[75953]: 2026-02-28 09:33:09.046440416 +0000 UTC m=+0.411501780 container remove 974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836 (image=quay.io/ceph/ceph:v20, name=trusting_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:09 np0005634017 systemd[1]: libpod-conmon-974ddf2036474ef6b0eb4550d6b96fb7acaa32f96725ec14416c187148cdd836.scope: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.107310818 +0000 UTC m=+0.040711762 container create 42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401 (image=quay.io/ceph/ceph:v20, name=magical_elbakyan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:33:09 np0005634017 systemd[1]: Started libpod-conmon-42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401.scope.
Feb 28 04:33:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804f9f5f74388995c06ad53b7149033dc01577821209ebdc91a69269952caf8a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804f9f5f74388995c06ad53b7149033dc01577821209ebdc91a69269952caf8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804f9f5f74388995c06ad53b7149033dc01577821209ebdc91a69269952caf8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804f9f5f74388995c06ad53b7149033dc01577821209ebdc91a69269952caf8a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.090444012 +0000 UTC m=+0.023845006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.208347838 +0000 UTC m=+0.141748822 container init 42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401 (image=quay.io/ceph/ceph:v20, name=magical_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.213106223 +0000 UTC m=+0.146507177 container start 42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401 (image=quay.io/ceph/ceph:v20, name=magical_elbakyan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.21653771 +0000 UTC m=+0.149938704 container attach 42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401 (image=quay.io/ceph/ceph:v20, name=magical_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817426114' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817426114' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 28 04:33:09 np0005634017 magical_elbakyan[76063]: 
Feb 28 04:33:09 np0005634017 magical_elbakyan[76063]: [global]
Feb 28 04:33:09 np0005634017 magical_elbakyan[76063]: #011fsid = 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:09 np0005634017 magical_elbakyan[76063]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 28 04:33:09 np0005634017 magical_elbakyan[76063]: #011osd_crush_chooseleaf_type = 0
Feb 28 04:33:09 np0005634017 systemd[1]: libpod-42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401.scope: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.43341742 +0000 UTC m=+0.366818384 container died 42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401 (image=quay.io/ceph/ceph:v20, name=magical_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:09 np0005634017 podman[76046]: 2026-02-28 09:33:09.46732864 +0000 UTC m=+0.400729594 container remove 42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401 (image=quay.io/ceph/ceph:v20, name=magical_elbakyan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:33:09 np0005634017 systemd[1]: libpod-conmon-42aac3f42992f24add979a3c9128bf9a93f35aae37bba3d57d9213463d849401.scope: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.527904134 +0000 UTC m=+0.045034536 container create c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e (image=quay.io/ceph/ceph:v20, name=practical_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-804f9f5f74388995c06ad53b7149033dc01577821209ebdc91a69269952caf8a-merged.mount: Deactivated successfully.
Feb 28 04:33:09 np0005634017 systemd[1]: Started libpod-conmon-c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e.scope.
Feb 28 04:33:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde2fee7317735806773e3d985802bfd7852253af413d3921d2f10834572822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde2fee7317735806773e3d985802bfd7852253af413d3921d2f10834572822/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde2fee7317735806773e3d985802bfd7852253af413d3921d2f10834572822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde2fee7317735806773e3d985802bfd7852253af413d3921d2f10834572822/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.504039509 +0000 UTC m=+0.021169951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.6294772 +0000 UTC m=+0.146607572 container init c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e (image=quay.io/ceph/ceph:v20, name=practical_carver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.636160949 +0000 UTC m=+0.153291341 container start c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e (image=quay.io/ceph/ceph:v20, name=practical_carver, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.641318435 +0000 UTC m=+0.158448817 container attach c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e (image=quay.io/ceph/ceph:v20, name=practical_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: from='client.? 192.168.122.100:0/2817426114' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: from='client.? 192.168.122.100:0/2817426114' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:09 np0005634017 ceph-mon[75952]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898972773' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:09 np0005634017 systemd[1]: libpod-c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e.scope: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.839431343 +0000 UTC m=+0.356561715 container died c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e (image=quay.io/ceph/ceph:v20, name=practical_carver, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5bde2fee7317735806773e3d985802bfd7852253af413d3921d2f10834572822-merged.mount: Deactivated successfully.
Feb 28 04:33:09 np0005634017 podman[76101]: 2026-02-28 09:33:09.874719442 +0000 UTC m=+0.391849804 container remove c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e (image=quay.io/ceph/ceph:v20, name=practical_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:09 np0005634017 systemd[1]: libpod-conmon-c914c0d3fe00a4b1ff71fe84753be71fbfd7ec2da50cf4a56216e91131b4cd2e.scope: Deactivated successfully.
Feb 28 04:33:09 np0005634017 systemd[1]: Stopping Ceph mon.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:33:10 np0005634017 ceph-mon[75952]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 28 04:33:10 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 28 04:33:10 np0005634017 ceph-mon[75952]: mon.compute-0@0(leader) e1 shutdown
Feb 28 04:33:10 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0[75948]: 2026-02-28T09:33:10.055+0000 7f1fd56f0640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 28 04:33:10 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0[75948]: 2026-02-28T09:33:10.055+0000 7f1fd56f0640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 28 04:33:10 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 28 04:33:10 np0005634017 ceph-mon[75952]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 28 04:33:10 np0005634017 podman[76185]: 2026-02-28 09:33:10.199770714 +0000 UTC m=+0.176656782 container died f7bdb7cb2719a85dedff21d9512b1167f0b54f3a08352a8159da1f21e7d9a1b0 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1415fc1dbf781b071d231045a24de38304bdd1706ecb2c0508be8c7d8a30cd42-merged.mount: Deactivated successfully.
Feb 28 04:33:10 np0005634017 podman[76185]: 2026-02-28 09:33:10.236173024 +0000 UTC m=+0.213059042 container remove f7bdb7cb2719a85dedff21d9512b1167f0b54f3a08352a8159da1f21e7d9a1b0 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 04:33:10 np0005634017 bash[76185]: ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0
Feb 28 04:33:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:33:10 np0005634017 systemd[1]: ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mon.compute-0.service: Deactivated successfully.
Feb 28 04:33:10 np0005634017 systemd[1]: Stopped Ceph mon.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:10 np0005634017 systemd[1]: Starting Ceph mon.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:33:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 28 04:33:10 np0005634017 podman[76286]: 2026-02-28 09:33:10.600290022 +0000 UTC m=+0.042450183 container create 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2029a30030671324f81cc3abd2ed8ec94a3526e9e66fc9692f04ba80cd20ae60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2029a30030671324f81cc3abd2ed8ec94a3526e9e66fc9692f04ba80cd20ae60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2029a30030671324f81cc3abd2ed8ec94a3526e9e66fc9692f04ba80cd20ae60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2029a30030671324f81cc3abd2ed8ec94a3526e9e66fc9692f04ba80cd20ae60/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 podman[76286]: 2026-02-28 09:33:10.579808742 +0000 UTC m=+0.021968883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:10 np0005634017 podman[76286]: 2026-02-28 09:33:10.678271999 +0000 UTC m=+0.120432130 container init 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:33:10 np0005634017 podman[76286]: 2026-02-28 09:33:10.682872779 +0000 UTC m=+0.125032900 container start 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:33:10 np0005634017 bash[76286]: 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec
Feb 28 04:33:10 np0005634017 systemd[1]: Started Ceph mon.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: pidfile_write: ignore empty --pid-file
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: load: jerasure load: lrc 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Git sha 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: DB SUMMARY
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: DB Session ID:  K5U2AV5NM18630LWZTNR
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60235 ; 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                                     Options.env: 0x555ca9f96440
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                                Options.info_log: 0x555caad7fe80
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                                 Options.wal_dir: 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                    Options.write_buffer_manager: 0x555caadca140
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                               Options.row_cache: None
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                              Options.wal_filter: None
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.wal_compression: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.max_background_jobs: 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.max_total_wal_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:       Options.compaction_readahead_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Compression algorithms supported:
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kZSTD supported: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kXpressCompression supported: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kBZip2Compression supported: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kLZ4Compression supported: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kZlibCompression supported: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kLZ4HCCompression supported: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         kSnappyCompression supported: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:           Options.merge_operator: 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:        Options.compaction_filter: None
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555caadd6a00)
                                                 cache_index_and_filter_blocks: 1
                                                 cache_index_and_filter_blocks_with_high_priority: 0
                                                 pin_l0_filter_and_index_blocks_in_cache: 0
                                                 pin_top_level_index_and_filter: 1
                                                 index_type: 0
                                                 data_block_index_type: 0
                                                 index_shortening: 1
                                                 data_block_hash_table_util_ratio: 0.750000
                                                 checksum: 4
                                                 no_block_cache: 0
                                                 block_cache: 0x555caadbb8d0
                                                 block_cache_name: BinnedLRUCache
                                                 block_cache_options:
                                                   capacity : 536870912
                                                   num_shard_bits : 4
                                                   strict_capacity_limit : 0
                                                   high_pri_pool_ratio: 0.000
                                                 block_cache_compressed: (nil)
                                                 persistent_cache: (nil)
                                                 block_size: 4096
                                                 block_size_deviation: 10
                                                 block_restart_interval: 16
                                                 index_block_restart_interval: 1
                                                 metadata_block_size: 4096
                                                 partition_filters: 0
                                                 use_delta_encoding: 1
                                                 filter_policy: bloomfilter
                                                 whole_key_filtering: 1
                                                 verify_compression: 0
                                                 read_amp_bytes_per_bit: 0
                                                 format_version: 5
                                                 enable_index_compression: 1
                                                 block_align: 0
                                                 max_auto_readahead_size: 262144
                                                 prepopulate_block_cache: 0
                                                 initial_auto_readahead_size: 8192
                                                 num_file_reads_for_auto_readahead: 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:        Options.write_buffer_size: 33554432
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:  Options.max_write_buffer_number: 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.compression: NoCompression
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.num_levels: 7
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 291b26fc-4235-4453-9c96-97f924605147
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271190729412, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271190735178, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59956, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58434, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55786, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271190, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271190735280, "job": 1, "event": "recovery_finished"}
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555caade8e00
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: DB pointer 0x555caaf32000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.81 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.81 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 512.00 MB usage: 26.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,25.61 KB,0.0048846%) FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???) e1 preinit fsid 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).mds e1 new map
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-02-28T09:33:08:670336+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : fsid 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : last_changed 2026-02-28T09:33:06.729878+0000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : created 2026-02-28T09:33:06.729878+0000
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : fsmap 
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 28 04:33:10 np0005634017 podman[76305]: 2026-02-28 09:33:10.756988087 +0000 UTC m=+0.046069425 container create 65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251 (image=quay.io/ceph/ceph:v20, name=blissful_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 28 04:33:10 np0005634017 systemd[1]: Started libpod-conmon-65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251.scope.
Feb 28 04:33:10 np0005634017 ceph-mon[76304]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 28 04:33:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:10 np0005634017 podman[76305]: 2026-02-28 09:33:10.740318366 +0000 UTC m=+0.029399754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aed52149e8e7e73be28f55989062f901ce0fce7a77f9b770abb1484cab7ea4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aed52149e8e7e73be28f55989062f901ce0fce7a77f9b770abb1484cab7ea4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aed52149e8e7e73be28f55989062f901ce0fce7a77f9b770abb1484cab7ea4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:10 np0005634017 podman[76305]: 2026-02-28 09:33:10.854000874 +0000 UTC m=+0.143082272 container init 65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251 (image=quay.io/ceph/ceph:v20, name=blissful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 04:33:10 np0005634017 podman[76305]: 2026-02-28 09:33:10.862667589 +0000 UTC m=+0.151748967 container start 65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251 (image=quay.io/ceph/ceph:v20, name=blissful_mestorf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:10 np0005634017 podman[76305]: 2026-02-28 09:33:10.86623011 +0000 UTC m=+0.155311488 container attach 65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251 (image=quay.io/ceph/ceph:v20, name=blissful_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 04:33:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb 28 04:33:11 np0005634017 systemd[1]: libpod-65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251.scope: Deactivated successfully.
Feb 28 04:33:11 np0005634017 podman[76305]: 2026-02-28 09:33:11.101458109 +0000 UTC m=+0.390539457 container died 65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251 (image=quay.io/ceph/ceph:v20, name=blissful_mestorf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:11 np0005634017 podman[76305]: 2026-02-28 09:33:11.138103166 +0000 UTC m=+0.427184544 container remove 65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251 (image=quay.io/ceph/ceph:v20, name=blissful_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:33:11 np0005634017 systemd[1]: libpod-conmon-65fecfb1868042587d631bafb63f6748e949a93c932b7d0589dbb5c30cebb251.scope: Deactivated successfully.
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.197518288 +0000 UTC m=+0.038160131 container create 9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d (image=quay.io/ceph/ceph:v20, name=charming_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 04:33:11 np0005634017 systemd[1]: Started libpod-conmon-9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d.scope.
Feb 28 04:33:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b63091c2f1ed8dffe9bd234250e8b14beadf2ef3cbfa749aeaeb3ec8e1ba2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b63091c2f1ed8dffe9bd234250e8b14beadf2ef3cbfa749aeaeb3ec8e1ba2c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b63091c2f1ed8dffe9bd234250e8b14beadf2ef3cbfa749aeaeb3ec8e1ba2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.180886997 +0000 UTC m=+0.021528890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.292198958 +0000 UTC m=+0.132840811 container init 9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d (image=quay.io/ceph/ceph:v20, name=charming_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.299555047 +0000 UTC m=+0.140196900 container start 9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d (image=quay.io/ceph/ceph:v20, name=charming_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.30392162 +0000 UTC m=+0.144563493 container attach 9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d (image=quay.io/ceph/ceph:v20, name=charming_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb 28 04:33:11 np0005634017 systemd[1]: libpod-9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d.scope: Deactivated successfully.
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.533231521 +0000 UTC m=+0.373873364 container died 9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d (image=quay.io/ceph/ceph:v20, name=charming_khorana, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 04:33:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a1b63091c2f1ed8dffe9bd234250e8b14beadf2ef3cbfa749aeaeb3ec8e1ba2c-merged.mount: Deactivated successfully.
Feb 28 04:33:11 np0005634017 podman[76398]: 2026-02-28 09:33:11.572471502 +0000 UTC m=+0.413113355 container remove 9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d (image=quay.io/ceph/ceph:v20, name=charming_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:11 np0005634017 systemd[1]: libpod-conmon-9bba262e92390034a2a9685d3151a11036630f0ebb5cf8381792887670a6041d.scope: Deactivated successfully.
Feb 28 04:33:11 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:11 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:11 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:11 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:11 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:11 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:12 np0005634017 systemd[1]: Starting Ceph mgr.compute-0.izimmo for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:33:12 np0005634017 podman[76591]: 2026-02-28 09:33:12.39051689 +0000 UTC m=+0.052824687 container create 95ded722f5d0a369910f954d5d68ad4f182d04cd2476a34c4e56233a52719833 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336e9b4dafabee6dc7bd55bbcc36b9289d475fca1b8df108d1ef3da6e1574470/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336e9b4dafabee6dc7bd55bbcc36b9289d475fca1b8df108d1ef3da6e1574470/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336e9b4dafabee6dc7bd55bbcc36b9289d475fca1b8df108d1ef3da6e1574470/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336e9b4dafabee6dc7bd55bbcc36b9289d475fca1b8df108d1ef3da6e1574470/merged/var/lib/ceph/mgr/ceph-compute-0.izimmo supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 podman[76591]: 2026-02-28 09:33:12.370098562 +0000 UTC m=+0.032406339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:12 np0005634017 podman[76591]: 2026-02-28 09:33:12.48165392 +0000 UTC m=+0.143961717 container init 95ded722f5d0a369910f954d5d68ad4f182d04cd2476a34c4e56233a52719833 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:33:12 np0005634017 podman[76591]: 2026-02-28 09:33:12.486155397 +0000 UTC m=+0.148463194 container start 95ded722f5d0a369910f954d5d68ad4f182d04cd2476a34c4e56233a52719833 (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 04:33:12 np0005634017 bash[76591]: 95ded722f5d0a369910f954d5d68ad4f182d04cd2476a34c4e56233a52719833
Feb 28 04:33:12 np0005634017 systemd[1]: Started Ceph mgr.compute-0.izimmo for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:12 np0005634017 ceph-mgr[76610]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:33:12 np0005634017 ceph-mgr[76610]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 28 04:33:12 np0005634017 ceph-mgr[76610]: pidfile_write: ignore empty --pid-file
Feb 28 04:33:12 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'alerts'
Feb 28 04:33:12 np0005634017 podman[76611]: 2026-02-28 09:33:12.614902552 +0000 UTC m=+0.059439514 container create b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:12 np0005634017 podman[76611]: 2026-02-28 09:33:12.584791339 +0000 UTC m=+0.029328321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:12 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'balancer'
Feb 28 04:33:12 np0005634017 systemd[1]: Started libpod-conmon-b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa.scope.
Feb 28 04:33:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9658d281280c76273ad5856b3cb01165333c8d33f5d9d7216ccb42d88832f834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9658d281280c76273ad5856b3cb01165333c8d33f5d9d7216ccb42d88832f834/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9658d281280c76273ad5856b3cb01165333c8d33f5d9d7216ccb42d88832f834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:12 np0005634017 podman[76611]: 2026-02-28 09:33:12.751621031 +0000 UTC m=+0.196158063 container init b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:12 np0005634017 podman[76611]: 2026-02-28 09:33:12.7639393 +0000 UTC m=+0.208476242 container start b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:33:12 np0005634017 podman[76611]: 2026-02-28 09:33:12.76782716 +0000 UTC m=+0.212364202 container attach b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:12 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'cephadm'
Feb 28 04:33:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 28 04:33:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718131149' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]: 
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]: {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "health": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "status": "HEALTH_OK",
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "checks": {},
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "mutes": []
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "election_epoch": 5,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "quorum": [
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        0
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    ],
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "quorum_names": [
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "compute-0"
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    ],
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "quorum_age": 2,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "monmap": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "epoch": 1,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "min_mon_release_name": "tentacle",
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_mons": 1
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "osdmap": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "epoch": 1,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_osds": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_up_osds": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "osd_up_since": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_in_osds": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "osd_in_since": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_remapped_pgs": 0
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "pgmap": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "pgs_by_state": [],
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_pgs": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_pools": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_objects": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "data_bytes": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "bytes_used": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "bytes_avail": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "bytes_total": 0
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "fsmap": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "epoch": 1,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "btime": "2026-02-28T09:33:08:670336+0000",
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "by_rank": [],
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "up:standby": 0
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "mgrmap": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "available": false,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "num_standbys": 0,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "modules": [
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:            "iostat",
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:            "nfs"
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        ],
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "services": {}
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "servicemap": {
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "epoch": 1,
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "modified": "2026-02-28T09:33:08.673339+0000",
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:        "services": {}
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    },
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]:    "progress_events": {}
Feb 28 04:33:12 np0005634017 stupefied_payne[76647]: }
Feb 28 04:33:12 np0005634017 systemd[1]: libpod-b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa.scope: Deactivated successfully.
Feb 28 04:33:12 np0005634017 podman[76611]: 2026-02-28 09:33:12.992354676 +0000 UTC m=+0.436891628 container died b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9658d281280c76273ad5856b3cb01165333c8d33f5d9d7216ccb42d88832f834-merged.mount: Deactivated successfully.
Feb 28 04:33:13 np0005634017 podman[76611]: 2026-02-28 09:33:13.032537283 +0000 UTC m=+0.477074215 container remove b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:13 np0005634017 systemd[1]: libpod-conmon-b6051dcd2d8ef112730be021b513267873bd6875c3aeb0a051babc5352de82aa.scope: Deactivated successfully.
Feb 28 04:33:13 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'crash'
Feb 28 04:33:13 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'dashboard'
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'devicehealth'
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'diskprediction_local'
Feb 28 04:33:14 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 28 04:33:14 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 28 04:33:14 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]:  from numpy import show_config as show_numpy_config
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'influx'
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'insights'
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'iostat'
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'k8sevents'
Feb 28 04:33:14 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'localpool'
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'mds_autoscaler'
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.096674325 +0000 UTC m=+0.039651533 container create b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da (image=quay.io/ceph/ceph:v20, name=focused_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:15 np0005634017 systemd[1]: Started libpod-conmon-b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da.scope.
Feb 28 04:33:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680ae398e2788eaed5f1d1d298fc9e933a0d12a4fd25b31f5cdf27308b14f2de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680ae398e2788eaed5f1d1d298fc9e933a0d12a4fd25b31f5cdf27308b14f2de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680ae398e2788eaed5f1d1d298fc9e933a0d12a4fd25b31f5cdf27308b14f2de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.080508848 +0000 UTC m=+0.023486076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.194478694 +0000 UTC m=+0.137455912 container init b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da (image=quay.io/ceph/ceph:v20, name=focused_mcclintock, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.202204443 +0000 UTC m=+0.145181671 container start b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da (image=quay.io/ceph/ceph:v20, name=focused_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.216167868 +0000 UTC m=+0.159145076 container attach b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da (image=quay.io/ceph/ceph:v20, name=focused_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'mirroring'
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'nfs'
Feb 28 04:33:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 28 04:33:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073354644' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]: 
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]: {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "health": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "status": "HEALTH_OK",
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "checks": {},
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "mutes": []
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "election_epoch": 5,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "quorum": [
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        0
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    ],
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "quorum_names": [
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "compute-0"
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    ],
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "quorum_age": 4,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "monmap": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "epoch": 1,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "min_mon_release_name": "tentacle",
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_mons": 1
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "osdmap": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "epoch": 1,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_osds": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_up_osds": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "osd_up_since": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_in_osds": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "osd_in_since": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_remapped_pgs": 0
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "pgmap": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "pgs_by_state": [],
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_pgs": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_pools": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_objects": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "data_bytes": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "bytes_used": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "bytes_avail": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "bytes_total": 0
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "fsmap": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "epoch": 1,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "btime": "2026-02-28T09:33:08:670336+0000",
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "by_rank": [],
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "up:standby": 0
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "mgrmap": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "available": false,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "num_standbys": 0,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "modules": [
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:            "iostat",
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:            "nfs"
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        ],
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "services": {}
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "servicemap": {
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "epoch": 1,
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "modified": "2026-02-28T09:33:08.673339+0000",
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:        "services": {}
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    },
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]:    "progress_events": {}
Feb 28 04:33:15 np0005634017 focused_mcclintock[76714]: }
Feb 28 04:33:15 np0005634017 systemd[1]: libpod-b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da.scope: Deactivated successfully.
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.426475151 +0000 UTC m=+0.369452359 container died b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da (image=quay.io/ceph/ceph:v20, name=focused_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-680ae398e2788eaed5f1d1d298fc9e933a0d12a4fd25b31f5cdf27308b14f2de-merged.mount: Deactivated successfully.
Feb 28 04:33:15 np0005634017 podman[76697]: 2026-02-28 09:33:15.47024322 +0000 UTC m=+0.413220448 container remove b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da (image=quay.io/ceph/ceph:v20, name=focused_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 04:33:15 np0005634017 systemd[1]: libpod-conmon-b850122a85fb6b09a0c19e24fb438cd45f99ff01649252ffe9d91a97845138da.scope: Deactivated successfully.
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'orchestrator'
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'osd_perf_query'
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'osd_support'
Feb 28 04:33:15 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'pg_autoscaler'
Feb 28 04:33:16 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'progress'
Feb 28 04:33:16 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'prometheus'
Feb 28 04:33:16 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'rbd_support'
Feb 28 04:33:16 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'rgw'
Feb 28 04:33:16 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'rook'
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'selftest'
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'smb'
Feb 28 04:33:17 np0005634017 podman[76753]: 2026-02-28 09:33:17.528895926 +0000 UTC m=+0.032029818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'snap_schedule'
Feb 28 04:33:17 np0005634017 podman[76753]: 2026-02-28 09:33:17.670607108 +0000 UTC m=+0.173740990 container create 708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8 (image=quay.io/ceph/ceph:v20, name=exciting_engelbart, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:17 np0005634017 systemd[1]: Started libpod-conmon-708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8.scope.
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'stats'
Feb 28 04:33:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a158ced97a6cb91f7e9bff0efe69b99a9711f16a3d983c5de0c1fe5f4ed274dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a158ced97a6cb91f7e9bff0efe69b99a9711f16a3d983c5de0c1fe5f4ed274dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a158ced97a6cb91f7e9bff0efe69b99a9711f16a3d983c5de0c1fe5f4ed274dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:17 np0005634017 podman[76753]: 2026-02-28 09:33:17.760221704 +0000 UTC m=+0.263355566 container init 708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8 (image=quay.io/ceph/ceph:v20, name=exciting_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:33:17 np0005634017 podman[76753]: 2026-02-28 09:33:17.764159256 +0000 UTC m=+0.267293138 container start 708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8 (image=quay.io/ceph/ceph:v20, name=exciting_engelbart, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:17 np0005634017 podman[76753]: 2026-02-28 09:33:17.774771336 +0000 UTC m=+0.277905198 container attach 708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8 (image=quay.io/ceph/ceph:v20, name=exciting_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'status'
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'telegraf'
Feb 28 04:33:17 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'telemetry'
Feb 28 04:33:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 28 04:33:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935978396' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]: 
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]: {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "health": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "status": "HEALTH_OK",
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "checks": {},
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "mutes": []
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "election_epoch": 5,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "quorum": [
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        0
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    ],
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "quorum_names": [
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "compute-0"
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    ],
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "quorum_age": 7,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "monmap": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "epoch": 1,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "min_mon_release_name": "tentacle",
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_mons": 1
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "osdmap": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "epoch": 1,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_osds": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_up_osds": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "osd_up_since": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_in_osds": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "osd_in_since": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_remapped_pgs": 0
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "pgmap": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "pgs_by_state": [],
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_pgs": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_pools": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_objects": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "data_bytes": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "bytes_used": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "bytes_avail": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "bytes_total": 0
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "fsmap": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "epoch": 1,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "btime": "2026-02-28T09:33:08.670336+0000",
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "by_rank": [],
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "up:standby": 0
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "mgrmap": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "available": false,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "num_standbys": 0,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "modules": [
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:            "iostat",
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:            "nfs"
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        ],
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "services": {}
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "servicemap": {
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "epoch": 1,
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "modified": "2026-02-28T09:33:08.673339+0000",
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:        "services": {}
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    },
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]:    "progress_events": {}
Feb 28 04:33:17 np0005634017 exciting_engelbart[76770]: }
Feb 28 04:33:17 np0005634017 systemd[1]: libpod-708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8.scope: Deactivated successfully.
Feb 28 04:33:18 np0005634017 podman[76796]: 2026-02-28 09:33:18.022810268 +0000 UTC m=+0.048774482 container died 708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8 (image=quay.io/ceph/ceph:v20, name=exciting_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:33:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a158ced97a6cb91f7e9bff0efe69b99a9711f16a3d983c5de0c1fe5f4ed274dc-merged.mount: Deactivated successfully.
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'test_orchestrator'
Feb 28 04:33:18 np0005634017 podman[76796]: 2026-02-28 09:33:18.06492752 +0000 UTC m=+0.090891714 container remove 708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8 (image=quay.io/ceph/ceph:v20, name=exciting_engelbart, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 04:33:18 np0005634017 systemd[1]: libpod-conmon-708d551332b9703e37fac94fe56274782ebcf1c0d93c3446e14f03640cd2bcf8.scope: Deactivated successfully.
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'volumes'
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: ms_deliver_dispatch: unhandled message 0x5591c7a89860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izimmo
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr handle_mgr_map Activating!
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr handle_mgr_map I am now activating
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.izimmo(active, starting, since 0.0144608s)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix": "mds metadata"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e1 all = 1
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix": "mon metadata"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izimmo", "id": "compute-0.izimmo"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix": "mgr metadata", "who": "compute-0.izimmo", "id": "compute-0.izimmo"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: balancer
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [balancer INFO root] Starting
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Manager daemon compute-0.izimmo is now available
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: crash
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:33:18
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [balancer INFO root] No pools available
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: devicehealth
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: iostat
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Starting
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: nfs
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: orchestrator
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: pg_autoscaler
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: progress
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [progress INFO root] Loading...
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [progress INFO root] No stored events to load
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [progress INFO root] Loaded [] historic events
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [progress INFO root] Loaded OSDMap, ready.
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] recovery thread starting
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] starting setup
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: rbd_support
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: status
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: telemetry
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/mirror_snapshot_schedule"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/mirror_snapshot_schedule"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] PerfHandler: starting
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TaskHandler: starting
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/trash_purge_schedule"} v 0)
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/trash_purge_schedule"} : dispatch
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] setup complete
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb 28 04:33:18 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: volumes
Feb 28 04:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: Activating manager daemon compute-0.izimmo
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: Manager daemon compute-0.izimmo is now available
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/mirror_snapshot_schedule"} : dispatch
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/trash_purge_schedule"} : dispatch
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: from='mgr.14102 192.168.122.100:0/1590936356' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:19 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.izimmo(active, since 1.02528s)
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.149668554 +0000 UTC m=+0.052255290 container create d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f (image=quay.io/ceph/ceph:v20, name=nifty_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 04:33:20 np0005634017 systemd[1]: Started libpod-conmon-d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f.scope.
Feb 28 04:33:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.131794598 +0000 UTC m=+0.034381314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5401cf9e651faca727e3a2bf3c86dd7b2a3cfe6f6d876edc0c1fff8ed79df767/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5401cf9e651faca727e3a2bf3c86dd7b2a3cfe6f6d876edc0c1fff8ed79df767/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5401cf9e651faca727e3a2bf3c86dd7b2a3cfe6f6d876edc0c1fff8ed79df767/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.249244063 +0000 UTC m=+0.151830779 container init d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f (image=quay.io/ceph/ceph:v20, name=nifty_borg, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.256056395 +0000 UTC m=+0.158643091 container start d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f (image=quay.io/ceph/ceph:v20, name=nifty_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.25904638 +0000 UTC m=+0.161633076 container attach d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f (image=quay.io/ceph/ceph:v20, name=nifty_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:20 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:20 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.izimmo(active, since 2s)
Feb 28 04:33:20 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 28 04:33:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416778737' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 28 04:33:20 np0005634017 nifty_borg[76905]: 
Feb 28 04:33:20 np0005634017 nifty_borg[76905]: {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "health": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "status": "HEALTH_OK",
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "checks": {},
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "mutes": []
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "election_epoch": 5,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "quorum": [
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        0
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    ],
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "quorum_names": [
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "compute-0"
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    ],
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "quorum_age": 10,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "monmap": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "epoch": 1,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "min_mon_release_name": "tentacle",
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_mons": 1
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "osdmap": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "epoch": 1,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_osds": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_up_osds": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "osd_up_since": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_in_osds": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "osd_in_since": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_remapped_pgs": 0
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "pgmap": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "pgs_by_state": [],
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_pgs": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_pools": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_objects": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "data_bytes": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "bytes_used": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "bytes_avail": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "bytes_total": 0
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "fsmap": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "epoch": 1,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "btime": "2026-02-28T09:33:08.670336+0000",
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "by_rank": [],
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "up:standby": 0
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "mgrmap": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "available": true,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "num_standbys": 0,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "modules": [
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:            "iostat",
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:            "nfs"
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        ],
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "services": {}
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "servicemap": {
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "epoch": 1,
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "modified": "2026-02-28T09:33:08.673339+0000",
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:        "services": {}
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    },
Feb 28 04:33:20 np0005634017 nifty_borg[76905]:    "progress_events": {}
Feb 28 04:33:20 np0005634017 nifty_borg[76905]: }
Feb 28 04:33:20 np0005634017 systemd[1]: libpod-d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f.scope: Deactivated successfully.
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.80164267 +0000 UTC m=+0.704229366 container died d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f (image=quay.io/ceph/ceph:v20, name=nifty_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:33:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5401cf9e651faca727e3a2bf3c86dd7b2a3cfe6f6d876edc0c1fff8ed79df767-merged.mount: Deactivated successfully.
Feb 28 04:33:20 np0005634017 podman[76889]: 2026-02-28 09:33:20.837247938 +0000 UTC m=+0.739834644 container remove d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f (image=quay.io/ceph/ceph:v20, name=nifty_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:20 np0005634017 systemd[1]: libpod-conmon-d67b4a37c38fadf61280aae7e02104141e95e488f05bd84c1ef9a5c1cb89933f.scope: Deactivated successfully.
Feb 28 04:33:20 np0005634017 podman[76943]: 2026-02-28 09:33:20.912108867 +0000 UTC m=+0.052407574 container create 01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508 (image=quay.io/ceph/ceph:v20, name=unruffled_bhabha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:20 np0005634017 systemd[1]: Started libpod-conmon-01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508.scope.
Feb 28 04:33:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469e2fd02d7e11fdfcff50fa124a581cef57f6ec17b5b1683b528cd550e21587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469e2fd02d7e11fdfcff50fa124a581cef57f6ec17b5b1683b528cd550e21587/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469e2fd02d7e11fdfcff50fa124a581cef57f6ec17b5b1683b528cd550e21587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469e2fd02d7e11fdfcff50fa124a581cef57f6ec17b5b1683b528cd550e21587/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:20 np0005634017 podman[76943]: 2026-02-28 09:33:20.893201412 +0000 UTC m=+0.033500109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:20 np0005634017 podman[76943]: 2026-02-28 09:33:20.994701895 +0000 UTC m=+0.135000602 container init 01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508 (image=quay.io/ceph/ceph:v20, name=unruffled_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:21 np0005634017 podman[76943]: 2026-02-28 09:33:21.007746065 +0000 UTC m=+0.148044742 container start 01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508 (image=quay.io/ceph/ceph:v20, name=unruffled_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 04:33:21 np0005634017 podman[76943]: 2026-02-28 09:33:21.011572873 +0000 UTC m=+0.151871600 container attach 01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508 (image=quay.io/ceph/ceph:v20, name=unruffled_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:33:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 28 04:33:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/389962562' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 28 04:33:21 np0005634017 unruffled_bhabha[76960]: 
Feb 28 04:33:21 np0005634017 unruffled_bhabha[76960]: [global]
Feb 28 04:33:21 np0005634017 unruffled_bhabha[76960]: #011fsid = 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:33:21 np0005634017 unruffled_bhabha[76960]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 28 04:33:21 np0005634017 unruffled_bhabha[76960]: #011osd_crush_chooseleaf_type = 0
Feb 28 04:33:21 np0005634017 systemd[1]: libpod-01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508.scope: Deactivated successfully.
Feb 28 04:33:21 np0005634017 podman[76986]: 2026-02-28 09:33:21.496397227 +0000 UTC m=+0.023217098 container died 01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508 (image=quay.io/ceph/ceph:v20, name=unruffled_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:33:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-469e2fd02d7e11fdfcff50fa124a581cef57f6ec17b5b1683b528cd550e21587-merged.mount: Deactivated successfully.
Feb 28 04:33:21 np0005634017 podman[76986]: 2026-02-28 09:33:21.525923623 +0000 UTC m=+0.052743504 container remove 01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508 (image=quay.io/ceph/ceph:v20, name=unruffled_bhabha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:33:21 np0005634017 systemd[1]: libpod-conmon-01314dc20b7d4dc7613b5a8425a3c8337c9cad1b034f4734ed3bba6e94774508.scope: Deactivated successfully.
Feb 28 04:33:21 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/389962562' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 28 04:33:21 np0005634017 podman[77001]: 2026-02-28 09:33:21.584641976 +0000 UTC m=+0.040562250 container create 3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594 (image=quay.io/ceph/ceph:v20, name=busy_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:33:21 np0005634017 systemd[1]: Started libpod-conmon-3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594.scope.
Feb 28 04:33:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3743b863006c2bf97556da6dee7f8a66ee19e203ac51c18d943d3f2d485fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3743b863006c2bf97556da6dee7f8a66ee19e203ac51c18d943d3f2d485fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d3743b863006c2bf97556da6dee7f8a66ee19e203ac51c18d943d3f2d485fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:21 np0005634017 podman[77001]: 2026-02-28 09:33:21.563865437 +0000 UTC m=+0.019785751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:21 np0005634017 podman[77001]: 2026-02-28 09:33:21.669247311 +0000 UTC m=+0.125167605 container init 3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594 (image=quay.io/ceph/ceph:v20, name=busy_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb 28 04:33:21 np0005634017 podman[77001]: 2026-02-28 09:33:21.673166521 +0000 UTC m=+0.129086785 container start 3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594 (image=quay.io/ceph/ceph:v20, name=busy_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:21 np0005634017 podman[77001]: 2026-02-28 09:33:21.676438334 +0000 UTC m=+0.132358598 container attach 3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594 (image=quay.io/ceph/ceph:v20, name=busy_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:33:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb 28 04:33:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3482776601' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:22 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3482776601' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 28 04:33:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3482776601' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 28 04:33:22 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.izimmo(active, since 4s)
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  1: '-n'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  2: 'mgr.compute-0.izimmo'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  3: '-f'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  4: '--setuser'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  5: 'ceph'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  6: '--setgroup'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  7: 'ceph'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  8: '--default-log-to-file=false'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  9: '--default-log-to-journald=true'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr respawn  exe_path /proc/self/exe
Feb 28 04:33:22 np0005634017 systemd[1]: libpod-3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594.scope: Deactivated successfully.
Feb 28 04:33:22 np0005634017 podman[77001]: 2026-02-28 09:33:22.623276537 +0000 UTC m=+1.079196831 container died 3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594 (image=quay.io/ceph/ceph:v20, name=busy_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:33:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-81d3743b863006c2bf97556da6dee7f8a66ee19e203ac51c18d943d3f2d485fc-merged.mount: Deactivated successfully.
Feb 28 04:33:22 np0005634017 podman[77001]: 2026-02-28 09:33:22.66047528 +0000 UTC m=+1.116395574 container remove 3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594 (image=quay.io/ceph/ceph:v20, name=busy_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb 28 04:33:22 np0005634017 systemd[1]: libpod-conmon-3188bea81f91ba5f0af28d126fccc696b1dcc4fb73079ea83b08339233f13594.scope: Deactivated successfully.
Feb 28 04:33:22 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: ignoring --setuser ceph since I am not root
Feb 28 04:33:22 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: ignoring --setgroup ceph since I am not root
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: pidfile_write: ignore empty --pid-file
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'alerts'
Feb 28 04:33:22 np0005634017 podman[77054]: 2026-02-28 09:33:22.794371491 +0000 UTC m=+0.115796459 container create 4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6 (image=quay.io/ceph/ceph:v20, name=admiring_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:22 np0005634017 podman[77054]: 2026-02-28 09:33:22.699235558 +0000 UTC m=+0.020660556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'balancer'
Feb 28 04:33:22 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'cephadm'
Feb 28 04:33:23 np0005634017 systemd[1]: Started libpod-conmon-4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6.scope.
Feb 28 04:33:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7b076038dc1602d3e95357d8d8db93e0c3ec9428c347f1e00be9872f24261d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7b076038dc1602d3e95357d8d8db93e0c3ec9428c347f1e00be9872f24261d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7b076038dc1602d3e95357d8d8db93e0c3ec9428c347f1e00be9872f24261d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:23 np0005634017 podman[77054]: 2026-02-28 09:33:23.343817645 +0000 UTC m=+0.665242643 container init 4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6 (image=quay.io/ceph/ceph:v20, name=admiring_perlman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:23 np0005634017 podman[77054]: 2026-02-28 09:33:23.347926741 +0000 UTC m=+0.669351709 container start 4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6 (image=quay.io/ceph/ceph:v20, name=admiring_perlman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:23 np0005634017 podman[77054]: 2026-02-28 09:33:23.568741701 +0000 UTC m=+0.890166769 container attach 4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6 (image=quay.io/ceph/ceph:v20, name=admiring_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:33:23 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'crash'
Feb 28 04:33:23 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3482776601' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 28 04:33:23 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'dashboard'
Feb 28 04:33:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 28 04:33:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845577075' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 04:33:23 np0005634017 admiring_perlman[77090]: {
Feb 28 04:33:23 np0005634017 admiring_perlman[77090]:    "epoch": 5,
Feb 28 04:33:23 np0005634017 admiring_perlman[77090]:    "available": true,
Feb 28 04:33:23 np0005634017 admiring_perlman[77090]:    "active_name": "compute-0.izimmo",
Feb 28 04:33:23 np0005634017 admiring_perlman[77090]:    "num_standby": 0
Feb 28 04:33:23 np0005634017 admiring_perlman[77090]: }
Feb 28 04:33:23 np0005634017 systemd[1]: libpod-4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6.scope: Deactivated successfully.
Feb 28 04:33:23 np0005634017 podman[77054]: 2026-02-28 09:33:23.840542815 +0000 UTC m=+1.161967783 container died 4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6 (image=quay.io/ceph/ceph:v20, name=admiring_perlman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:24 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'devicehealth'
Feb 28 04:33:24 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'diskprediction_local'
Feb 28 04:33:24 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 28 04:33:24 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 28 04:33:24 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]:  from numpy import show_config as show_numpy_config
Feb 28 04:33:24 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'influx'
Feb 28 04:33:24 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'insights'
Feb 28 04:33:24 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'iostat'
Feb 28 04:33:25 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'k8sevents'
Feb 28 04:33:25 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'localpool'
Feb 28 04:33:25 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'mds_autoscaler'
Feb 28 04:33:25 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'mirroring'
Feb 28 04:33:25 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'nfs'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'orchestrator'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'osd_perf_query'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'osd_support'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'pg_autoscaler'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'progress'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'prometheus'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'rbd_support'
Feb 28 04:33:26 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'rgw'
Feb 28 04:33:27 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'rook'
Feb 28 04:33:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2d7b076038dc1602d3e95357d8d8db93e0c3ec9428c347f1e00be9872f24261d-merged.mount: Deactivated successfully.
Feb 28 04:33:27 np0005634017 podman[77054]: 2026-02-28 09:33:27.165356724 +0000 UTC m=+4.486781692 container remove 4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6 (image=quay.io/ceph/ceph:v20, name=admiring_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:33:27 np0005634017 systemd[1]: libpod-conmon-4bbd03c45a621dca53d905f64820e9ee19f06780d07169de8a9d270a3fb0fbe6.scope: Deactivated successfully.
Feb 28 04:33:27 np0005634017 podman[77140]: 2026-02-28 09:33:27.222285935 +0000 UTC m=+0.038629864 container create d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216 (image=quay.io/ceph/ceph:v20, name=heuristic_almeida, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:27 np0005634017 systemd[1]: Started libpod-conmon-d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216.scope.
Feb 28 04:33:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ba04065927984cbba1a665d8b4da3878a6e62a99e3098f2eb303e3873e7876/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ba04065927984cbba1a665d8b4da3878a6e62a99e3098f2eb303e3873e7876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ba04065927984cbba1a665d8b4da3878a6e62a99e3098f2eb303e3873e7876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:27 np0005634017 podman[77140]: 2026-02-28 09:33:27.299206253 +0000 UTC m=+0.115550202 container init d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216 (image=quay.io/ceph/ceph:v20, name=heuristic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 04:33:27 np0005634017 podman[77140]: 2026-02-28 09:33:27.203300148 +0000 UTC m=+0.019644107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:27 np0005634017 podman[77140]: 2026-02-28 09:33:27.303847884 +0000 UTC m=+0.120191823 container start d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216 (image=quay.io/ceph/ceph:v20, name=heuristic_almeida, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:27 np0005634017 podman[77140]: 2026-02-28 09:33:27.306752426 +0000 UTC m=+0.123096385 container attach d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216 (image=quay.io/ceph/ceph:v20, name=heuristic_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 04:33:27 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'selftest'
Feb 28 04:33:27 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'smb'
Feb 28 04:33:27 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'snap_schedule'
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'stats'
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'status'
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'telegraf'
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'telemetry'
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'test_orchestrator'
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr[py] Loading python module 'volumes'
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izimmo restarted
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izimmo
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: ms_deliver_dispatch: unhandled message 0x560528148000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr handle_mgr_map Activating!
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr handle_mgr_map I am now activating
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.izimmo(active, starting, since 0.0175284s)
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izimmo", "id": "compute-0.izimmo"} v 0)
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mgr metadata", "who": "compute-0.izimmo", "id": "compute-0.izimmo"} : dispatch
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mds metadata"} : dispatch
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e1 all = 1
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata"} : dispatch
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mon metadata"} : dispatch
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Manager daemon compute-0.izimmo is now available
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: balancer
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Starting
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:33:28
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:33:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] No pools available
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: Active manager daemon compute-0.izimmo restarted
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: Activating manager daemon compute-0.izimmo
Feb 28 04:33:28 np0005634017 ceph-mon[76304]: Manager daemon compute-0.izimmo is now available
Feb 28 04:33:29 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 28 04:33:29 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.izimmo(active, since 1.02977s)
Feb 28 04:33:29 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 28 04:33:29 np0005634017 heuristic_almeida[77156]: {
Feb 28 04:33:29 np0005634017 heuristic_almeida[77156]:    "mgrmap_epoch": 7,
Feb 28 04:33:29 np0005634017 heuristic_almeida[77156]:    "initialized": true
Feb 28 04:33:29 np0005634017 heuristic_almeida[77156]: }
Feb 28 04:33:29 np0005634017 systemd[1]: libpod-d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216.scope: Deactivated successfully.
Feb 28 04:33:29 np0005634017 podman[77140]: 2026-02-28 09:33:29.933005931 +0000 UTC m=+2.749349860 container died d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216 (image=quay.io/ceph/ceph:v20, name=heuristic_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a4ba04065927984cbba1a665d8b4da3878a6e62a99e3098f2eb303e3873e7876-merged.mount: Deactivated successfully.
Feb 28 04:33:29 np0005634017 podman[77140]: 2026-02-28 09:33:29.966232362 +0000 UTC m=+2.782576301 container remove d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216 (image=quay.io/ceph/ceph:v20, name=heuristic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:29 np0005634017 systemd[1]: libpod-conmon-d93a82231069b3ced9ae68e4827691fc66476e9875f19968c3d330540e4dc216.scope: Deactivated successfully.
Feb 28 04:33:30 np0005634017 podman[77222]: 2026-02-28 09:33:30.021117235 +0000 UTC m=+0.036180815 container create 9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d (image=quay.io/ceph/ceph:v20, name=angry_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 04:33:30 np0005634017 systemd[1]: Started libpod-conmon-9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d.scope.
Feb 28 04:33:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4594574917d0f7f3b0522c18230152519ad6eb1d160e678e28d3c398889ae0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4594574917d0f7f3b0522c18230152519ad6eb1d160e678e28d3c398889ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4594574917d0f7f3b0522c18230152519ad6eb1d160e678e28d3c398889ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:30 np0005634017 podman[77222]: 2026-02-28 09:33:30.096030956 +0000 UTC m=+0.111094586 container init 9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d (image=quay.io/ceph/ceph:v20, name=angry_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:33:30 np0005634017 podman[77222]: 2026-02-28 09:33:30.101156271 +0000 UTC m=+0.116219851 container start 9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d (image=quay.io/ceph/ceph:v20, name=angry_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 04:33:30 np0005634017 podman[77222]: 2026-02-28 09:33:30.004366171 +0000 UTC m=+0.019429771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:30 np0005634017 podman[77222]: 2026-02-28 09:33:30.117406351 +0000 UTC m=+0.132470061 container attach 9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d (image=quay.io/ceph/ceph:v20, name=angry_spence, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: cephadm
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: crash
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: devicehealth
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: iostat
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Starting
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: nfs
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: orchestrator
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: pg_autoscaler
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: progress
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [progress INFO root] Loading...
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [progress INFO root] No stored events to load
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [progress INFO root] Loaded [] historic events
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [progress INFO root] Loaded OSDMap, ready.
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] recovery thread starting
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] starting setup
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: rbd_support
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: status
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/mirror_snapshot_schedule"} v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/mirror_snapshot_schedule"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: telemetry
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] PerfHandler: starting
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TaskHandler: starting
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/trash_purge_schedule"} v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/trash_purge_schedule"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] setup complete
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr load Constructed class from module: volumes
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/242323283' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019903415 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:33:30 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: Found migration_current of "None". Setting to last migration.
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/mirror_snapshot_schedule"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izimmo/trash_purge_schedule"} : dispatch
Feb 28 04:33:30 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/242323283' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/242323283' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 28 04:33:31 np0005634017 angry_spence[77240]: module 'orchestrator' is already enabled (always-on)
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.izimmo(active, since 2s)
Feb 28 04:33:31 np0005634017 systemd[1]: libpod-9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d.scope: Deactivated successfully.
Feb 28 04:33:31 np0005634017 podman[77222]: 2026-02-28 09:33:31.262516776 +0000 UTC m=+1.277580366 container died 9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d (image=quay.io/ceph/ceph:v20, name=angry_spence, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: [cephadm INFO cherrypy.error] [28/Feb/2026:09:33:31] ENGINE Bus STARTING
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : [28/Feb/2026:09:33:31] ENGINE Bus STARTING
Feb 28 04:33:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-de4594574917d0f7f3b0522c18230152519ad6eb1d160e678e28d3c398889ae0-merged.mount: Deactivated successfully.
Feb 28 04:33:31 np0005634017 podman[77222]: 2026-02-28 09:33:31.300773959 +0000 UTC m=+1.315837539 container remove 9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d (image=quay.io/ceph/ceph:v20, name=angry_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:31 np0005634017 systemd[1]: libpod-conmon-9f9a60617e721f5e89cc95af7734100017b03a854c523d2bbb218750dceeb84d.scope: Deactivated successfully.
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.35132229 +0000 UTC m=+0.034407595 container create 861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89 (image=quay.io/ceph/ceph:v20, name=serene_austin, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:31 np0005634017 systemd[1]: Started libpod-conmon-861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89.scope.
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: [cephadm INFO cherrypy.error] [28/Feb/2026:09:33:31] ENGINE Serving on http://192.168.122.100:8765
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : [28/Feb/2026:09:33:31] ENGINE Serving on http://192.168.122.100:8765
Feb 28 04:33:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b25ec83ede07427732e6d6fec48ebd10a957e0965c4f2b487429885b425316/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b25ec83ede07427732e6d6fec48ebd10a957e0965c4f2b487429885b425316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b25ec83ede07427732e6d6fec48ebd10a957e0965c4f2b487429885b425316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.411799562 +0000 UTC m=+0.094884867 container init 861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89 (image=quay.io/ceph/ceph:v20, name=serene_austin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.415483776 +0000 UTC m=+0.098569061 container start 861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89 (image=quay.io/ceph/ceph:v20, name=serene_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.420057446 +0000 UTC m=+0.103142771 container attach 861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89 (image=quay.io/ceph/ceph:v20, name=serene_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.33543421 +0000 UTC m=+0.018519515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: [cephadm INFO cherrypy.error] [28/Feb/2026:09:33:31] ENGINE Serving on https://192.168.122.100:7150
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : [28/Feb/2026:09:33:31] ENGINE Serving on https://192.168.122.100:7150
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: [cephadm INFO cherrypy.error] [28/Feb/2026:09:33:31] ENGINE Bus STARTED
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : [28/Feb/2026:09:33:31] ENGINE Bus STARTED
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: [cephadm INFO cherrypy.error] [28/Feb/2026:09:33:31] ENGINE Client ('192.168.122.100', 38556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : [28/Feb/2026:09:33:31] ENGINE Client ('192.168.122.100', 38556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 28 04:33:31 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 28 04:33:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 28 04:33:31 np0005634017 systemd[1]: libpod-861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89.scope: Deactivated successfully.
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.847228158 +0000 UTC m=+0.530313473 container died 861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89 (image=quay.io/ceph/ceph:v20, name=serene_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-97b25ec83ede07427732e6d6fec48ebd10a957e0965c4f2b487429885b425316-merged.mount: Deactivated successfully.
Feb 28 04:33:31 np0005634017 podman[77368]: 2026-02-28 09:33:31.886418708 +0000 UTC m=+0.569503993 container remove 861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89 (image=quay.io/ceph/ceph:v20, name=serene_austin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:33:31 np0005634017 systemd[1]: libpod-conmon-861517c33e75941d34d9df830215385f081811be4da130053db37569f855af89.scope: Deactivated successfully.
Feb 28 04:33:31 np0005634017 podman[77434]: 2026-02-28 09:33:31.959735603 +0000 UTC m=+0.049867223 container create 4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1 (image=quay.io/ceph/ceph:v20, name=romantic_blackburn, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:31 np0005634017 systemd[1]: Started libpod-conmon-4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1.scope.
Feb 28 04:33:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a754d6c65a7502ad7fa670c91b9ced56cc85b72751421b30a0a61ea7f095a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a754d6c65a7502ad7fa670c91b9ced56cc85b72751421b30a0a61ea7f095a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a754d6c65a7502ad7fa670c91b9ced56cc85b72751421b30a0a61ea7f095a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 podman[77434]: 2026-02-28 09:33:32.022114119 +0000 UTC m=+0.112245549 container init 4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1 (image=quay.io/ceph/ceph:v20, name=romantic_blackburn, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:32 np0005634017 podman[77434]: 2026-02-28 09:33:32.02957347 +0000 UTC m=+0.119704860 container start 4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1 (image=quay.io/ceph/ceph:v20, name=romantic_blackburn, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:32 np0005634017 podman[77434]: 2026-02-28 09:33:32.033038978 +0000 UTC m=+0.123170418 container attach 4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1 (image=quay.io/ceph/ceph:v20, name=romantic_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 04:33:32 np0005634017 podman[77434]: 2026-02-28 09:33:31.941904308 +0000 UTC m=+0.032035718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/242323283' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: [28/Feb/2026:09:33:31] ENGINE Bus STARTING
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: [28/Feb/2026:09:33:31] ENGINE Serving on http://192.168.122.100:8765
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: [28/Feb/2026:09:33:31] ENGINE Serving on https://192.168.122.100:7150
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: [28/Feb/2026:09:33:31] ENGINE Bus STARTED
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: [28/Feb/2026:09:33:31] ENGINE Client ('192.168.122.100', 38556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Set ssh ssh_user
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb 28 04:33:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Set ssh ssh_config
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb 28 04:33:32 np0005634017 romantic_blackburn[77450]: ssh user set to ceph-admin. sudo will be used
Feb 28 04:33:32 np0005634017 systemd[1]: libpod-4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1.scope: Deactivated successfully.
Feb 28 04:33:32 np0005634017 podman[77434]: 2026-02-28 09:33:32.438897097 +0000 UTC m=+0.529028477 container died 4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1 (image=quay.io/ceph/ceph:v20, name=romantic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-30a754d6c65a7502ad7fa670c91b9ced56cc85b72751421b30a0a61ea7f095a0-merged.mount: Deactivated successfully.
Feb 28 04:33:32 np0005634017 podman[77434]: 2026-02-28 09:33:32.47115247 +0000 UTC m=+0.561283850 container remove 4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1 (image=quay.io/ceph/ceph:v20, name=romantic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:32 np0005634017 systemd[1]: libpod-conmon-4b8589f89cff2a9126d67e54596c1dc658d0781008e6c2c00c38baf5a344a8a1.scope: Deactivated successfully.
Feb 28 04:33:32 np0005634017 podman[77487]: 2026-02-28 09:33:32.538327612 +0000 UTC m=+0.046457536 container create ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac (image=quay.io/ceph/ceph:v20, name=relaxed_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:32 np0005634017 systemd[1]: Started libpod-conmon-ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac.scope.
Feb 28 04:33:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b99573370cc8583d8f513b822bb358bd6709f6783427a0ec585b50a31a5ce92/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b99573370cc8583d8f513b822bb358bd6709f6783427a0ec585b50a31a5ce92/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b99573370cc8583d8f513b822bb358bd6709f6783427a0ec585b50a31a5ce92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b99573370cc8583d8f513b822bb358bd6709f6783427a0ec585b50a31a5ce92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b99573370cc8583d8f513b822bb358bd6709f6783427a0ec585b50a31a5ce92/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:32 np0005634017 podman[77487]: 2026-02-28 09:33:32.515703692 +0000 UTC m=+0.023833606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:32 np0005634017 podman[77487]: 2026-02-28 09:33:32.639429574 +0000 UTC m=+0.147559568 container init ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac (image=quay.io/ceph/ceph:v20, name=relaxed_faraday, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:33:32 np0005634017 podman[77487]: 2026-02-28 09:33:32.644876968 +0000 UTC m=+0.153006892 container start ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac (image=quay.io/ceph/ceph:v20, name=relaxed_faraday, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:33:32 np0005634017 podman[77487]: 2026-02-28 09:33:32.64916291 +0000 UTC m=+0.157292844 container attach ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac (image=quay.io/ceph/ceph:v20, name=relaxed_faraday, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:32 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Set ssh ssh_identity_key
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Set ssh private key
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Set ssh private key
Feb 28 04:33:33 np0005634017 systemd[1]: libpod-ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac.scope: Deactivated successfully.
Feb 28 04:33:33 np0005634017 podman[77487]: 2026-02-28 09:33:33.068284294 +0000 UTC m=+0.576414188 container died ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac (image=quay.io/ceph/ceph:v20, name=relaxed_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1b99573370cc8583d8f513b822bb358bd6709f6783427a0ec585b50a31a5ce92-merged.mount: Deactivated successfully.
Feb 28 04:33:33 np0005634017 podman[77487]: 2026-02-28 09:33:33.123453406 +0000 UTC m=+0.631583300 container remove ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac (image=quay.io/ceph/ceph:v20, name=relaxed_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:33 np0005634017 systemd[1]: libpod-conmon-ad5d35540acf2a7420cc4b3b349ad3e8973bb5b5c3ebc59abc50e459aeb611ac.scope: Deactivated successfully.
Feb 28 04:33:33 np0005634017 podman[77541]: 2026-02-28 09:33:33.178557296 +0000 UTC m=+0.040317383 container create 18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6 (image=quay.io/ceph/ceph:v20, name=epic_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:33 np0005634017 systemd[1]: Started libpod-conmon-18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6.scope.
Feb 28 04:33:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f9c800bbf5ffa352dee845e44837b2f213462aa7420f16c32c5490a8a6586/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f9c800bbf5ffa352dee845e44837b2f213462aa7420f16c32c5490a8a6586/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f9c800bbf5ffa352dee845e44837b2f213462aa7420f16c32c5490a8a6586/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f9c800bbf5ffa352dee845e44837b2f213462aa7420f16c32c5490a8a6586/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f9c800bbf5ffa352dee845e44837b2f213462aa7420f16c32c5490a8a6586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 podman[77541]: 2026-02-28 09:33:33.251769528 +0000 UTC m=+0.113529615 container init 18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6 (image=quay.io/ceph/ceph:v20, name=epic_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:33 np0005634017 podman[77541]: 2026-02-28 09:33:33.160648649 +0000 UTC m=+0.022408756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:33 np0005634017 podman[77541]: 2026-02-28 09:33:33.25749656 +0000 UTC m=+0.119256677 container start 18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6 (image=quay.io/ceph/ceph:v20, name=epic_bardeen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:33:33 np0005634017 podman[77541]: 2026-02-28 09:33:33.279416671 +0000 UTC m=+0.141176788 container attach 18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6 (image=quay.io/ceph/ceph:v20, name=epic_bardeen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: Set ssh ssh_user
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: Set ssh ssh_config
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: ssh user set to ceph-admin. sudo will be used
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb 28 04:33:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb 28 04:33:33 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb 28 04:33:33 np0005634017 systemd[1]: libpod-18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6.scope: Deactivated successfully.
Feb 28 04:33:33 np0005634017 podman[77583]: 2026-02-28 09:33:33.807024567 +0000 UTC m=+0.026223854 container died 18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6 (image=quay.io/ceph/ceph:v20, name=epic_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:33:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bd9f9c800bbf5ffa352dee845e44837b2f213462aa7420f16c32c5490a8a6586-merged.mount: Deactivated successfully.
Feb 28 04:33:33 np0005634017 podman[77583]: 2026-02-28 09:33:33.839125155 +0000 UTC m=+0.058324412 container remove 18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6 (image=quay.io/ceph/ceph:v20, name=epic_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:33 np0005634017 systemd[1]: libpod-conmon-18479873362db5afcaff01b99baa227f17c79a449e97e81749efa9003bc428b6.scope: Deactivated successfully.
Feb 28 04:33:33 np0005634017 podman[77598]: 2026-02-28 09:33:33.895863031 +0000 UTC m=+0.037302097 container create 5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972 (image=quay.io/ceph/ceph:v20, name=bold_easley, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:33:33 np0005634017 systemd[1]: Started libpod-conmon-5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972.scope.
Feb 28 04:33:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8755efa8644814808335d706873374ebcc89fe372a6a448e96b6b6d83f2fcd85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8755efa8644814808335d706873374ebcc89fe372a6a448e96b6b6d83f2fcd85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8755efa8644814808335d706873374ebcc89fe372a6a448e96b6b6d83f2fcd85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:33 np0005634017 podman[77598]: 2026-02-28 09:33:33.965274816 +0000 UTC m=+0.106713852 container init 5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:33:33 np0005634017 podman[77598]: 2026-02-28 09:33:33.9685712 +0000 UTC m=+0.110010226 container start 5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972 (image=quay.io/ceph/ceph:v20, name=bold_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:33:33 np0005634017 podman[77598]: 2026-02-28 09:33:33.971292297 +0000 UTC m=+0.112731323 container attach 5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:33 np0005634017 podman[77598]: 2026-02-28 09:33:33.878213252 +0000 UTC m=+0.019652308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:34 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:34 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:34 np0005634017 bold_easley[77615]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRulN0y5eDJnS+C5d5HGoUtE6ZqoIiGLg+mEFpS8D8S9FDpG7MekjLTR1CdTEgd57yvmimH3Yeb0EFN8W/zWdsGTn0PDUXrRpwV7yVETHUxtMjLsxzpTuz/2txPi4IsafmKN0t6xMliRRqdgs8NCbiWG9x81Fnt/TWjSm56RmaI8J+ZWPSvqwI50UP7xKBY0c+OulS+A/2DtGjPbDyXglN/FUjDbX9mYgOh3dLPIMfti9zvSSC+x1MiLuwpnGgDKznPR+BnT44Tkg2w/XRAEYSDv4S1nHZ8TGfsQ3Q7Bjmy/hFmA6A8zWF4Q2nLcvwbxFRmn7r+AMKyBVTJzI9DB8eXkRIFdc2DTsv8QkMH4gmrykQPHYzYIvyQXozL06f9C+j6cnGsVXRnN4m2iV7RacnkFEv6Z1DY28+TxZfvOZ844Bu2OmEu6DWKC4lqy3rachWAvjxU632xvIbsxNiFVXsMr/dfjKsTzEaYpFdLzMvVuH3B0ElAghbfuCm8V5fP+k= zuul@controller
Feb 28 04:33:34 np0005634017 systemd[1]: libpod-5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972.scope: Deactivated successfully.
Feb 28 04:33:34 np0005634017 podman[77598]: 2026-02-28 09:33:34.364790655 +0000 UTC m=+0.506229721 container died 5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:33:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8755efa8644814808335d706873374ebcc89fe372a6a448e96b6b6d83f2fcd85-merged.mount: Deactivated successfully.
Feb 28 04:33:34 np0005634017 podman[77598]: 2026-02-28 09:33:34.403303625 +0000 UTC m=+0.544742671 container remove 5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972 (image=quay.io/ceph/ceph:v20, name=bold_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:33:34 np0005634017 systemd[1]: libpod-conmon-5c33b698abadfcfab4e8209612b4332f45454fa53898e8a841f9c5862e286972.scope: Deactivated successfully.
Feb 28 04:33:34 np0005634017 ceph-mon[76304]: Set ssh ssh_identity_key
Feb 28 04:33:34 np0005634017 ceph-mon[76304]: Set ssh private key
Feb 28 04:33:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:34 np0005634017 podman[77655]: 2026-02-28 09:33:34.482148437 +0000 UTC m=+0.058634981 container create 682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070 (image=quay.io/ceph/ceph:v20, name=elegant_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:33:34 np0005634017 systemd[1]: Started libpod-conmon-682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070.scope.
Feb 28 04:33:34 np0005634017 podman[77655]: 2026-02-28 09:33:34.448734501 +0000 UTC m=+0.025221135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a35191496d805f2ec8594a00ca50c4b847fc3d65116d21bbec463da3f3180c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a35191496d805f2ec8594a00ca50c4b847fc3d65116d21bbec463da3f3180c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a35191496d805f2ec8594a00ca50c4b847fc3d65116d21bbec463da3f3180c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:34 np0005634017 podman[77655]: 2026-02-28 09:33:34.594274271 +0000 UTC m=+0.170760855 container init 682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070 (image=quay.io/ceph/ceph:v20, name=elegant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 04:33:34 np0005634017 podman[77655]: 2026-02-28 09:33:34.599024656 +0000 UTC m=+0.175511230 container start 682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070 (image=quay.io/ceph/ceph:v20, name=elegant_franklin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:34 np0005634017 podman[77655]: 2026-02-28 09:33:34.602785192 +0000 UTC m=+0.179271816 container attach 682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070 (image=quay.io/ceph/ceph:v20, name=elegant_franklin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:34 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:34 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:35 np0005634017 systemd[1]: Created slice User Slice of UID 42477.
Feb 28 04:33:35 np0005634017 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 28 04:33:35 np0005634017 systemd-logind[815]: New session 20 of user ceph-admin.
Feb 28 04:33:35 np0005634017 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 28 04:33:35 np0005634017 systemd[1]: Starting User Manager for UID 42477...
Feb 28 04:33:35 np0005634017 systemd[77703]: Queued start job for default target Main User Target.
Feb 28 04:33:35 np0005634017 systemd[77703]: Created slice User Application Slice.
Feb 28 04:33:35 np0005634017 systemd[77703]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 28 04:33:35 np0005634017 systemd[77703]: Started Daily Cleanup of User's Temporary Directories.
Feb 28 04:33:35 np0005634017 systemd[77703]: Reached target Paths.
Feb 28 04:33:35 np0005634017 systemd[77703]: Reached target Timers.
Feb 28 04:33:35 np0005634017 systemd[77703]: Starting D-Bus User Message Bus Socket...
Feb 28 04:33:35 np0005634017 systemd[77703]: Starting Create User's Volatile Files and Directories...
Feb 28 04:33:35 np0005634017 systemd[77703]: Listening on D-Bus User Message Bus Socket.
Feb 28 04:33:35 np0005634017 systemd[77703]: Reached target Sockets.
Feb 28 04:33:35 np0005634017 systemd[77703]: Finished Create User's Volatile Files and Directories.
Feb 28 04:33:35 np0005634017 systemd[77703]: Reached target Basic System.
Feb 28 04:33:35 np0005634017 systemd[1]: Started User Manager for UID 42477.
Feb 28 04:33:35 np0005634017 systemd[77703]: Reached target Main User Target.
Feb 28 04:33:35 np0005634017 systemd[77703]: Startup finished in 116ms.
Feb 28 04:33:35 np0005634017 systemd[1]: Started Session 20 of User ceph-admin.
Feb 28 04:33:35 np0005634017 ceph-mon[76304]: Set ssh ssh_identity_pub
Feb 28 04:33:35 np0005634017 systemd-logind[815]: New session 22 of user ceph-admin.
Feb 28 04:33:35 np0005634017 systemd[1]: Started Session 22 of User ceph-admin.
Feb 28 04:33:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052632 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:33:35 np0005634017 systemd-logind[815]: New session 23 of user ceph-admin.
Feb 28 04:33:35 np0005634017 systemd[1]: Started Session 23 of User ceph-admin.
Feb 28 04:33:36 np0005634017 systemd-logind[815]: New session 24 of user ceph-admin.
Feb 28 04:33:36 np0005634017 systemd[1]: Started Session 24 of User ceph-admin.
Feb 28 04:33:36 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:36 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb 28 04:33:36 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb 28 04:33:36 np0005634017 systemd-logind[815]: New session 25 of user ceph-admin.
Feb 28 04:33:36 np0005634017 systemd[1]: Started Session 25 of User ceph-admin.
Feb 28 04:33:36 np0005634017 systemd-logind[815]: New session 26 of user ceph-admin.
Feb 28 04:33:36 np0005634017 systemd[1]: Started Session 26 of User ceph-admin.
Feb 28 04:33:36 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:37 np0005634017 systemd-logind[815]: New session 27 of user ceph-admin.
Feb 28 04:33:37 np0005634017 systemd[1]: Started Session 27 of User ceph-admin.
Feb 28 04:33:37 np0005634017 systemd-logind[815]: New session 28 of user ceph-admin.
Feb 28 04:33:37 np0005634017 systemd[1]: Started Session 28 of User ceph-admin.
Feb 28 04:33:37 np0005634017 ceph-mon[76304]: Deploying cephadm binary to compute-0
Feb 28 04:33:37 np0005634017 systemd-logind[815]: New session 29 of user ceph-admin.
Feb 28 04:33:37 np0005634017 systemd[1]: Started Session 29 of User ceph-admin.
Feb 28 04:33:38 np0005634017 systemd-logind[815]: New session 30 of user ceph-admin.
Feb 28 04:33:38 np0005634017 systemd[1]: Started Session 30 of User ceph-admin.
Feb 28 04:33:38 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:38 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:39 np0005634017 systemd-logind[815]: New session 31 of user ceph-admin.
Feb 28 04:33:39 np0005634017 systemd[1]: Started Session 31 of User ceph-admin.
Feb 28 04:33:39 np0005634017 systemd-logind[815]: New session 32 of user ceph-admin.
Feb 28 04:33:39 np0005634017 systemd[1]: Started Session 32 of User ceph-admin.
Feb 28 04:33:40 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 28 04:33:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:40 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Added host compute-0
Feb 28 04:33:40 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 28 04:33:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 28 04:33:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 28 04:33:40 np0005634017 elegant_franklin[77673]: Added host 'compute-0' with addr '192.168.122.100'
Feb 28 04:33:40 np0005634017 systemd[1]: libpod-682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070.scope: Deactivated successfully.
Feb 28 04:33:40 np0005634017 podman[78061]: 2026-02-28 09:33:40.362988933 +0000 UTC m=+0.047496156 container died 682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070 (image=quay.io/ceph/ceph:v20, name=elegant_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 28 04:33:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f6a35191496d805f2ec8594a00ca50c4b847fc3d65116d21bbec463da3f3180c-merged.mount: Deactivated successfully.
Feb 28 04:33:40 np0005634017 podman[78061]: 2026-02-28 09:33:40.408393528 +0000 UTC m=+0.092900701 container remove 682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070 (image=quay.io/ceph/ceph:v20, name=elegant_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:40 np0005634017 systemd[1]: libpod-conmon-682a65aacf8b86d8960ff95dbda8b342d32852e548e9560a4acfcd3e216be070.scope: Deactivated successfully.
Feb 28 04:33:40 np0005634017 podman[78121]: 2026-02-28 09:33:40.488251919 +0000 UTC m=+0.053844786 container create d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0 (image=quay.io/ceph/ceph:v20, name=great_solomon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:33:40 np0005634017 systemd[1]: Started libpod-conmon-d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0.scope.
Feb 28 04:33:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:40 np0005634017 podman[78121]: 2026-02-28 09:33:40.462403937 +0000 UTC m=+0.027996824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b37327f8559fbdd4ad68664167af74096ca14db635c1d8edf896e4ebd7ebb2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b37327f8559fbdd4ad68664167af74096ca14db635c1d8edf896e4ebd7ebb2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b37327f8559fbdd4ad68664167af74096ca14db635c1d8edf896e4ebd7ebb2c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:40 np0005634017 podman[78121]: 2026-02-28 09:33:40.573684387 +0000 UTC m=+0.139277334 container init d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0 (image=quay.io/ceph/ceph:v20, name=great_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:33:40 np0005634017 podman[78121]: 2026-02-28 09:33:40.584707689 +0000 UTC m=+0.150300556 container start d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0 (image=quay.io/ceph/ceph:v20, name=great_solomon, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:40 np0005634017 podman[78121]: 2026-02-28 09:33:40.58826662 +0000 UTC m=+0.153859527 container attach d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0 (image=quay.io/ceph/ceph:v20, name=great_solomon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 04:33:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:33:40 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:41 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:41 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb 28 04:33:41 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:41 np0005634017 great_solomon[78138]: Scheduled mon update...
Feb 28 04:33:41 np0005634017 systemd[1]: libpod-d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0.scope: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78121]: 2026-02-28 09:33:41.030445967 +0000 UTC m=+0.596038824 container died d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0 (image=quay.io/ceph/ceph:v20, name=great_solomon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1b37327f8559fbdd4ad68664167af74096ca14db635c1d8edf896e4ebd7ebb2c-merged.mount: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78121]: 2026-02-28 09:33:41.063825872 +0000 UTC m=+0.629418769 container remove d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0 (image=quay.io/ceph/ceph:v20, name=great_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:41 np0005634017 systemd[1]: libpod-conmon-d5799724ede28d4d8e636b6cce157d99e13302624e80aa703ad30de96fe91be0.scope: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.141539132 +0000 UTC m=+0.056196092 container create 85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d (image=quay.io/ceph/ceph:v20, name=gracious_proskuriakova, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:41 np0005634017 systemd[1]: Started libpod-conmon-85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d.scope.
Feb 28 04:33:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c12b57ea33d99655877880c11047def27c284be3ef41ff7eeb180dfe2863e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c12b57ea33d99655877880c11047def27c284be3ef41ff7eeb180dfe2863e52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c12b57ea33d99655877880c11047def27c284be3ef41ff7eeb180dfe2863e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.118636684 +0000 UTC m=+0.033293664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.227875656 +0000 UTC m=+0.142532666 container init 85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d (image=quay.io/ceph/ceph:v20, name=gracious_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.235008238 +0000 UTC m=+0.149665208 container start 85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d (image=quay.io/ceph/ceph:v20, name=gracious_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.23896987 +0000 UTC m=+0.153626840 container attach 85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d (image=quay.io/ceph/ceph:v20, name=gracious_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: Added host compute-0
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:41 np0005634017 podman[78158]: 2026-02-28 09:33:41.538796867 +0000 UTC m=+0.815135845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.663002023 +0000 UTC m=+0.056682536 container create 173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462 (image=quay.io/ceph/ceph:v20, name=compassionate_nobel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:41 np0005634017 systemd[1]: Started libpod-conmon-173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462.scope.
Feb 28 04:33:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.720941783 +0000 UTC m=+0.114622316 container init 173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462 (image=quay.io/ceph/ceph:v20, name=compassionate_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.727139568 +0000 UTC m=+0.120820091 container start 173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462 (image=quay.io/ceph/ceph:v20, name=compassionate_nobel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.633697503 +0000 UTC m=+0.027378066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.731302596 +0000 UTC m=+0.124983169 container attach 173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462 (image=quay.io/ceph/ceph:v20, name=compassionate_nobel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:33:41 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:41 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb 28 04:33:41 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:41 np0005634017 gracious_proskuriakova[78220]: Scheduled mgr update...
Feb 28 04:33:41 np0005634017 systemd[1]: libpod-85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d.scope: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.7808809 +0000 UTC m=+0.695537840 container died 85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d (image=quay.io/ceph/ceph:v20, name=gracious_proskuriakova, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2c12b57ea33d99655877880c11047def27c284be3ef41ff7eeb180dfe2863e52-merged.mount: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78203]: 2026-02-28 09:33:41.817172687 +0000 UTC m=+0.731829617 container remove 85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d (image=quay.io/ceph/ceph:v20, name=gracious_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:41 np0005634017 systemd[1]: libpod-conmon-85b7a8766ecde16989f020042870885c59b02863a3c28c9ed97323ef0289909d.scope: Deactivated successfully.
Feb 28 04:33:41 np0005634017 compassionate_nobel[78274]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 28 04:33:41 np0005634017 systemd[1]: libpod-173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462.scope: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.847489275 +0000 UTC m=+0.241169758 container died 173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462 (image=quay.io/ceph/ceph:v20, name=compassionate_nobel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:33:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2e2d1181b8293e30bb7c4ddb3922f5df83dfad36f85dcdd040668712ca49f0d1-merged.mount: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78258]: 2026-02-28 09:33:41.895112963 +0000 UTC m=+0.288793446 container remove 173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462 (image=quay.io/ceph/ceph:v20, name=compassionate_nobel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:33:41 np0005634017 systemd[1]: libpod-conmon-173128d7e81a4ad4fcc1f74a2d3f7551550bd27d8928eddb17b0af97d4c90462.scope: Deactivated successfully.
Feb 28 04:33:41 np0005634017 podman[78294]: 2026-02-28 09:33:41.908565854 +0000 UTC m=+0.073078330 container create 582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9 (image=quay.io/ceph/ceph:v20, name=distracted_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb 28 04:33:41 np0005634017 systemd[1]: Started libpod-conmon-582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9.scope.
Feb 28 04:33:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaa02318c4f18321d5cc3e13c73642de3aa40822ea2763fa6ddd555e8521371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaa02318c4f18321d5cc3e13c73642de3aa40822ea2763fa6ddd555e8521371/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaa02318c4f18321d5cc3e13c73642de3aa40822ea2763fa6ddd555e8521371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:41 np0005634017 podman[78294]: 2026-02-28 09:33:41.867873702 +0000 UTC m=+0.032386208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:41 np0005634017 podman[78294]: 2026-02-28 09:33:41.96988613 +0000 UTC m=+0.134398636 container init 582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9 (image=quay.io/ceph/ceph:v20, name=distracted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:41 np0005634017 podman[78294]: 2026-02-28 09:33:41.974538512 +0000 UTC m=+0.139050988 container start 582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9 (image=quay.io/ceph/ceph:v20, name=distracted_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:33:41 np0005634017 podman[78294]: 2026-02-28 09:33:41.978074912 +0000 UTC m=+0.142587418 container attach 582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9 (image=quay.io/ceph/ceph:v20, name=distracted_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:42 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: Saving service mon spec with placement count:5
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:42 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:42 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service crash spec with placement *
Feb 28 04:33:42 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 28 04:33:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:42 np0005634017 distracted_moser[78319]: Scheduled crash update...
Feb 28 04:33:42 np0005634017 systemd[1]: libpod-582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9.scope: Deactivated successfully.
Feb 28 04:33:42 np0005634017 podman[78294]: 2026-02-28 09:33:42.448021575 +0000 UTC m=+0.612534091 container died 582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9 (image=quay.io/ceph/ceph:v20, name=distracted_moser, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:33:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-aaaa02318c4f18321d5cc3e13c73642de3aa40822ea2763fa6ddd555e8521371-merged.mount: Deactivated successfully.
Feb 28 04:33:42 np0005634017 podman[78294]: 2026-02-28 09:33:42.651474255 +0000 UTC m=+0.815986761 container remove 582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9 (image=quay.io/ceph/ceph:v20, name=distracted_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:42 np0005634017 systemd[1]: libpod-conmon-582d9f9be1b74c24de63bbbc8ea9747c6a1dfca3ccdcbc910525079b594729e9.scope: Deactivated successfully.
Feb 28 04:33:42 np0005634017 podman[78480]: 2026-02-28 09:33:42.743255373 +0000 UTC m=+0.070988361 container create 86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a (image=quay.io/ceph/ceph:v20, name=recursing_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 04:33:42 np0005634017 podman[78480]: 2026-02-28 09:33:42.709523608 +0000 UTC m=+0.037256656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:42 np0005634017 systemd[1]: Started libpod-conmon-86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a.scope.
Feb 28 04:33:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db48310f0351af4207ef498b39a47d9b1d4f92ca41a729c88e8d53c0f2aa7e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db48310f0351af4207ef498b39a47d9b1d4f92ca41a729c88e8d53c0f2aa7e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db48310f0351af4207ef498b39a47d9b1d4f92ca41a729c88e8d53c0f2aa7e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:42 np0005634017 podman[78480]: 2026-02-28 09:33:42.880721544 +0000 UTC m=+0.208454542 container init 86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a (image=quay.io/ceph/ceph:v20, name=recursing_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:33:42 np0005634017 podman[78480]: 2026-02-28 09:33:42.887481015 +0000 UTC m=+0.215213963 container start 86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a (image=quay.io/ceph/ceph:v20, name=recursing_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:33:42 np0005634017 podman[78480]: 2026-02-28 09:33:42.891109698 +0000 UTC m=+0.218842656 container attach 86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a (image=quay.io/ceph/ceph:v20, name=recursing_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 04:33:42 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:43 np0005634017 podman[78545]: 2026-02-28 09:33:43.101283128 +0000 UTC m=+0.168681726 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:33:43 np0005634017 podman[78545]: 2026-02-28 09:33:43.183583818 +0000 UTC m=+0.250982366 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3413842770' entity='client.admin' 
Feb 28 04:33:43 np0005634017 podman[78480]: 2026-02-28 09:33:43.338297507 +0000 UTC m=+0.666030465 container died 86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a (image=quay.io/ceph/ceph:v20, name=recursing_rubin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 04:33:43 np0005634017 systemd[1]: libpod-86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a.scope: Deactivated successfully.
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: Saving service mgr spec with placement count:2
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: Saving service crash spec with placement *
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3413842770' entity='client.admin' 
Feb 28 04:33:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4db48310f0351af4207ef498b39a47d9b1d4f92ca41a729c88e8d53c0f2aa7e5-merged.mount: Deactivated successfully.
Feb 28 04:33:43 np0005634017 podman[78480]: 2026-02-28 09:33:43.52772647 +0000 UTC m=+0.855459408 container remove 86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a (image=quay.io/ceph/ceph:v20, name=recursing_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:33:43 np0005634017 systemd[1]: libpod-conmon-86563a37bb375db407c669135d09ce49f1e53f916ca6a9a4f4e945ee7bc2b99a.scope: Deactivated successfully.
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:43 np0005634017 podman[78655]: 2026-02-28 09:33:43.640046509 +0000 UTC m=+0.089961507 container create 5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2 (image=quay.io/ceph/ceph:v20, name=elastic_elgamal, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:33:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:43 np0005634017 systemd[1]: Started libpod-conmon-5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2.scope.
Feb 28 04:33:43 np0005634017 podman[78655]: 2026-02-28 09:33:43.578477166 +0000 UTC m=+0.028392144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbb26a985e07818043562d1669aa55d2f5336dfa63002936574f4d6c37266db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbb26a985e07818043562d1669aa55d2f5336dfa63002936574f4d6c37266db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbb26a985e07818043562d1669aa55d2f5336dfa63002936574f4d6c37266db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:43 np0005634017 podman[78655]: 2026-02-28 09:33:43.788993326 +0000 UTC m=+0.238908394 container init 5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2 (image=quay.io/ceph/ceph:v20, name=elastic_elgamal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:43 np0005634017 podman[78655]: 2026-02-28 09:33:43.795015676 +0000 UTC m=+0.244930684 container start 5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2 (image=quay.io/ceph/ceph:v20, name=elastic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:43 np0005634017 podman[78655]: 2026-02-28 09:33:43.798819644 +0000 UTC m=+0.248734702 container attach 5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2 (image=quay.io/ceph/ceph:v20, name=elastic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:44 np0005634017 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78758 (sysctl)
Feb 28 04:33:44 np0005634017 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 28 04:33:44 np0005634017 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 28 04:33:44 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:44 np0005634017 systemd[1]: libpod-5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2.scope: Deactivated successfully.
Feb 28 04:33:44 np0005634017 podman[78655]: 2026-02-28 09:33:44.213464392 +0000 UTC m=+0.663379370 container died 5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2 (image=quay.io/ceph/ceph:v20, name=elastic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:33:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dfbb26a985e07818043562d1669aa55d2f5336dfa63002936574f4d6c37266db-merged.mount: Deactivated successfully.
Feb 28 04:33:44 np0005634017 podman[78655]: 2026-02-28 09:33:44.25228163 +0000 UTC m=+0.702196598 container remove 5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2 (image=quay.io/ceph/ceph:v20, name=elastic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 04:33:44 np0005634017 systemd[1]: libpod-conmon-5c1796f0edeb3afd15485a9aa3a9462400ed20e4f4fd562d78f645eff375c3f2.scope: Deactivated successfully.
Feb 28 04:33:44 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.312011901 +0000 UTC m=+0.037754909 container create 73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5 (image=quay.io/ceph/ceph:v20, name=charming_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:44 np0005634017 systemd[1]: Started libpod-conmon-73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5.scope.
Feb 28 04:33:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef242f9bb35bf2c553ab695e4a83f740079b7a89d1bc73282de0b6f02ed4e7fe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef242f9bb35bf2c553ab695e4a83f740079b7a89d1bc73282de0b6f02ed4e7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef242f9bb35bf2c553ab695e4a83f740079b7a89d1bc73282de0b6f02ed4e7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.292923881 +0000 UTC m=+0.018666909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.409052758 +0000 UTC m=+0.134795866 container init 73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5 (image=quay.io/ceph/ceph:v20, name=charming_mestorf, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.416413147 +0000 UTC m=+0.142156175 container start 73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5 (image=quay.io/ceph/ceph:v20, name=charming_mestorf, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.420038429 +0000 UTC m=+0.145781527 container attach 73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5 (image=quay.io/ceph/ceph:v20, name=charming_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:44 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 28 04:33:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:44 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Added label _admin to host compute-0
Feb 28 04:33:44 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb 28 04:33:44 np0005634017 charming_mestorf[78834]: Added label _admin to host compute-0
Feb 28 04:33:44 np0005634017 systemd[1]: libpod-73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5.scope: Deactivated successfully.
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.834486412 +0000 UTC m=+0.560229470 container died 73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5 (image=quay.io/ceph/ceph:v20, name=charming_mestorf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 04:33:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ef242f9bb35bf2c553ab695e4a83f740079b7a89d1bc73282de0b6f02ed4e7fe-merged.mount: Deactivated successfully.
Feb 28 04:33:44 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:44 np0005634017 podman[78794]: 2026-02-28 09:33:44.988902272 +0000 UTC m=+0.714645280 container remove 73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5 (image=quay.io/ceph/ceph:v20, name=charming_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:44 np0005634017 systemd[1]: libpod-conmon-73b035da8127556c6b9cb37d4cfcea6484d0cd11dbdf2813540bf2a8a2244fa5.scope: Deactivated successfully.
Feb 28 04:33:45 np0005634017 podman[78975]: 2026-02-28 09:33:45.027185706 +0000 UTC m=+0.020993166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:45 np0005634017 podman[78975]: 2026-02-28 09:33:45.381051573 +0000 UTC m=+0.374859053 container create 1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5 (image=quay.io/ceph/ceph:v20, name=elastic_antonelli, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:45 np0005634017 systemd[1]: Started libpod-conmon-1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5.scope.
Feb 28 04:33:45 np0005634017 podman[78992]: 2026-02-28 09:33:45.402580772 +0000 UTC m=+0.367475753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:33:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615ea3f04358a3b728713c1451a3bc40e259f5004c2a340b12e1c1de34cd6409/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615ea3f04358a3b728713c1451a3bc40e259f5004c2a340b12e1c1de34cd6409/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615ea3f04358a3b728713c1451a3bc40e259f5004c2a340b12e1c1de34cd6409/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:33:45 np0005634017 podman[78992]: 2026-02-28 09:33:45.879635627 +0000 UTC m=+0.844530548 container create bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_shtern, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:45 np0005634017 systemd[1]: Started libpod-conmon-bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d.scope.
Feb 28 04:33:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:45 np0005634017 podman[78975]: 2026-02-28 09:33:45.962235185 +0000 UTC m=+0.956042655 container init 1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5 (image=quay.io/ceph/ceph:v20, name=elastic_antonelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:45 np0005634017 podman[78975]: 2026-02-28 09:33:45.968002968 +0000 UTC m=+0.961810438 container start 1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5 (image=quay.io/ceph/ceph:v20, name=elastic_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:45 np0005634017 podman[78992]: 2026-02-28 09:33:45.970705385 +0000 UTC m=+0.935600326 container init bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:45 np0005634017 podman[78975]: 2026-02-28 09:33:45.975051978 +0000 UTC m=+0.968859448 container attach 1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5 (image=quay.io/ceph/ceph:v20, name=elastic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:33:45 np0005634017 podman[78992]: 2026-02-28 09:33:45.975283135 +0000 UTC m=+0.940178046 container start bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_shtern, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:33:45 np0005634017 practical_shtern[79015]: 167 167
Feb 28 04:33:45 np0005634017 podman[78992]: 2026-02-28 09:33:45.979958687 +0000 UTC m=+0.944853598 container attach bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_shtern, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 04:33:45 np0005634017 systemd[1]: libpod-bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d.scope: Deactivated successfully.
Feb 28 04:33:45 np0005634017 podman[78992]: 2026-02-28 09:33:45.981213762 +0000 UTC m=+0.946108663 container died bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_shtern, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 04:33:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4c9c5a48d85adeebf1384f39ad4169707b7129b441ea0ca1e44d833f5b6e65d5-merged.mount: Deactivated successfully.
Feb 28 04:33:46 np0005634017 podman[78992]: 2026-02-28 09:33:46.021457072 +0000 UTC m=+0.986351973 container remove bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:33:46 np0005634017 systemd[1]: libpod-conmon-bf2433cd9b47930ef96642a255695b4b849df728e0ccc8c501e26c260e158f3d.scope: Deactivated successfully.
Feb 28 04:33:46 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb 28 04:33:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/493318313' entity='client.admin' 
Feb 28 04:33:46 np0005634017 elastic_antonelli[79010]: set mgr/dashboard/cluster/status
Feb 28 04:33:46 np0005634017 systemd[1]: libpod-1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5.scope: Deactivated successfully.
Feb 28 04:33:46 np0005634017 podman[78975]: 2026-02-28 09:33:46.595290016 +0000 UTC m=+1.589097466 container died 1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5 (image=quay.io/ceph/ceph:v20, name=elastic_antonelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:33:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-615ea3f04358a3b728713c1451a3bc40e259f5004c2a340b12e1c1de34cd6409-merged.mount: Deactivated successfully.
Feb 28 04:33:46 np0005634017 podman[78975]: 2026-02-28 09:33:46.633168068 +0000 UTC m=+1.626975518 container remove 1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5 (image=quay.io/ceph/ceph:v20, name=elastic_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 04:33:46 np0005634017 systemd[1]: libpod-conmon-1ac990ad64a17923d1bbbbd5f7abdad066e7a3a9c43d8460dcc27bdbf360afd5.scope: Deactivated successfully.
Feb 28 04:33:46 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:46 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:46 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:46 np0005634017 ceph-mon[76304]: Added label _admin to host compute-0
Feb 28 04:33:46 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/493318313' entity='client.admin' 
Feb 28 04:33:46 np0005634017 ceph-mgr[76610]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.083233149 +0000 UTC m=+0.054561796 container create 8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:47 np0005634017 systemd[1]: Started libpod-conmon-8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a.scope.
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.057216052 +0000 UTC m=+0.028544789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:33:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf975fb0812664f1ecdaf771e4590a32d91d47ed724d546cdad773da56985493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf975fb0812664f1ecdaf771e4590a32d91d47ed724d546cdad773da56985493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf975fb0812664f1ecdaf771e4590a32d91d47ed724d546cdad773da56985493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf975fb0812664f1ecdaf771e4590a32d91d47ed724d546cdad773da56985493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.181660305 +0000 UTC m=+0.152988942 container init 8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.190379042 +0000 UTC m=+0.161707709 container start 8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.193696456 +0000 UTC m=+0.165025123 container attach 8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:33:47 np0005634017 python3[79170]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:47 np0005634017 podman[79175]: 2026-02-28 09:33:47.576567364 +0000 UTC m=+0.061394809 container create ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172 (image=quay.io/ceph/ceph:v20, name=vigilant_brattain, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 04:33:47 np0005634017 systemd[1]: Started libpod-conmon-ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172.scope.
Feb 28 04:33:47 np0005634017 podman[79175]: 2026-02-28 09:33:47.554252892 +0000 UTC m=+0.039080147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7bbe08d9159092924c809e0841e4c029f2495815c0e40535dbc99e15017726/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7bbe08d9159092924c809e0841e4c029f2495815c0e40535dbc99e15017726/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:47 np0005634017 podman[79175]: 2026-02-28 09:33:47.669151315 +0000 UTC m=+0.153978640 container init ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172 (image=quay.io/ceph/ceph:v20, name=vigilant_brattain, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:47 np0005634017 podman[79175]: 2026-02-28 09:33:47.676904204 +0000 UTC m=+0.161731479 container start ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172 (image=quay.io/ceph/ceph:v20, name=vigilant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:47 np0005634017 podman[79175]: 2026-02-28 09:33:47.680655891 +0000 UTC m=+0.165483136 container attach ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172 (image=quay.io/ceph/ceph:v20, name=vigilant_brattain, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]: [
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:    {
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "available": false,
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "being_replaced": false,
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "ceph_device_lvm": false,
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "lsm_data": {},
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "lvs": [],
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "path": "/dev/sr0",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "rejected_reasons": [
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "Insufficient space (<5GB)",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "Has a FileSystem"
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        ],
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        "sys_api": {
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "actuators": null,
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "device_nodes": [
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:                "sr0"
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            ],
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "devname": "sr0",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "human_readable_size": "482.00 KB",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "id_bus": "ata",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "model": "QEMU DVD-ROM",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "nr_requests": "2",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "parent": "/dev/sr0",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "partitions": {},
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "path": "/dev/sr0",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "removable": "1",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "rev": "2.5+",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "ro": "0",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "rotational": "1",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "sas_address": "",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "sas_device_handle": "",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "scheduler_mode": "mq-deadline",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "sectors": 0,
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "sectorsize": "2048",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "size": 493568.0,
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "support_discard": "2048",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "type": "disk",
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:            "vendor": "QEMU"
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:        }
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]:    }
Feb 28 04:33:47 np0005634017 cool_bardeen[79138]: ]
Feb 28 04:33:47 np0005634017 systemd[1]: libpod-8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a.scope: Deactivated successfully.
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.731633704 +0000 UTC m=+0.702962351 container died 8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 04:33:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bf975fb0812664f1ecdaf771e4590a32d91d47ed724d546cdad773da56985493-merged.mount: Deactivated successfully.
Feb 28 04:33:47 np0005634017 podman[79121]: 2026-02-28 09:33:47.781141765 +0000 UTC m=+0.752470442 container remove 8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:47 np0005634017 systemd[1]: libpod-conmon-8210fc0ceec54debb4ecdd4495f6f9ec9ea983f630f63f95b02b971b7ffc515a.scope: Deactivated successfully.
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:47 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 28 04:33:47 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2612086099' entity='client.admin' 
Feb 28 04:33:48 np0005634017 systemd[1]: libpod-ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172.scope: Deactivated successfully.
Feb 28 04:33:48 np0005634017 podman[80004]: 2026-02-28 09:33:48.140100967 +0000 UTC m=+0.024726421 container died ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172 (image=quay.io/ceph/ceph:v20, name=vigilant_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:33:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0a7bbe08d9159092924c809e0841e4c029f2495815c0e40535dbc99e15017726-merged.mount: Deactivated successfully.
Feb 28 04:33:48 np0005634017 podman[80004]: 2026-02-28 09:33:48.184999388 +0000 UTC m=+0.069624822 container remove ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172 (image=quay.io/ceph/ceph:v20, name=vigilant_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Feb 28 04:33:48 np0005634017 systemd[1]: libpod-conmon-ea3566da5b2c15cd96ccd6dd3e78ce6f97e0c9b8fb33e4fda03adc8c80158172.scope: Deactivated successfully.
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/config/ceph.conf
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/config/ceph.conf
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: Updating compute-0:/etc/ceph/ceph.conf
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2612086099' entity='client.admin' 
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: Updating compute-0:/var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/config/ceph.conf
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb 28 04:33:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:33:48 np0005634017 ceph-mon[76304]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 28 04:33:48 np0005634017 ansible-async_wrapper.py[80527]: Invoked with j687559067307 30 /home/zuul/.ansible/tmp/ansible-tmp-1772271228.4705875-37417-67148265934756/AnsiballZ_command.py _
Feb 28 04:33:48 np0005634017 ansible-async_wrapper.py[80585]: Starting module and watcher
Feb 28 04:33:48 np0005634017 ansible-async_wrapper.py[80585]: Start watching 80587 (30)
Feb 28 04:33:48 np0005634017 ansible-async_wrapper.py[80587]: Start module (80587)
Feb 28 04:33:48 np0005634017 ansible-async_wrapper.py[80527]: Return async_wrapper task started.
Feb 28 04:33:49 np0005634017 python3[80591]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.182784072 +0000 UTC m=+0.036565116 container create 7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea (image=quay.io/ceph/ceph:v20, name=focused_wiles, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:49 np0005634017 systemd[1]: Started libpod-conmon-7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea.scope.
Feb 28 04:33:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a7da5d75aab525f1369f269f1ea8437a403bf69d0814e502d0a296b3b8929f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a7da5d75aab525f1369f269f1ea8437a403bf69d0814e502d0a296b3b8929f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.169459825 +0000 UTC m=+0.023240889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.26890714 +0000 UTC m=+0.122688204 container init 7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea (image=quay.io/ceph/ceph:v20, name=focused_wiles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.277356569 +0000 UTC m=+0.131137613 container start 7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea (image=quay.io/ceph/ceph:v20, name=focused_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.280485318 +0000 UTC m=+0.134266362 container attach 7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea (image=quay.io/ceph/ceph:v20, name=focused_wiles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:49 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/config/ceph.client.admin.keyring
Feb 28 04:33:49 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/config/ceph.client.admin.keyring
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:49 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:49 np0005634017 focused_wiles[80697]: 
Feb 28 04:33:49 np0005634017 focused_wiles[80697]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:49 np0005634017 systemd[1]: libpod-7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea.scope: Deactivated successfully.
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.765462717 +0000 UTC m=+0.619243761 container died 7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea (image=quay.io/ceph/ceph:v20, name=focused_wiles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:49 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 2921d81d-c046-4cac-a6e0-c38edea33e8e (Updating crash deployment (+1 -> 1))
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:49 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb 28 04:33:49 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb 28 04:33:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-49a7da5d75aab525f1369f269f1ea8437a403bf69d0814e502d0a296b3b8929f-merged.mount: Deactivated successfully.
Feb 28 04:33:49 np0005634017 podman[80648]: 2026-02-28 09:33:49.797273737 +0000 UTC m=+0.651054771 container remove 7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea (image=quay.io/ceph/ceph:v20, name=focused_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:49 np0005634017 systemd[1]: libpod-conmon-7ea062b4a4299f889e0a13e9b98747b71dbdfa9895602fe0ccdd3031e4bbeaea.scope: Deactivated successfully.
Feb 28 04:33:49 np0005634017 ansible-async_wrapper.py[80587]: Module complete (80587)
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: Updating compute-0:/var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/config/ceph.client.admin.keyring
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 28 04:33:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.248281074 +0000 UTC m=+0.040822306 container create 5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shockley, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 04:33:50 np0005634017 systemd[1]: Started libpod-conmon-5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8.scope.
Feb 28 04:33:50 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.320767686 +0000 UTC m=+0.113308928 container init 5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.230479051 +0000 UTC m=+0.023020283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.328001491 +0000 UTC m=+0.120542733 container start 5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shockley, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.331168471 +0000 UTC m=+0.123709723 container attach 5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:50 np0005634017 condescending_shockley[81166]: 167 167
Feb 28 04:33:50 np0005634017 systemd[1]: libpod-5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8.scope: Deactivated successfully.
Feb 28 04:33:50 np0005634017 conmon[81166]: conmon 5d59564eefd4024bdd2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8.scope/container/memory.events
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.335118683 +0000 UTC m=+0.127659895 container died 5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb 28 04:33:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8a4ce1276eef25ebf9a916cbfc0c35e5ae4794fc87b4cce3a455c2eca48bfea8-merged.mount: Deactivated successfully.
Feb 28 04:33:50 np0005634017 podman[81124]: 2026-02-28 09:33:50.372672556 +0000 UTC m=+0.165213768 container remove 5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shockley, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:50 np0005634017 python3[81162]: ansible-ansible.legacy.async_status Invoked with jid=j687559067307.80527 mode=status _async_dir=/root/.ansible_async
Feb 28 04:33:50 np0005634017 systemd[1]: libpod-conmon-5d59564eefd4024bdd2bcf3f72e3608f604c04d01ed9b75b28eb4f6a2ce258f8.scope: Deactivated successfully.
Feb 28 04:33:50 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:50 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:50 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:50 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:50 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:50 np0005634017 python3[81278]: ansible-ansible.legacy.async_status Invoked with jid=j687559067307.80527 mode=cleanup _async_dir=/root/.ansible_async
Feb 28 04:33:50 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:50 np0005634017 ceph-mon[76304]: Deploying daemon crash.compute-0 on compute-0
Feb 28 04:33:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:33:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:33:51 np0005634017 systemd[1]: Starting Ceph crash.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:33:52 np0005634017 podman[81401]: 2026-02-28 09:33:52.193366305 +0000 UTC m=+0.036950386 container create 7f2ed486c4ede5c582534ebf73d20b3ef50cba5810e3217bc146f1c70915f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 04:33:52 np0005634017 python3[81379]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43de4ac7c327f37e630130e0ad6bbdafeec66c02de1c43b11c91308d240738b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43de4ac7c327f37e630130e0ad6bbdafeec66c02de1c43b11c91308d240738b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43de4ac7c327f37e630130e0ad6bbdafeec66c02de1c43b11c91308d240738b/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43de4ac7c327f37e630130e0ad6bbdafeec66c02de1c43b11c91308d240738b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 podman[81401]: 2026-02-28 09:33:52.258719485 +0000 UTC m=+0.102303616 container init 7f2ed486c4ede5c582534ebf73d20b3ef50cba5810e3217bc146f1c70915f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:33:52 np0005634017 podman[81401]: 2026-02-28 09:33:52.262533263 +0000 UTC m=+0.106117364 container start 7f2ed486c4ede5c582534ebf73d20b3ef50cba5810e3217bc146f1c70915f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:52 np0005634017 bash[81401]: 7f2ed486c4ede5c582534ebf73d20b3ef50cba5810e3217bc146f1c70915f2df
Feb 28 04:33:52 np0005634017 podman[81401]: 2026-02-28 09:33:52.175098258 +0000 UTC m=+0.018682369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:33:52 np0005634017 systemd[1]: Started Ceph crash.compute-0 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: INFO:ceph-crash:pinging cluster to exercise our key
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 2921d81d-c046-4cac-a6e0-c38edea33e8e (Updating crash deployment (+1 -> 1))
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 2921d81d-c046-4cac-a6e0-c38edea33e8e (Updating crash deployment (+1 -> 1)) in 3 seconds
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev b85be8db-d600-488e-a03a-366e12b22cfe (Updating mgr deployment (+1 -> 2))
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqpirj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqpirj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqpirj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mgr services"} : dispatch
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.fqpirj on compute-0
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.fqpirj on compute-0
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: 2026-02-28T09:33:52.416+0000 7f44dd78f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: 2026-02-28T09:33:52.416+0000 7f44dd78f640 -1 AuthRegistry(0x7f44d8052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: 2026-02-28T09:33:52.418+0000 7f44dd78f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: 2026-02-28T09:33:52.418+0000 7f44dd78f640 -1 AuthRegistry(0x7f44dd78dfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: 2026-02-28T09:33:52.419+0000 7f44d6ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: 2026-02-28T09:33:52.419+0000 7f44dd78f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb 28 04:33:52 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-crash-compute-0[81416]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb 28 04:33:52 np0005634017 python3[81510]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:52 np0005634017 podman[81551]: 2026-02-28 09:33:52.884527001 +0000 UTC m=+0.046872728 container create 10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 28 04:33:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:33:52 np0005634017 systemd[1]: Started libpod-conmon-10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee.scope.
Feb 28 04:33:52 np0005634017 podman[81565]: 2026-02-28 09:33:52.931692286 +0000 UTC m=+0.050033977 container create af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751 (image=quay.io/ceph/ceph:v20, name=epic_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:33:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:52 np0005634017 podman[81551]: 2026-02-28 09:33:52.866687116 +0000 UTC m=+0.029032863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:33:52 np0005634017 systemd[1]: Started libpod-conmon-af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751.scope.
Feb 28 04:33:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:52 np0005634017 podman[81551]: 2026-02-28 09:33:52.973963943 +0000 UTC m=+0.136309700 container init 10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d0c8d1cf3fb917edbce5d31c2fe149062be205f153277eaa28196188628496/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d0c8d1cf3fb917edbce5d31c2fe149062be205f153277eaa28196188628496/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d0c8d1cf3fb917edbce5d31c2fe149062be205f153277eaa28196188628496/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:52 np0005634017 podman[81551]: 2026-02-28 09:33:52.986008844 +0000 UTC m=+0.148354571 container start 10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_heisenberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 04:33:52 np0005634017 podman[81551]: 2026-02-28 09:33:52.989361539 +0000 UTC m=+0.151707266 container attach 10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:52 np0005634017 loving_heisenberg[81580]: 167 167
Feb 28 04:33:52 np0005634017 systemd[1]: libpod-10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee.scope: Deactivated successfully.
Feb 28 04:33:52 np0005634017 podman[81551]: 2026-02-28 09:33:52.992993232 +0000 UTC m=+0.155338959 container died 10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:52 np0005634017 podman[81565]: 2026-02-28 09:33:52.993191477 +0000 UTC m=+0.111533148 container init af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751 (image=quay.io/ceph/ceph:v20, name=epic_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 04:33:52 np0005634017 podman[81565]: 2026-02-28 09:33:52.997549751 +0000 UTC m=+0.115891402 container start af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751 (image=quay.io/ceph/ceph:v20, name=epic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:33:53 np0005634017 podman[81565]: 2026-02-28 09:33:52.907927334 +0000 UTC m=+0.026269075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:53 np0005634017 podman[81565]: 2026-02-28 09:33:53.005118185 +0000 UTC m=+0.123459866 container attach af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751 (image=quay.io/ceph/ceph:v20, name=epic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8490220f28731de0f02808812b21fffe6a2d3a106486a21eb8a25df2aced6e78-merged.mount: Deactivated successfully.
Feb 28 04:33:53 np0005634017 podman[81551]: 2026-02-28 09:33:53.044340235 +0000 UTC m=+0.206685962 container remove 10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:53 np0005634017 systemd[1]: libpod-conmon-10526206845bcd054698d5c79d4e82977e97afd01c1a6e14ba409a359272a2ee.scope: Deactivated successfully.
Feb 28 04:33:53 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:53 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:53 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqpirj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqpirj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: Deploying daemon mgr.compute-0.fqpirj on compute-0
Feb 28 04:33:53 np0005634017 systemd[1]: Reloading.
Feb 28 04:33:53 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 28 04:33:53 np0005634017 epic_bhaskara[81587]: 
Feb 28 04:33:53 np0005634017 epic_bhaskara[81587]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 28 04:33:53 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:33:53 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:33:53 np0005634017 podman[81565]: 2026-02-28 09:33:53.454088794 +0000 UTC m=+0.572430475 container died af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751 (image=quay.io/ceph/ceph:v20, name=epic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 04:33:53 np0005634017 systemd[1]: libpod-af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751.scope: Deactivated successfully.
Feb 28 04:33:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-62d0c8d1cf3fb917edbce5d31c2fe149062be205f153277eaa28196188628496-merged.mount: Deactivated successfully.
Feb 28 04:33:53 np0005634017 podman[81565]: 2026-02-28 09:33:53.611886211 +0000 UTC m=+0.730227872 container remove af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751 (image=quay.io/ceph/ceph:v20, name=epic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 04:33:53 np0005634017 systemd[1]: Starting Ceph mgr.compute-0.fqpirj for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:33:53 np0005634017 systemd[1]: libpod-conmon-af620038641be6ed7d7aa9003dd0038a7b46f6a1bd8bfc96e8c238d6fbd7e751.scope: Deactivated successfully.
Feb 28 04:33:53 np0005634017 podman[81778]: 2026-02-28 09:33:53.831506618 +0000 UTC m=+0.060754661 container create 2d11b9e48e976363b5e2aabef5ed3d6eb9a82ac418560fc3570f1a51e22c5db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 04:33:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1462813674cf56078df5ee5c207543d67fdbf87f2daecda92af89c687cd049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1462813674cf56078df5ee5c207543d67fdbf87f2daecda92af89c687cd049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1462813674cf56078df5ee5c207543d67fdbf87f2daecda92af89c687cd049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1462813674cf56078df5ee5c207543d67fdbf87f2daecda92af89c687cd049/merged/var/lib/ceph/mgr/ceph-compute-0.fqpirj supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:53 np0005634017 podman[81778]: 2026-02-28 09:33:53.895683045 +0000 UTC m=+0.124931148 container init 2d11b9e48e976363b5e2aabef5ed3d6eb9a82ac418560fc3570f1a51e22c5db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:53 np0005634017 podman[81778]: 2026-02-28 09:33:53.806297065 +0000 UTC m=+0.035545158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:33:53 np0005634017 podman[81778]: 2026-02-28 09:33:53.907298234 +0000 UTC m=+0.136546287 container start 2d11b9e48e976363b5e2aabef5ed3d6eb9a82ac418560fc3570f1a51e22c5db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:53 np0005634017 bash[81778]: 2d11b9e48e976363b5e2aabef5ed3d6eb9a82ac418560fc3570f1a51e22c5db6
Feb 28 04:33:53 np0005634017 systemd[1]: Started Ceph mgr.compute-0.fqpirj for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:33:53 np0005634017 ceph-mgr[81822]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:33:53 np0005634017 ceph-mgr[81822]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 28 04:33:53 np0005634017 ceph-mgr[81822]: pidfile_write: ignore empty --pid-file
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:53 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'alerts'
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 28 04:33:53 np0005634017 ansible-async_wrapper.py[80585]: Done in kid B.
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev b85be8db-d600-488e-a03a-366e12b22cfe (Updating mgr deployment (+1 -> 2))
Feb 28 04:33:54 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event b85be8db-d600-488e-a03a-366e12b22cfe (Updating mgr deployment (+1 -> 2)) in 2 seconds
Feb 28 04:33:54 np0005634017 python3[81823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:54 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'balancer'
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.138972012 +0000 UTC m=+0.067149282 container create aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a (image=quay.io/ceph/ceph:v20, name=unruffled_archimedes, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:33:54 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'cephadm'
Feb 28 04:33:54 np0005634017 systemd[1]: Started libpod-conmon-aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a.scope.
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.118909714 +0000 UTC m=+0.047086994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaf2fdf6e6df9dd929826b2fc9d3935c6d5b3f32b7cf447c5ab3b5c5ebe7eb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaf2fdf6e6df9dd929826b2fc9d3935c6d5b3f32b7cf447c5ab3b5c5ebe7eb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaf2fdf6e6df9dd929826b2fc9d3935c6d5b3f32b7cf447c5ab3b5c5ebe7eb1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.244364686 +0000 UTC m=+0.172541966 container init aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a (image=quay.io/ceph/ceph:v20, name=unruffled_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.251185139 +0000 UTC m=+0.179362389 container start aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a (image=quay.io/ceph/ceph:v20, name=unruffled_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.254566214 +0000 UTC m=+0.182743504 container attach aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a (image=quay.io/ceph/ceph:v20, name=unruffled_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:54 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 podman[82004]: 2026-02-28 09:33:54.591343698 +0000 UTC m=+0.059879506 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3338273820' entity='client.admin' 
Feb 28 04:33:54 np0005634017 systemd[1]: libpod-aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a.scope: Deactivated successfully.
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.670633013 +0000 UTC m=+0.598810243 container died aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a (image=quay.io/ceph/ceph:v20, name=unruffled_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:33:54 np0005634017 podman[82004]: 2026-02-28 09:33:54.677811056 +0000 UTC m=+0.146346864 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dbaf2fdf6e6df9dd929826b2fc9d3935c6d5b3f32b7cf447c5ab3b5c5ebe7eb1-merged.mount: Deactivated successfully.
Feb 28 04:33:54 np0005634017 podman[81868]: 2026-02-28 09:33:54.721977086 +0000 UTC m=+0.650154326 container remove aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a (image=quay.io/ceph/ceph:v20, name=unruffled_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:54 np0005634017 systemd[1]: libpod-conmon-aeca742cd274eb65b46e717796e48b919f05509267de594edf291bf0af6bf37a.scope: Deactivated successfully.
Feb 28 04:33:54 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'crash'
Feb 28 04:33:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:54 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'dashboard'
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:33:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 python3[82140]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.063436922 +0000 UTC m=+0.039460088 container create 685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff (image=quay.io/ceph/ceph:v20, name=nervous_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 04:33:55 np0005634017 systemd[1]: Started libpod-conmon-685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff.scope.
Feb 28 04:33:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77e83f63e229280aa44b277fbc9e196262e8aa78c229d7c84941b0f46f8d642/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77e83f63e229280aa44b277fbc9e196262e8aa78c229d7c84941b0f46f8d642/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77e83f63e229280aa44b277fbc9e196262e8aa78c229d7c84941b0f46f8d642/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.0474529 +0000 UTC m=+0.023475966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.143490118 +0000 UTC m=+0.119513194 container init 685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff (image=quay.io/ceph/ceph:v20, name=nervous_austin, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.15168506 +0000 UTC m=+0.127708116 container start 685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff (image=quay.io/ceph/ceph:v20, name=nervous_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.160453048 +0000 UTC m=+0.136476174 container attach 685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff (image=quay.io/ceph/ceph:v20, name=nervous_austin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: [progress INFO root] Writing back 2 completed events
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.498027745 +0000 UTC m=+0.042522475 container create 94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6 (image=quay.io/ceph/ceph:v20, name=stupefied_sutherland, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:55 np0005634017 systemd[1]: Started libpod-conmon-94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6.scope.
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb 28 04:33:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/332107858' entity='client.admin' 
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.562197441 +0000 UTC m=+0.106692181 container init 94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6 (image=quay.io/ceph/ceph:v20, name=stupefied_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:55 np0005634017 systemd[1]: libpod-685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff.scope: Deactivated successfully.
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.569411245 +0000 UTC m=+0.113905965 container start 94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6 (image=quay.io/ceph/ceph:v20, name=stupefied_sutherland, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:55 np0005634017 stupefied_sutherland[82300]: 167 167
Feb 28 04:33:55 np0005634017 systemd[1]: libpod-94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6.scope: Deactivated successfully.
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.479264924 +0000 UTC m=+0.023759674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.57487178 +0000 UTC m=+0.119366520 container attach 94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6 (image=quay.io/ceph/ceph:v20, name=stupefied_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:55 np0005634017 conmon[82300]: conmon 94f566a07a3509017cd5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6.scope/container/memory.events
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.575535829 +0000 UTC m=+0.120030549 container died 94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6 (image=quay.io/ceph/ceph:v20, name=stupefied_sutherland, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.591432919 +0000 UTC m=+0.567455965 container died 685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff (image=quay.io/ceph/ceph:v20, name=nervous_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a2b5af3f22bde40cb4530ce16959adab57d2b36eba1727dd34f988d103b921c9-merged.mount: Deactivated successfully.
Feb 28 04:33:55 np0005634017 podman[82284]: 2026-02-28 09:33:55.621431348 +0000 UTC m=+0.165926068 container remove 94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6 (image=quay.io/ceph/ceph:v20, name=stupefied_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:33:55 np0005634017 systemd[1]: libpod-conmon-94f566a07a3509017cd5bbf127ec1dae2b88997ee11a89c7088e662f45bd63c6.scope: Deactivated successfully.
Feb 28 04:33:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b77e83f63e229280aa44b277fbc9e196262e8aa78c229d7c84941b0f46f8d642-merged.mount: Deactivated successfully.
Feb 28 04:33:55 np0005634017 podman[82168]: 2026-02-28 09:33:55.648006469 +0000 UTC m=+0.624029505 container remove 685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff (image=quay.io/ceph/ceph:v20, name=nervous_austin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 04:33:55 np0005634017 systemd[1]: libpod-conmon-685a3e2e6bb4168036f5370c8e6c8dd868737bb710c4122f32e401a878943aff.scope: Deactivated successfully.
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3338273820' entity='client.admin' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/332107858' entity='client.admin' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.izimmo (unknown last config time)...
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.izimmo (unknown last config time)...
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.izimmo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.izimmo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mgr services"} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.izimmo on compute-0
Feb 28 04:33:55 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.izimmo on compute-0
Feb 28 04:33:55 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'devicehealth'
Feb 28 04:33:55 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'diskprediction_local'
Feb 28 04:33:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:33:55 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj[81806]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 28 04:33:55 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj[81806]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 28 04:33:55 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj[81806]:  from numpy import show_config as show_numpy_config
Feb 28 04:33:55 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'influx'
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'insights'
Feb 28 04:33:56 np0005634017 python3[82404]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.034285934 +0000 UTC m=+0.035828715 container create e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8 (image=quay.io/ceph/ceph:v20, name=youthful_zhukovsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:56 np0005634017 systemd[1]: Started libpod-conmon-e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8.scope.
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'iostat'
Feb 28 04:33:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.107092385 +0000 UTC m=+0.108635166 container init e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8 (image=quay.io/ceph/ceph:v20, name=youthful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 04:33:56 np0005634017 podman[82434]: 2026-02-28 09:33:56.111873231 +0000 UTC m=+0.074881311 container create cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c (image=quay.io/ceph/ceph:v20, name=trusting_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.113050484 +0000 UTC m=+0.114593285 container start e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8 (image=quay.io/ceph/ceph:v20, name=youthful_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.017148849 +0000 UTC m=+0.018691640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.11680405 +0000 UTC m=+0.118346851 container attach e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8 (image=quay.io/ceph/ceph:v20, name=youthful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 04:33:56 np0005634017 youthful_zhukovsky[82448]: 167 167
Feb 28 04:33:56 np0005634017 systemd[1]: libpod-e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8.scope: Deactivated successfully.
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.119137106 +0000 UTC m=+0.120679887 container died e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8 (image=quay.io/ceph/ceph:v20, name=youthful_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:33:56 np0005634017 systemd[1]: Started libpod-conmon-cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c.scope.
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'k8sevents'
Feb 28 04:33:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f4f759c7930ff13488c2d73e82afec410f541d668c3ee35006edf1aff727ef34-merged.mount: Deactivated successfully.
Feb 28 04:33:56 np0005634017 podman[82434]: 2026-02-28 09:33:56.084438284 +0000 UTC m=+0.047446424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:56 np0005634017 podman[82422]: 2026-02-28 09:33:56.178523867 +0000 UTC m=+0.180066638 container remove e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8 (image=quay.io/ceph/ceph:v20, name=youthful_zhukovsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 04:33:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43078ad17290b4206a0e250ab880780e254a2518368abd8aa77b7d8fd81e52f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43078ad17290b4206a0e250ab880780e254a2518368abd8aa77b7d8fd81e52f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43078ad17290b4206a0e250ab880780e254a2518368abd8aa77b7d8fd81e52f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:56 np0005634017 systemd[1]: libpod-conmon-e94dd52cac0675b73f6b6218197d22f9cdaf4e83e11df2cd0aec4c4a57202ad8.scope: Deactivated successfully.
Feb 28 04:33:56 np0005634017 podman[82434]: 2026-02-28 09:33:56.235636414 +0000 UTC m=+0.198644504 container init cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c (image=quay.io/ceph/ceph:v20, name=trusting_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:33:56 np0005634017 podman[82434]: 2026-02-28 09:33:56.240962055 +0000 UTC m=+0.203970125 container start cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c (image=quay.io/ceph/ceph:v20, name=trusting_pascal, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 04:33:56 np0005634017 podman[82434]: 2026-02-28 09:33:56.244436273 +0000 UTC m=+0.207444343 container attach cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c (image=quay.io/ceph/ceph:v20, name=trusting_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:56 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'localpool'
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'mds_autoscaler'
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4069540966' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: Reconfiguring daemon mon.compute-0 on compute-0
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: Reconfiguring mgr.compute-0.izimmo (unknown last config time)...
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.izimmo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: Reconfiguring daemon mgr.compute-0.izimmo on compute-0
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:56 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4069540966' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'mirroring'
Feb 28 04:33:56 np0005634017 podman[82589]: 2026-02-28 09:33:56.863159188 +0000 UTC m=+0.087952791 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:33:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:33:56 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'nfs'
Feb 28 04:33:56 np0005634017 podman[82589]: 2026-02-28 09:33:56.973457921 +0000 UTC m=+0.198251484 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'orchestrator'
Feb 28 04:33:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb 28 04:33:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:33:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4069540966' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 28 04:33:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb 28 04:33:57 np0005634017 trusting_pascal[82471]: set require_min_compat_client to mimic
Feb 28 04:33:57 np0005634017 systemd[1]: libpod-cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c.scope: Deactivated successfully.
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'osd_perf_query'
Feb 28 04:33:57 np0005634017 podman[82434]: 2026-02-28 09:33:57.38744595 +0000 UTC m=+1.350454010 container died cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c (image=quay.io/ceph/ceph:v20, name=trusting_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'osd_support'
Feb 28 04:33:57 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'pg_autoscaler'
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'progress'
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'prometheus'
Feb 28 04:33:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f43078ad17290b4206a0e250ab880780e254a2518368abd8aa77b7d8fd81e52f-merged.mount: Deactivated successfully.
Feb 28 04:33:57 np0005634017 podman[82434]: 2026-02-28 09:33:57.982179006 +0000 UTC m=+1.945187056 container remove cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c (image=quay.io/ceph/ceph:v20, name=trusting_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:33:57 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'rbd_support'
Feb 28 04:33:58 np0005634017 systemd[1]: libpod-conmon-cba20cf254ed46775435ef952e7c29917f9837fdb9a183c872eb0c52acb4722c.scope: Deactivated successfully.
Feb 28 04:33:58 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'rgw'
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:58 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:33:58 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'rook'
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4069540966' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:58 np0005634017 python3[82771]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:33:58 np0005634017 podman[82772]: 2026-02-28 09:33:58.66730601 +0000 UTC m=+0.050994294 container create 80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158 (image=quay.io/ceph/ceph:v20, name=awesome_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:33:58 np0005634017 systemd[1]: Started libpod-conmon-80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158.scope.
Feb 28 04:33:58 np0005634017 podman[82772]: 2026-02-28 09:33:58.644425883 +0000 UTC m=+0.028114127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:33:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:33:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3548b75618ce3dfe2e73a003260ca5f89966fcf256d7111ca72be4173fd69d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3548b75618ce3dfe2e73a003260ca5f89966fcf256d7111ca72be4173fd69d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3548b75618ce3dfe2e73a003260ca5f89966fcf256d7111ca72be4173fd69d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:33:58 np0005634017 podman[82772]: 2026-02-28 09:33:58.765892041 +0000 UTC m=+0.149580285 container init 80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158 (image=quay.io/ceph/ceph:v20, name=awesome_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:33:58 np0005634017 podman[82772]: 2026-02-28 09:33:58.773501207 +0000 UTC m=+0.157189451 container start 80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158 (image=quay.io/ceph/ceph:v20, name=awesome_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:33:58 np0005634017 podman[82772]: 2026-02-28 09:33:58.777538551 +0000 UTC m=+0.161226815 container attach 80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158 (image=quay.io/ceph/ceph:v20, name=awesome_edison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:33:58 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'selftest'
Feb 28 04:33:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:33:58 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'smb'
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'snap_schedule'
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'stats'
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'status'
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'telegraf'
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'telemetry'
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Added host compute-0
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 7e101a11-ccdb-4f9d-bee2-3c6a6c57ec8a (Updating mgr deployment (-1 -> 1))
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.fqpirj from compute-0 -- ports [8765]
Feb 28 04:33:59 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.fqpirj from compute-0 -- ports [8765]
Feb 28 04:33:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:33:59 np0005634017 awesome_edison[82788]: Added host 'compute-0' with addr '192.168.122.100'
Feb 28 04:33:59 np0005634017 awesome_edison[82788]: Scheduled mon update...
Feb 28 04:33:59 np0005634017 awesome_edison[82788]: Scheduled mgr update...
Feb 28 04:33:59 np0005634017 awesome_edison[82788]: Scheduled osd.default_drive_group update...
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'test_orchestrator'
Feb 28 04:33:59 np0005634017 systemd[1]: libpod-80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158.scope: Deactivated successfully.
Feb 28 04:33:59 np0005634017 podman[82772]: 2026-02-28 09:33:59.663185541 +0000 UTC m=+1.046873795 container died 80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158 (image=quay.io/ceph/ceph:v20, name=awesome_edison, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:33:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ca3548b75618ce3dfe2e73a003260ca5f89966fcf256d7111ca72be4173fd69d-merged.mount: Deactivated successfully.
Feb 28 04:33:59 np0005634017 podman[82772]: 2026-02-28 09:33:59.707205527 +0000 UTC m=+1.090893781 container remove 80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158 (image=quay.io/ceph/ceph:v20, name=awesome_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:33:59 np0005634017 systemd[1]: libpod-conmon-80cbb9f37c07c14417b611866f170e96089bdff7a108d07af32d3e0f4a5c9158.scope: Deactivated successfully.
Feb 28 04:33:59 np0005634017 ceph-mgr[81822]: mgr[py] Loading python module 'volumes'
Feb 28 04:34:00 np0005634017 systemd[1]: Stopping Ceph mgr.compute-0.fqpirj for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:34:00 np0005634017 python3[82984]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:00 np0005634017 ceph-mgr[81822]: ms_deliver_dispatch: unhandled message 0x55ee50b18000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.fqpirj started
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from mgr.compute-0.fqpirj 192.168.122.100:0/3751233063; not ready for session (expect reconnect)
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.243346524 +0000 UTC m=+0.049002278 container create 9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef (image=quay.io/ceph/ceph:v20, name=infallible_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:34:00 np0005634017 systemd[1]: Started libpod-conmon-9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef.scope.
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:00 np0005634017 podman[83016]: 2026-02-28 09:34:00.298862066 +0000 UTC m=+0.098444208 container died 2d11b9e48e976363b5e2aabef5ed3d6eb9a82ac418560fc3570f1a51e22c5db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:34:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efdc6db33b9a67396a892228363f09406383ab8f36440475dabe3d5d2daf98f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efdc6db33b9a67396a892228363f09406383ab8f36440475dabe3d5d2daf98f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efdc6db33b9a67396a892228363f09406383ab8f36440475dabe3d5d2daf98f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.224884672 +0000 UTC m=+0.030540446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fe1462813674cf56078df5ee5c207543d67fdbf87f2daecda92af89c687cd049-merged.mount: Deactivated successfully.
Feb 28 04:34:00 np0005634017 podman[83016]: 2026-02-28 09:34:00.354919673 +0000 UTC m=+0.154501815 container remove 2d11b9e48e976363b5e2aabef5ed3d6eb9a82ac418560fc3570f1a51e22c5db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:00 np0005634017 bash[83016]: ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-fqpirj
Feb 28 04:34:00 np0005634017 systemd[1]: ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mgr.compute-0.fqpirj.service: Main process exited, code=exited, status=143/n/a
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.376683689 +0000 UTC m=+0.182339443 container init 9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef (image=quay.io/ceph/ceph:v20, name=infallible_meninsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.383108861 +0000 UTC m=+0.188764615 container start 9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef (image=quay.io/ceph/ceph:v20, name=infallible_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.386655781 +0000 UTC m=+0.192311555 container attach 9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef (image=quay.io/ceph/ceph:v20, name=infallible_meninsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:34:00 np0005634017 systemd[1]: ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mgr.compute-0.fqpirj.service: Failed with result 'exit-code'.
Feb 28 04:34:00 np0005634017 systemd[1]: Stopped Ceph mgr.compute-0.fqpirj for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:34:00 np0005634017 systemd[1]: ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mgr.compute-0.fqpirj.service: Consumed 7.075s CPU time, 457.8M memory peak, read 0B from disk, written 154.0K to disk.
Feb 28 04:34:00 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: Added host compute-0
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: Saving service mon spec with placement compute-0
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: Saving service mgr spec with placement compute-0
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: Marking host: compute-0 for OSDSpec preview refresh.
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: Saving service osd.default_drive_group spec with placement compute-0
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: Removing daemon mgr.compute-0.fqpirj from compute-0 -- ports [8765]
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:00 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.izimmo(active, since 31s), standbys: compute-0.fqpirj
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fqpirj", "id": "compute-0.fqpirj"} v 0)
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mgr metadata", "who": "compute-0.fqpirj", "id": "compute-0.fqpirj"} : dispatch
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.fqpirj
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.fqpirj
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.fqpirj"} v 0)
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.fqpirj"} : dispatch
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.fqpirj"}]': finished
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 7e101a11-ccdb-4f9d-bee2-3c6a6c57ec8a (Updating mgr deployment (-1 -> 1))
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 7e101a11-ccdb-4f9d-bee2-3c6a6c57ec8a (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 28 04:34:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640916898' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 28 04:34:00 np0005634017 infallible_meninsky[83042]: 
Feb 28 04:34:00 np0005634017 infallible_meninsky[83042]: {"fsid":"8f528268-ea2d-5d7b-af45-49b405fed6de","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":50,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-28T09:33:08:670336+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-28T09:33:08.673339+0000","services":{}},"progress_events":{}}
Feb 28 04:34:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:00 np0005634017 systemd[1]: libpod-9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef.scope: Deactivated successfully.
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.913760993 +0000 UTC m=+0.719416767 container died 9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef (image=quay.io/ceph/ceph:v20, name=infallible_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 04:34:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3efdc6db33b9a67396a892228363f09406383ab8f36440475dabe3d5d2daf98f-merged.mount: Deactivated successfully.
Feb 28 04:34:00 np0005634017 podman[83014]: 2026-02-28 09:34:00.956884703 +0000 UTC m=+0.762540457 container remove 9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef (image=quay.io/ceph/ceph:v20, name=infallible_meninsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:00 np0005634017 systemd[1]: libpod-conmon-9ddae3004dea50ec24a7345258c15a93dcd7eea81fe408db42ad0ce7d2605aef.scope: Deactivated successfully.
Feb 28 04:34:01 np0005634017 podman[83286]: 2026-02-28 09:34:01.472782148 +0000 UTC m=+0.060882035 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:01 np0005634017 podman[83286]: 2026-02-28 09:34:01.598880637 +0000 UTC m=+0.186980514 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.fqpirj"} : dispatch
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.fqpirj"}]': finished
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:02 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:02 np0005634017 podman[83445]: 2026-02-28 09:34:02.336371734 +0000 UTC m=+0.038149741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:02 np0005634017 podman[83445]: 2026-02-28 09:34:02.563499304 +0000 UTC m=+0.265277291 container create ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 04:34:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: Removing key for mgr.compute-0.fqpirj
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:02 np0005634017 systemd[1]: Started libpod-conmon-ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5.scope.
Feb 28 04:34:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:02 np0005634017 podman[83445]: 2026-02-28 09:34:02.990242814 +0000 UTC m=+0.692020871 container init ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:34:02 np0005634017 podman[83445]: 2026-02-28 09:34:02.999197117 +0000 UTC m=+0.700975124 container start ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cartwright, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:03 np0005634017 intelligent_cartwright[83461]: 167 167
Feb 28 04:34:03 np0005634017 systemd[1]: libpod-ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5.scope: Deactivated successfully.
Feb 28 04:34:03 np0005634017 podman[83445]: 2026-02-28 09:34:03.009117538 +0000 UTC m=+0.710895545 container attach ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cartwright, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:34:03 np0005634017 podman[83445]: 2026-02-28 09:34:03.00954842 +0000 UTC m=+0.711326397 container died ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:34:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1789d0a17ebdfb38419e6d002774ef34ce8d61e85ac56e3e589f5811239a4655-merged.mount: Deactivated successfully.
Feb 28 04:34:03 np0005634017 podman[83445]: 2026-02-28 09:34:03.128339753 +0000 UTC m=+0.830117740 container remove ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cartwright, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 04:34:03 np0005634017 systemd[1]: libpod-conmon-ffee106018881812700dc1787e5ab59df43d2e34353d6484f2b9a331950778e5.scope: Deactivated successfully.
Feb 28 04:34:03 np0005634017 podman[83485]: 2026-02-28 09:34:03.274594793 +0000 UTC m=+0.058557939 container create 326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 04:34:03 np0005634017 systemd[1]: Started libpod-conmon-326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619.scope.
Feb 28 04:34:03 np0005634017 podman[83485]: 2026-02-28 09:34:03.233768517 +0000 UTC m=+0.017731683 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165e39a24e7978fe8cef5d4ac041b7a7cb3aa6eb5f3e5fe88549b6caea940b02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165e39a24e7978fe8cef5d4ac041b7a7cb3aa6eb5f3e5fe88549b6caea940b02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165e39a24e7978fe8cef5d4ac041b7a7cb3aa6eb5f3e5fe88549b6caea940b02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165e39a24e7978fe8cef5d4ac041b7a7cb3aa6eb5f3e5fe88549b6caea940b02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165e39a24e7978fe8cef5d4ac041b7a7cb3aa6eb5f3e5fe88549b6caea940b02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:03 np0005634017 podman[83485]: 2026-02-28 09:34:03.377401454 +0000 UTC m=+0.161364660 container init 326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:03 np0005634017 podman[83485]: 2026-02-28 09:34:03.389162026 +0000 UTC m=+0.173125172 container start 326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:03 np0005634017 podman[83485]: 2026-02-28 09:34:03.393373796 +0000 UTC m=+0.177336962 container attach 326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9e119a96-18af-4ac6-8479-8955afc510ba
Feb 28 04:34:04 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9e119a96-18af-4ac6-8479-8955afc510ba"} v 0)
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/920716668' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9e119a96-18af-4ac6-8479-8955afc510ba"} : dispatch
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/920716668' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9e119a96-18af-4ac6-8479-8955afc510ba"}]': finished
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:04 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:04 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb 28 04:34:04 np0005634017 lvm[83595]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:04 np0005634017 lvm[83595]: VG ceph_vg0 finished
Feb 28 04:34:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/920716668' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9e119a96-18af-4ac6-8479-8955afc510ba"} : dispatch
Feb 28 04:34:04 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/920716668' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9e119a96-18af-4ac6-8479-8955afc510ba"}]': finished
Feb 28 04:34:05 np0005634017 ceph-mgr[76610]: [progress INFO root] Writing back 3 completed events
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73717982' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 28 04:34:05 np0005634017 wonderful_shtern[83501]: stderr: got monmap epoch 1
Feb 28 04:34:05 np0005634017 wonderful_shtern[83501]: --> Creating keyring file for osd.0
Feb 28 04:34:05 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb 28 04:34:05 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb 28 04:34:05 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 9e119a96-18af-4ac6-8479-8955afc510ba --setuser ceph --setgroup ceph
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 28 04:34:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: stderr: 2026-02-28T09:34:05.470+0000 7fe3c9f078c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: stderr: 2026-02-28T09:34:05.502+0000 7fe3c9f078c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: Cluster is now healthy
Feb 28 04:34:06 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:06 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8e01b2f2-2262-4a17-91a8-b31f77e16d5f
Feb 28 04:34:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f"} v 0)
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1498071349' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f"} : dispatch
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1498071349' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f"}]': finished
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:06 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:06 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:07 np0005634017 lvm[84550]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:07 np0005634017 lvm[84550]: VG ceph_vg1 finished
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb 28 04:34:07 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1498071349' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f"} : dispatch
Feb 28 04:34:07 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1498071349' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f"}]': finished
Feb 28 04:34:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 28 04:34:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332355384' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: stderr: got monmap epoch 1
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: --> Creating keyring file for osd.1
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb 28 04:34:07 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 8e01b2f2-2262-4a17-91a8-b31f77e16d5f --setuser ceph --setgroup ceph
Feb 28 04:34:08 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: stderr: 2026-02-28T09:34:07.813+0000 7fdc2afd28c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: stderr: 2026-02-28T09:34:07.843+0000 7fdc2afd28c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb 28 04:34:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:08 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new cf6beeec-5018-4e1c-b8f4-bf775a604cf3
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3"} v 0)
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2298651141' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3"} : dispatch
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2298651141' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3"}]': finished
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:09 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:09 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:09 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:09 np0005634017 lvm[85505]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:09 np0005634017 lvm[85505]: VG ceph_vg2 finished
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:09 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb 28 04:34:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 28 04:34:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681802699' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 28 04:34:10 np0005634017 wonderful_shtern[83501]: stderr: got monmap epoch 1
Feb 28 04:34:10 np0005634017 wonderful_shtern[83501]: --> Creating keyring file for osd.2
Feb 28 04:34:10 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb 28 04:34:10 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb 28 04:34:10 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid cf6beeec-5018-4e1c-b8f4-bf775a604cf3 --setuser ceph --setgroup ceph
Feb 28 04:34:10 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:10 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2298651141' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3"} : dispatch
Feb 28 04:34:10 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2298651141' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3"}]': finished
Feb 28 04:34:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: stderr: 2026-02-28T09:34:10.205+0000 7f61aefb98c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: stderr: 2026-02-28T09:34:10.232+0000 7f61aefb98c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 28 04:34:11 np0005634017 wonderful_shtern[83501]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb 28 04:34:11 np0005634017 systemd[1]: libpod-326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619.scope: Deactivated successfully.
Feb 28 04:34:11 np0005634017 systemd[1]: libpod-326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619.scope: Consumed 6.015s CPU time.
Feb 28 04:34:11 np0005634017 podman[83485]: 2026-02-28 09:34:11.307023145 +0000 UTC m=+8.090986331 container died 326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shtern, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-165e39a24e7978fe8cef5d4ac041b7a7cb3aa6eb5f3e5fe88549b6caea940b02-merged.mount: Deactivated successfully.
Feb 28 04:34:11 np0005634017 podman[83485]: 2026-02-28 09:34:11.379806016 +0000 UTC m=+8.163769162 container remove 326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:34:11 np0005634017 systemd[1]: libpod-conmon-326961b4f103f33a83df612fa8d65f2c1df5768cb965b3e108f3cbf995184619.scope: Deactivated successfully.
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.818246447 +0000 UTC m=+0.052444075 container create b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 04:34:11 np0005634017 systemd[1]: Started libpod-conmon-b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51.scope.
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.791745737 +0000 UTC m=+0.025943325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.916357105 +0000 UTC m=+0.150554633 container init b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.925520194 +0000 UTC m=+0.159717702 container start b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.930018512 +0000 UTC m=+0.164216020 container attach b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:11 np0005634017 friendly_archimedes[86520]: 167 167
Feb 28 04:34:11 np0005634017 systemd[1]: libpod-b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51.scope: Deactivated successfully.
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.933765508 +0000 UTC m=+0.167963046 container died b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 04:34:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e44cf4a0b47e972ec22549a99d491fddae5e250c6de7dfba40e76b14d4209928-merged.mount: Deactivated successfully.
Feb 28 04:34:11 np0005634017 podman[86503]: 2026-02-28 09:34:11.978604087 +0000 UTC m=+0.212801595 container remove b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 04:34:11 np0005634017 systemd[1]: libpod-conmon-b4b9a1b65eaad38ccd93bf21f4a1d7d5659a4949c9176f3b356d3690b7f4ab51.scope: Deactivated successfully.
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.115572104 +0000 UTC m=+0.041979139 container create 70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:12 np0005634017 systemd[1]: Started libpod-conmon-70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072.scope.
Feb 28 04:34:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdabe8015b3d215ad71e1957fb90dca682fa3ea81dc5ef02bacc2dc552010720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdabe8015b3d215ad71e1957fb90dca682fa3ea81dc5ef02bacc2dc552010720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdabe8015b3d215ad71e1957fb90dca682fa3ea81dc5ef02bacc2dc552010720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdabe8015b3d215ad71e1957fb90dca682fa3ea81dc5ef02bacc2dc552010720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.093958923 +0000 UTC m=+0.020365978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.200578961 +0000 UTC m=+0.126985996 container init 70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.2090428 +0000 UTC m=+0.135449835 container start 70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_heyrovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.214105764 +0000 UTC m=+0.140512839 container attach 70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:34:12 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]: {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:    "0": [
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:        {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "devices": [
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "/dev/loop3"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            ],
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_name": "ceph_lv0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_size": "21470642176",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "name": "ceph_lv0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "tags": {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.crush_device_class": "",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.encrypted": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osd_id": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.type": "block",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.vdo": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.with_tpm": "0"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            },
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "type": "block",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "vg_name": "ceph_vg0"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:        }
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:    ],
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:    "1": [
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:        {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "devices": [
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "/dev/loop4"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            ],
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_name": "ceph_lv1",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_size": "21470642176",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "name": "ceph_lv1",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "tags": {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.crush_device_class": "",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.encrypted": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osd_id": "1",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.type": "block",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.vdo": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.with_tpm": "0"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            },
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "type": "block",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "vg_name": "ceph_vg1"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:        }
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:    ],
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:    "2": [
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:        {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "devices": [
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "/dev/loop5"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            ],
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_name": "ceph_lv2",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_size": "21470642176",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "name": "ceph_lv2",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "tags": {
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.crush_device_class": "",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.encrypted": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osd_id": "2",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.type": "block",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.vdo": "0",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:                "ceph.with_tpm": "0"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            },
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "type": "block",
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:            "vg_name": "ceph_vg2"
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:        }
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]:    ]
Feb 28 04:34:12 np0005634017 romantic_heyrovsky[86560]: }
Feb 28 04:34:12 np0005634017 systemd[1]: libpod-70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072.scope: Deactivated successfully.
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.507557961 +0000 UTC m=+0.433965026 container died 70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_heyrovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:12 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fdabe8015b3d215ad71e1957fb90dca682fa3ea81dc5ef02bacc2dc552010720-merged.mount: Deactivated successfully.
Feb 28 04:34:12 np0005634017 podman[86544]: 2026-02-28 09:34:12.550485856 +0000 UTC m=+0.476892881 container remove 70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 04:34:12 np0005634017 systemd[1]: libpod-conmon-70170e6c1f8e6978fd0ebaa9a32334413ccef442fac229c39e62ebe80fa7d072.scope: Deactivated successfully.
Feb 28 04:34:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 28 04:34:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 28 04:34:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:12 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb 28 04:34:12 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb 28 04:34:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.040103346 +0000 UTC m=+0.036222096 container create d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:13 np0005634017 systemd[1]: Started libpod-conmon-d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783.scope.
Feb 28 04:34:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.024231837 +0000 UTC m=+0.020350617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.125643098 +0000 UTC m=+0.121761898 container init d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shannon, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.135857927 +0000 UTC m=+0.131976697 container start d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.139587103 +0000 UTC m=+0.135705863 container attach d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:13 np0005634017 crazy_shannon[86687]: 167 167
Feb 28 04:34:13 np0005634017 systemd[1]: libpod-d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783.scope: Deactivated successfully.
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.142536466 +0000 UTC m=+0.138655226 container died d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shannon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3e103c71b3de64301d6eacaa123d2cdff05136234c4e385b357f50d8d32e5b82-merged.mount: Deactivated successfully.
Feb 28 04:34:13 np0005634017 podman[86671]: 2026-02-28 09:34:13.180579793 +0000 UTC m=+0.176698553 container remove d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shannon, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:13 np0005634017 systemd[1]: libpod-conmon-d72fba447bda75156baa8e4448fe60175ba0b460091f2606a5f6d97d52f97783.scope: Deactivated successfully.
Feb 28 04:34:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 28 04:34:13 np0005634017 ceph-mon[76304]: Deploying daemon osd.0 on compute-0
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.383653362 +0000 UTC m=+0.037658717 container create 0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 04:34:13 np0005634017 systemd[1]: Started libpod-conmon-0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c.scope.
Feb 28 04:34:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214eef70b928d3f69b337d042e8b80b2ca08cfe5b5a178c4c1b5b9b5f5c7e2b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214eef70b928d3f69b337d042e8b80b2ca08cfe5b5a178c4c1b5b9b5f5c7e2b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214eef70b928d3f69b337d042e8b80b2ca08cfe5b5a178c4c1b5b9b5f5c7e2b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214eef70b928d3f69b337d042e8b80b2ca08cfe5b5a178c4c1b5b9b5f5c7e2b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214eef70b928d3f69b337d042e8b80b2ca08cfe5b5a178c4c1b5b9b5f5c7e2b1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.370213541 +0000 UTC m=+0.024218916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.482783608 +0000 UTC m=+0.136789003 container init 0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.49311836 +0000 UTC m=+0.147124025 container start 0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.498200944 +0000 UTC m=+0.152206309 container attach 0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:13 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test[86734]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 28 04:34:13 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test[86734]:                            [--no-systemd] [--no-tmpfs]
Feb 28 04:34:13 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test[86734]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 28 04:34:13 np0005634017 systemd[1]: libpod-0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c.scope: Deactivated successfully.
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.688542792 +0000 UTC m=+0.342548207 container died 0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-214eef70b928d3f69b337d042e8b80b2ca08cfe5b5a178c4c1b5b9b5f5c7e2b1-merged.mount: Deactivated successfully.
Feb 28 04:34:13 np0005634017 podman[86718]: 2026-02-28 09:34:13.735514161 +0000 UTC m=+0.389519516 container remove 0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate-test, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:13 np0005634017 systemd[1]: libpod-conmon-0097cf8cafb83e7c143dfd3c5af35e2b31ce8284eb6f0299561b3a9d5974843c.scope: Deactivated successfully.
Feb 28 04:34:13 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:13 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:14 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:14 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:14 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:14 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:14 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:14 np0005634017 systemd[1]: Starting Ceph osd.0 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:34:14 np0005634017 podman[86907]: 2026-02-28 09:34:14.6341208 +0000 UTC m=+0.048390321 container create f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d401ef164d30e698c9064d42cf8ef5ca8d302d2bd142509a48b536394779125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d401ef164d30e698c9064d42cf8ef5ca8d302d2bd142509a48b536394779125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d401ef164d30e698c9064d42cf8ef5ca8d302d2bd142509a48b536394779125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d401ef164d30e698c9064d42cf8ef5ca8d302d2bd142509a48b536394779125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d401ef164d30e698c9064d42cf8ef5ca8d302d2bd142509a48b536394779125/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:14 np0005634017 podman[86907]: 2026-02-28 09:34:14.612760165 +0000 UTC m=+0.027029706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:14 np0005634017 podman[86907]: 2026-02-28 09:34:14.708360291 +0000 UTC m=+0.122629872 container init f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:14 np0005634017 podman[86907]: 2026-02-28 09:34:14.713802776 +0000 UTC m=+0.128072307 container start f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:34:14 np0005634017 podman[86907]: 2026-02-28 09:34:14.716859692 +0000 UTC m=+0.131129213 container attach f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 04:34:14 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:14 np0005634017 bash[86907]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:14 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:14 np0005634017 bash[86907]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:15 np0005634017 lvm[87008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:15 np0005634017 lvm[87008]: VG ceph_vg1 finished
Feb 28 04:34:15 np0005634017 lvm[87005]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:15 np0005634017 lvm[87005]: VG ceph_vg0 finished
Feb 28 04:34:15 np0005634017 lvm[87010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:15 np0005634017 lvm[87010]: VG ceph_vg2 finished
Feb 28 04:34:15 np0005634017 lvm[87011]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:15 np0005634017 lvm[87011]: VG ceph_vg0 finished
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:15 np0005634017 bash[86907]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 28 04:34:15 np0005634017 bash[86907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 28 04:34:15 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate[86922]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 28 04:34:15 np0005634017 bash[86907]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 28 04:34:15 np0005634017 systemd[1]: libpod-f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e.scope: Deactivated successfully.
Feb 28 04:34:15 np0005634017 systemd[1]: libpod-f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e.scope: Consumed 1.368s CPU time.
Feb 28 04:34:15 np0005634017 podman[86907]: 2026-02-28 09:34:15.805701666 +0000 UTC m=+1.219971187 container died f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:34:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9d401ef164d30e698c9064d42cf8ef5ca8d302d2bd142509a48b536394779125-merged.mount: Deactivated successfully.
Feb 28 04:34:15 np0005634017 podman[86907]: 2026-02-28 09:34:15.844250757 +0000 UTC m=+1.258520288 container remove f5584bdfecf2eb12a52238b3623afcc9a5c52408dde3b5b94ac1a789de570c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:16 np0005634017 podman[87182]: 2026-02-28 09:34:16.002366543 +0000 UTC m=+0.040741164 container create fc71d051903a1cee3edd0627f744b86d813e22eab438b71d0e5a703e3df5a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb23fc374768f6fabbcc17afacd530505d191cb444130ed8dad2455518a9b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb23fc374768f6fabbcc17afacd530505d191cb444130ed8dad2455518a9b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb23fc374768f6fabbcc17afacd530505d191cb444130ed8dad2455518a9b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb23fc374768f6fabbcc17afacd530505d191cb444130ed8dad2455518a9b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb23fc374768f6fabbcc17afacd530505d191cb444130ed8dad2455518a9b1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:16 np0005634017 podman[87182]: 2026-02-28 09:34:16.049351913 +0000 UTC m=+0.087726534 container init fc71d051903a1cee3edd0627f744b86d813e22eab438b71d0e5a703e3df5a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:34:16 np0005634017 podman[87182]: 2026-02-28 09:34:16.056259669 +0000 UTC m=+0.094634290 container start fc71d051903a1cee3edd0627f744b86d813e22eab438b71d0e5a703e3df5a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:34:16 np0005634017 bash[87182]: fc71d051903a1cee3edd0627f744b86d813e22eab438b71d0e5a703e3df5a919
Feb 28 04:34:16 np0005634017 podman[87182]: 2026-02-28 09:34:15.984199029 +0000 UTC m=+0.022573700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:16 np0005634017 systemd[1]: Started Ceph osd.0 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: pidfile_write: ignore empty --pid-file
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:16 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb 28 04:34:16 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6400 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a6000 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb 28 04:34:16 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: load: jerasure load: lrc 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c32a7c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount shared_bdev_used = 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Git sha 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DB SUMMARY
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DB Session ID:  UWMS7XPT536QOSD0MEUX
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                     Options.env: 0x55d7c3137ea0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                Options.info_log: 0x55d7c41888a0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                 Options.wal_dir: db.wal
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.write_buffer_manager: 0x55d7c319cb40
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.row_cache: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                              Options.wal_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.wal_compression: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_background_jobs: 4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Compression algorithms supported:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kZSTD supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kXpressCompression supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kBZip2Compression supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kLZ4Compression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kZlibCompression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kLZ4HCCompression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: 	kSnappyCompression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b33e3885-9cd7-4a38-8237-30e8b29748b9
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256491606, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256493082, "job": 1, "event": "recovery_finished"}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: freelist init
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: freelist _read_cfg
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs umount
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) close
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bdev(0x55d7c3f3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluefs mount shared_bdev_used = 27262976
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Git sha 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DB SUMMARY
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DB Session ID:  UWMS7XPT536QOSD0MEUW
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                     Options.env: 0x55d7c4358a80
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                Options.info_log: 0x55d7c4188960
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                                 Options.wal_dir: db.wal
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.write_buffer_manager: 0x55d7c319d900
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.row_cache: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                              Options.wal_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.wal_compression: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_background_jobs: 4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Compression algorithms supported:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kZSTD supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kXpressCompression supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kBZip2Compression supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kLZ4Compression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kZlibCompression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: #011kSnappyCompression supported: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7c313b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7c313b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7c313b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c4188bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7c313b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c41890c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7c313ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c41890c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7c313ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7c41890c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7c313ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b33e3885-9cd7-4a38-8237-30e8b29748b9
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256545444, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256548977, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271256, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b33e3885-9cd7-4a38-8237-30e8b29748b9", "db_session_id": "UWMS7XPT536QOSD0MEUW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256554044, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271256, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b33e3885-9cd7-4a38-8237-30e8b29748b9", "db_session_id": "UWMS7XPT536QOSD0MEUW", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256556675, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271256, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b33e3885-9cd7-4a38-8237-30e8b29748b9", "db_session_id": "UWMS7XPT536QOSD0MEUW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271256558594, "job": 1, "event": "recovery_finished"}
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d7c43a2000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: DB pointer 0x55d7c4342000
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 460.80 MB usag
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: _get_class not permitted to load lua
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: _get_class not permitted to load sdk
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 load_pgs
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 load_pgs opened 0 pgs
Feb 28 04:34:16 np0005634017 ceph-osd[87202]: osd.0 0 log_to_monitors true
Feb 28 04:34:16 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0[87198]: 2026-02-28T09:34:16.662+0000 7f1f4da918c0 -1 osd.0 0 log_to_monitors true
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb 28 04:34:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.685619864 +0000 UTC m=+0.043954675 container create bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:16 np0005634017 systemd[1]: Started libpod-conmon-bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4.scope.
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.667343547 +0000 UTC m=+0.025678378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.779660837 +0000 UTC m=+0.137995668 container init bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.7885975 +0000 UTC m=+0.146932301 container start bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.792120049 +0000 UTC m=+0.150454880 container attach bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:16 np0005634017 vibrant_elgamal[87759]: 167 167
Feb 28 04:34:16 np0005634017 systemd[1]: libpod-bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4.scope: Deactivated successfully.
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.797908963 +0000 UTC m=+0.156243834 container died bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8fce5fb43a3b465c1bb5aad3c79f91655235700ae413d3b6692ba20bfd41360f-merged.mount: Deactivated successfully.
Feb 28 04:34:16 np0005634017 podman[87710]: 2026-02-28 09:34:16.8419678 +0000 UTC m=+0.200302611 container remove bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:16 np0005634017 systemd[1]: libpod-conmon-bfa7e56b712dfc2396fd47a1aa3f8e3fdf05c3acbe11139a9e5c917a27cdd8a4.scope: Deactivated successfully.
Feb 28 04:34:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.068563415 +0000 UTC m=+0.050762528 container create 17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 04:34:17 np0005634017 systemd[1]: Started libpod-conmon-17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89.scope.
Feb 28 04:34:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3459c2efd10740679b7d1aea59bb4e559e581df73c54c4d97b0712f76f5e25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3459c2efd10740679b7d1aea59bb4e559e581df73c54c4d97b0712f76f5e25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3459c2efd10740679b7d1aea59bb4e559e581df73c54c4d97b0712f76f5e25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3459c2efd10740679b7d1aea59bb4e559e581df73c54c4d97b0712f76f5e25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3459c2efd10740679b7d1aea59bb4e559e581df73c54c4d97b0712f76f5e25/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.043979139 +0000 UTC m=+0.026178252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.159019775 +0000 UTC m=+0.141218968 container init 17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.172600169 +0000 UTC m=+0.154799282 container start 17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.176727466 +0000 UTC m=+0.158926549 container attach 17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 04:34:17 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test[87804]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 28 04:34:17 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test[87804]:                            [--no-systemd] [--no-tmpfs]
Feb 28 04:34:17 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test[87804]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: Deploying daemon osd.1 on compute-0
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 28 04:34:17 np0005634017 systemd[1]: libpod-17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89.scope: Deactivated successfully.
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.360675243 +0000 UTC m=+0.342874346 container died 17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:17 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:17 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:17 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5d3459c2efd10740679b7d1aea59bb4e559e581df73c54c4d97b0712f76f5e25-merged.mount: Deactivated successfully.
Feb 28 04:34:17 np0005634017 podman[87788]: 2026-02-28 09:34:17.409354791 +0000 UTC m=+0.391553874 container remove 17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:34:17 np0005634017 systemd[1]: libpod-conmon-17f501029641f69f834bbd94d0224abcd26d5832df83b0a0221deb9dcf176f89.scope: Deactivated successfully.
Feb 28 04:34:17 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 28 04:34:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 28 04:34:17 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:17 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:17 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:17 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:17 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:18 np0005634017 systemd[1]: Starting Ceph osd.1 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0 done with init, starting boot process
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0 start_boot
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 28 04:34:18 np0005634017 ceph-osd[87202]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 28 04:34:18 np0005634017 ceph-mon[76304]: from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 28 04:34:18 np0005634017 podman[87981]: 2026-02-28 09:34:18.461176397 +0000 UTC m=+0.092914722 container create 3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 04:34:18 np0005634017 podman[87981]: 2026-02-28 09:34:18.394567911 +0000 UTC m=+0.026306216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49a35b261d5eac439c2bcbf2847fcec50d810ec06c80cd41aef054c25bef736/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49a35b261d5eac439c2bcbf2847fcec50d810ec06c80cd41aef054c25bef736/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49a35b261d5eac439c2bcbf2847fcec50d810ec06c80cd41aef054c25bef736/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49a35b261d5eac439c2bcbf2847fcec50d810ec06c80cd41aef054c25bef736/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49a35b261d5eac439c2bcbf2847fcec50d810ec06c80cd41aef054c25bef736/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:18 np0005634017 podman[87981]: 2026-02-28 09:34:18.545953276 +0000 UTC m=+0.177691591 container init 3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:18 np0005634017 podman[87981]: 2026-02-28 09:34:18.55736866 +0000 UTC m=+0.189106945 container start 3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:34:18 np0005634017 podman[87981]: 2026-02-28 09:34:18.580892386 +0000 UTC m=+0.212630681 container attach 3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 28 04:34:18 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:18 np0005634017 bash[87981]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:18 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:18 np0005634017 bash[87981]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:19 np0005634017 lvm[88080]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:19 np0005634017 lvm[88080]: VG ceph_vg0 finished
Feb 28 04:34:19 np0005634017 lvm[88082]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:19 np0005634017 lvm[88082]: VG ceph_vg1 finished
Feb 28 04:34:19 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:19 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:19 np0005634017 lvm[88084]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:19 np0005634017 lvm[88084]: VG ceph_vg2 finished
Feb 28 04:34:19 np0005634017 ceph-mon[76304]: from='osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:19 np0005634017 bash[87981]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 28 04:34:19 np0005634017 bash[87981]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 28 04:34:19 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate[87996]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 28 04:34:19 np0005634017 bash[87981]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 28 04:34:19 np0005634017 systemd[1]: libpod-3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5.scope: Deactivated successfully.
Feb 28 04:34:19 np0005634017 systemd[1]: libpod-3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5.scope: Consumed 1.489s CPU time.
Feb 28 04:34:19 np0005634017 podman[87981]: 2026-02-28 09:34:19.786431732 +0000 UTC m=+1.418170087 container died 3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 04:34:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f49a35b261d5eac439c2bcbf2847fcec50d810ec06c80cd41aef054c25bef736-merged.mount: Deactivated successfully.
Feb 28 04:34:20 np0005634017 podman[87981]: 2026-02-28 09:34:20.088139463 +0000 UTC m=+1.719877788 container remove 3067ce5a69323e4e9b3d3600610ccd65ea2660e7168b9e1e2613235cec84bdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1-activate, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:20 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:20 np0005634017 podman[88248]: 2026-02-28 09:34:20.268408966 +0000 UTC m=+0.031815821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:20 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:20 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:20 np0005634017 podman[88248]: 2026-02-28 09:34:20.388379662 +0000 UTC m=+0.151786517 container create 28b061fbeda1adf326537d837498b7b87f5b726bc714db9a04ca4452eff13e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 04:34:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88550c337a09fb82ef7fa73bc0386b484bd235b37d70df1a6a8f32c2bbd8c50d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88550c337a09fb82ef7fa73bc0386b484bd235b37d70df1a6a8f32c2bbd8c50d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88550c337a09fb82ef7fa73bc0386b484bd235b37d70df1a6a8f32c2bbd8c50d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88550c337a09fb82ef7fa73bc0386b484bd235b37d70df1a6a8f32c2bbd8c50d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88550c337a09fb82ef7fa73bc0386b484bd235b37d70df1a6a8f32c2bbd8c50d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:20 np0005634017 podman[88248]: 2026-02-28 09:34:20.694353543 +0000 UTC m=+0.457760398 container init 28b061fbeda1adf326537d837498b7b87f5b726bc714db9a04ca4452eff13e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 04:34:20 np0005634017 podman[88248]: 2026-02-28 09:34:20.702483633 +0000 UTC m=+0.465890468 container start 28b061fbeda1adf326537d837498b7b87f5b726bc714db9a04ca4452eff13e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: pidfile_write: ignore empty --pid-file
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:20 np0005634017 bash[88248]: 28b061fbeda1adf326537d837498b7b87f5b726bc714db9a04ca4452eff13e50
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 systemd[1]: Started Ceph osd.1 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138400 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe138000 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: load: jerasure load: lrc 
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:20 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffe139c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount shared_bdev_used = 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Git sha 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: DB SUMMARY
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: DB Session ID:  QNFZHBNN07N5JYGE6HTR
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                     Options.env: 0x562ffdfc9ea0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                Options.info_log: 0x562fff0248a0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                 Options.wal_dir: db.wal
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.write_buffer_manager: 0x562ffeecab40
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.row_cache: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                              Options.wal_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.wal_compression: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_background_jobs: 4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Compression algorithms supported:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kZSTD supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kXpressCompression supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kBZip2Compression supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kLZ4Compression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kZlibCompression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kLZ4HCCompression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: 	kSnappyCompression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcda30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcda30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562ffdfcda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7e44be4e-0c6c-403b-9596-6f83010883b4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271261129127, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271261130270, "job": 1, "event": "recovery_finished"}
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: freelist init
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: freelist _read_cfg
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs umount
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) close
Feb 28 04:34:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bdev(0x562ffedd9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluefs mount shared_bdev_used = 27262976
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Git sha 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: DB SUMMARY
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: DB Session ID:  QNFZHBNN07N5JYGE6HTQ
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                     Options.env: 0x562ffdfc9ce0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                Options.info_log: 0x562fff024a20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                                 Options.wal_dir: db.wal
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.write_buffer_manager: 0x562ffeecab40
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.row_cache: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                              Options.wal_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.wal_compression: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_background_jobs: 4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Compression algorithms supported:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kZSTD supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kXpressCompression supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kBZip2Compression supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kLZ4Compression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kZlibCompression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: #011kSnappyCompression supported: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff024bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff0250c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcda30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff0250c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562ffdfcda30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562fff0250c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562ffdfcda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7e44be4e-0c6c-403b-9596-6f83010883b4
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271261243164, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 28 04:34:21 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 28 04:34:21 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:21 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271262130511, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271261, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e44be4e-0c6c-403b-9596-6f83010883b4", "db_session_id": "QNFZHBNN07N5JYGE6HTQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:22 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271262312543, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e44be4e-0c6c-403b-9596-6f83010883b4", "db_session_id": "QNFZHBNN07N5JYGE6HTQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:22 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271262434532, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e44be4e-0c6c-403b-9596-6f83010883b4", "db_session_id": "QNFZHBNN07N5JYGE6HTQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:22 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271262567343, "job": 1, "event": "recovery_finished"}
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:22 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb 28 04:34:22 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562fff22c000
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: DB pointer 0x562fff1de000
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.6 total, 1.6 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.6 total, 1.6 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.9 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.6 total, 1.6 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.6 total, 1.6 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 460.80 MB usag
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: _get_class not permitted to load lua
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: _get_class not permitted to load sdk
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 load_pgs
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 load_pgs opened 0 pgs
Feb 28 04:34:22 np0005634017 ceph-osd[88267]: osd.1 0 log_to_monitors true
Feb 28 04:34:22 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1[88263]: 2026-02-28T09:34:22.839+0000 7ff4414288c0 -1 osd.1 0 log_to_monitors true
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb 28 04:34:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 28 04:34:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:23 np0005634017 podman[88809]: 2026-02-28 09:34:23.138156773 +0000 UTC m=+0.023280430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:23 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb 28 04:34:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:23 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:23 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 28 04:34:23 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 28 04:34:24 np0005634017 podman[88809]: 2026-02-28 09:34:24.16144294 +0000 UTC m=+1.046566587 container create 5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: Deploying daemon osd.2 on compute-0
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 28 04:34:24 np0005634017 systemd[1]: Started libpod-conmon-5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1.scope.
Feb 28 04:34:24 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:24 np0005634017 podman[88809]: 2026-02-28 09:34:24.389167966 +0000 UTC m=+1.274291603 container init 5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:34:24 np0005634017 podman[88809]: 2026-02-28 09:34:24.39988329 +0000 UTC m=+1.285006947 container start 5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Feb 28 04:34:24 np0005634017 nostalgic_visvesvaraya[88825]: 167 167
Feb 28 04:34:24 np0005634017 systemd[1]: libpod-5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1.scope: Deactivated successfully.
Feb 28 04:34:24 np0005634017 conmon[88825]: conmon 5d066bde34970ae29c67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1.scope/container/memory.events
Feb 28 04:34:24 np0005634017 podman[88809]: 2026-02-28 09:34:24.410542901 +0000 UTC m=+1.295666628 container attach 5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:24 np0005634017 podman[88809]: 2026-02-28 09:34:24.411677853 +0000 UTC m=+1.296801480 container died 5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-00d717e8c5ecaf237cb517e7ac36789b0479aae0758a432bf617691961f3c4fc-merged.mount: Deactivated successfully.
Feb 28 04:34:24 np0005634017 podman[88809]: 2026-02-28 09:34:24.573749481 +0000 UTC m=+1.458873118 container remove 5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:34:24 np0005634017 systemd[1]: libpod-conmon-5d066bde34970ae29c67f077cbf67d43e0e54a0ebbe23554fbcc9f09917f00c1.scope: Deactivated successfully.
Feb 28 04:34:24 np0005634017 podman[88856]: 2026-02-28 09:34:24.844943588 +0000 UTC m=+0.061955024 container create b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:34:24 np0005634017 podman[88856]: 2026-02-28 09:34:24.809831224 +0000 UTC m=+0.026842650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:24 np0005634017 systemd[1]: Started libpod-conmon-b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38.scope.
Feb 28 04:34:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:24 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c795174e6becea3fc41e89be11bcd76dac96d65ee8681ef3ab48a915bf2c5b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c795174e6becea3fc41e89be11bcd76dac96d65ee8681ef3ab48a915bf2c5b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c795174e6becea3fc41e89be11bcd76dac96d65ee8681ef3ab48a915bf2c5b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c795174e6becea3fc41e89be11bcd76dac96d65ee8681ef3ab48a915bf2c5b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c795174e6becea3fc41e89be11bcd76dac96d65ee8681ef3ab48a915bf2c5b9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:24 np0005634017 podman[88856]: 2026-02-28 09:34:24.969952547 +0000 UTC m=+0.186963983 container init b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:34:24 np0005634017 podman[88856]: 2026-02-28 09:34:24.975453273 +0000 UTC m=+0.192464709 container start b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 28 04:34:25 np0005634017 podman[88856]: 2026-02-28 09:34:25.005399531 +0000 UTC m=+0.222410957 container attach b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:25 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test[88873]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 28 04:34:25 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test[88873]:                            [--no-systemd] [--no-tmpfs]
Feb 28 04:34:25 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test[88873]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 28 04:34:25 np0005634017 systemd[1]: libpod-b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38.scope: Deactivated successfully.
Feb 28 04:34:25 np0005634017 podman[88856]: 2026-02-28 09:34:25.181754233 +0000 UTC m=+0.398765639 container died b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 28 04:34:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3c795174e6becea3fc41e89be11bcd76dac96d65ee8681ef3ab48a915bf2c5b9-merged.mount: Deactivated successfully.
Feb 28 04:34:25 np0005634017 podman[88856]: 2026-02-28 09:34:25.306155205 +0000 UTC m=+0.523166651 container remove b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:25 np0005634017 systemd[1]: libpod-conmon-b4321026f4bb113362686a7e713e81dfbf7b6dd2d86e7e1e8676a4428767fa38.scope: Deactivated successfully.
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0 done with init, starting boot process
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0 start_boot
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 28 04:34:25 np0005634017 ceph-osd[88267]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/803105075; not ready for session (expect reconnect)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:25 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:25 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:25 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:25 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:26 np0005634017 systemd[1]: Reloading.
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 13.553 iops: 3469.564 elapsed_sec: 0.865
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: log_channel(cluster) log [WRN] : OSD bench result of 3469.563922 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 0 waiting for initial osdmap
Feb 28 04:34:26 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0[87198]: 2026-02-28T09:34:26.170+0000 7f1f49a13640 -1 osd.0 0 waiting for initial osdmap
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Feb 28 04:34:26 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:34:26 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:34:26 np0005634017 ceph-mon[76304]: from='osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 set_numa_affinity not setting numa affinity
Feb 28 04:34:26 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-0[87198]: 2026-02-28T09:34:26.262+0000 7f1f44818640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 28 04:34:26 np0005634017 ceph-osd[87202]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb 28 04:34:26 np0005634017 ceph-mgr[76610]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 28 04:34:26 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/831197851; not ready for session (expect reconnect)
Feb 28 04:34:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:26 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 28 04:34:26 np0005634017 systemd[1]: Starting Ceph osd.2 for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:34:26 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/803105075; not ready for session (expect reconnect)
Feb 28 04:34:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:26 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:26 np0005634017 podman[89046]: 2026-02-28 09:34:26.563450546 +0000 UTC m=+0.044950333 container create 6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ea16a8db0bba96d579f5a3defb27e13d0364cc2d879e0f0a8ae53b2571885/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ea16a8db0bba96d579f5a3defb27e13d0364cc2d879e0f0a8ae53b2571885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ea16a8db0bba96d579f5a3defb27e13d0364cc2d879e0f0a8ae53b2571885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ea16a8db0bba96d579f5a3defb27e13d0364cc2d879e0f0a8ae53b2571885/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ea16a8db0bba96d579f5a3defb27e13d0364cc2d879e0f0a8ae53b2571885/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:26 np0005634017 podman[89046]: 2026-02-28 09:34:26.537352618 +0000 UTC m=+0.018852425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:26 np0005634017 podman[89046]: 2026-02-28 09:34:26.666793822 +0000 UTC m=+0.148293579 container init 6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:26 np0005634017 podman[89046]: 2026-02-28 09:34:26.673010408 +0000 UTC m=+0.154510165 container start 6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 04:34:26 np0005634017 podman[89046]: 2026-02-28 09:34:26.692290214 +0000 UTC m=+0.173790101 container attach 6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:26 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:26 np0005634017 bash[89046]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:26 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:26 np0005634017 bash[89046]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 28 04:34:27 np0005634017 ceph-osd[87202]: osd.0 10 tick checking mon for new map
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: OSD bench result of 3469.563922 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 28 04:34:27 np0005634017 ceph-osd[87202]: osd.0 11 state: booting -> active
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851] boot
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:27 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:27 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:27 np0005634017 lvm[89147]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:27 np0005634017 lvm[89147]: VG ceph_vg1 finished
Feb 28 04:34:27 np0005634017 lvm[89144]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:27 np0005634017 lvm[89144]: VG ceph_vg0 finished
Feb 28 04:34:27 np0005634017 lvm[89149]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:27 np0005634017 lvm[89149]: VG ceph_vg2 finished
Feb 28 04:34:27 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/803105075; not ready for session (expect reconnect)
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:27 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:27 np0005634017 bash[89046]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 28 04:34:27 np0005634017 bash[89046]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 28 04:34:27 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate[89061]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 28 04:34:27 np0005634017 bash[89046]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 28 04:34:27 np0005634017 systemd[1]: libpod-6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141.scope: Deactivated successfully.
Feb 28 04:34:27 np0005634017 podman[89046]: 2026-02-28 09:34:27.798153019 +0000 UTC m=+1.279652786 container died 6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 04:34:27 np0005634017 systemd[1]: libpod-6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141.scope: Consumed 1.237s CPU time.
Feb 28 04:34:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ef6ea16a8db0bba96d579f5a3defb27e13d0364cc2d879e0f0a8ae53b2571885-merged.mount: Deactivated successfully.
Feb 28 04:34:27 np0005634017 podman[89046]: 2026-02-28 09:34:27.958671412 +0000 UTC m=+1.440171169 container remove 6e3486a5ba82b91c27976228fe089bfee0aa11a0fbea1dafb56ea1238878a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:34:28 np0005634017 podman[89303]: 2026-02-28 09:34:28.167329108 +0000 UTC m=+0.054631837 container create 09d5547f1af428f2fcd846bcb88fc68c1dc0421a3952733628351b5e4fd76eae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 04:34:28 np0005634017 podman[89303]: 2026-02-28 09:34:28.138105681 +0000 UTC m=+0.025408480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529aeb93794a1003c7a6085473b6418c1ed27556331bef96a5c6657f796f574e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529aeb93794a1003c7a6085473b6418c1ed27556331bef96a5c6657f796f574e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529aeb93794a1003c7a6085473b6418c1ed27556331bef96a5c6657f796f574e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529aeb93794a1003c7a6085473b6418c1ed27556331bef96a5c6657f796f574e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529aeb93794a1003c7a6085473b6418c1ed27556331bef96a5c6657f796f574e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:28 np0005634017 podman[89303]: 2026-02-28 09:34:28.284415443 +0000 UTC m=+0.171718172 container init 09d5547f1af428f2fcd846bcb88fc68c1dc0421a3952733628351b5e4fd76eae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: osd.0 [v2:192.168.122.100:6802/831197851,v1:192.168.122.100:6803/831197851] boot
Feb 28 04:34:28 np0005634017 podman[89303]: 2026-02-28 09:34:28.294416996 +0000 UTC m=+0.181719725 container start 09d5547f1af428f2fcd846bcb88fc68c1dc0421a3952733628351b5e4fd76eae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] creating mgr pool
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 28 04:34:28 np0005634017 bash[89303]: 09d5547f1af428f2fcd846bcb88fc68c1dc0421a3952733628351b5e4fd76eae
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: pidfile_write: ignore empty --pid-file
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 systemd[1]: Started Ceph osd.2 for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/803105075; not ready for session (expect reconnect)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0400 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c0000 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: load: jerasure load: lrc 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x5576846c1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount shared_bdev_used = 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Git sha 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DB SUMMARY
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DB Session ID:  VDZFI6SJVQQNR3D3YH7P
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                     Options.env: 0x557684551ea0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                Options.info_log: 0x5576855b48a0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                 Options.wal_dir: db.wal
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.write_buffer_manager: 0x5576845b6b40
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.row_cache: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                              Options.wal_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.wal_compression: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_background_jobs: 4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Compression algorithms supported:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kZSTD supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kXpressCompression supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kBZip2Compression supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kLZ4Compression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kZlibCompression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kLZ4HCCompression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: 	kSnappyCompression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557684555a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557684555a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557684555a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a4e23108-6802-4d54-a599-3112e93d2449
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268687632, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268688537, "job": 1, "event": "recovery_finished"}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: freelist init
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: freelist _read_cfg
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs umount
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) close
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bdev(0x557685357800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluefs mount shared_bdev_used = 27262976
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: RocksDB version: 7.9.2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Git sha 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DB SUMMARY
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DB Session ID:  VDZFI6SJVQQNR3D3YH7O
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: CURRENT file:  CURRENT
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: IDENTITY file:  IDENTITY
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.error_if_exists: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.create_if_missing: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.paranoid_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                     Options.env: 0x557685784a80
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                Options.info_log: 0x5576855b4a20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_file_opening_threads: 16
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                              Options.statistics: (nil)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.use_fsync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.max_log_file_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.allow_fallocate: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.use_direct_reads: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.create_missing_column_families: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                              Options.db_log_dir: 
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                                 Options.wal_dir: db.wal
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.advise_random_on_open: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.write_buffer_manager: 0x5576845b7900
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                            Options.rate_limiter: (nil)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.unordered_write: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.row_cache: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                              Options.wal_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.allow_ingest_behind: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.two_write_queues: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.manual_wal_flush: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.wal_compression: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.atomic_flush: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.log_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.allow_data_in_errors: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.db_host_id: __hostname__
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_background_jobs: 4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_background_compactions: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_subcompactions: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.max_open_files: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.bytes_per_sync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.max_background_flushes: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Compression algorithms supported:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kZSTD supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kXpressCompression supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kBZip2Compression supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kLZ4Compression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kZlibCompression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: #011kSnappyCompression supported: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5576845558d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5576845558d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5576845558d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b4bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5576845558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b50c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557684555a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b50c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557684555a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:           Options.merge_operator: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.compaction_filter_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.sst_partitioner_factory: None
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576855b50c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557684555a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.write_buffer_size: 16777216
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.max_write_buffer_number: 64
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.compression: LZ4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.num_levels: 7
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.level: 32767
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.compression_opts.strategy: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                  Options.compression_opts.enabled: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.arena_block_size: 1048576
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.disable_auto_compactions: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.inplace_update_support: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.bloom_locality: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                    Options.max_successive_merges: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.paranoid_file_checks: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.force_consistency_checks: 1
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.report_bg_io_stats: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                               Options.ttl: 2592000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                       Options.enable_blob_files: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                           Options.min_blob_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                          Options.blob_file_size: 268435456
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb:                Options.blob_file_starting_level: 0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a4e23108-6802-4d54-a599-3112e93d2449
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268732870, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268745442, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a4e23108-6802-4d54-a599-3112e93d2449", "db_session_id": "VDZFI6SJVQQNR3D3YH7O", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268772629, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a4e23108-6802-4d54-a599-3112e93d2449", "db_session_id": "VDZFI6SJVQQNR3D3YH7O", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268778594, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a4e23108-6802-4d54-a599-3112e93d2449", "db_session_id": "VDZFI6SJVQQNR3D3YH7O", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271268806041, "job": 1, "event": "recovery_finished"}
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557685798000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: DB pointer 0x55768576e000
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 
GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 460.80 MB usage: 0
Feb 28 04:34:28 np0005634017 podman[89804]: 2026-02-28 09:34:28.887710531 +0000 UTC m=+0.043935485 container create d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: _get_class not permitted to load lua
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: _get_class not permitted to load sdk
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 load_pgs
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 load_pgs opened 0 pgs
Feb 28 04:34:28 np0005634017 ceph-osd[89322]: osd.2 0 log_to_monitors true
Feb 28 04:34:28 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2[89318]: 2026-02-28T09:34:28.896+0000 7feccca208c0 -1 osd.2 0 log_to_monitors true
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb 28 04:34:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:34:28
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:34:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] No pools available
Feb 28 04:34:28 np0005634017 systemd[1]: Started libpod-conmon-d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517.scope.
Feb 28 04:34:28 np0005634017 podman[89804]: 2026-02-28 09:34:28.863601909 +0000 UTC m=+0.019826913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:29 np0005634017 podman[89804]: 2026-02-28 09:34:29.009620172 +0000 UTC m=+0.165845206 container init d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:29 np0005634017 podman[89804]: 2026-02-28 09:34:29.016849667 +0000 UTC m=+0.173074631 container start d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_colden, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:29 np0005634017 podman[89804]: 2026-02-28 09:34:29.020241783 +0000 UTC m=+0.176466767 container attach d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_colden, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:29 np0005634017 cranky_colden[89853]: 167 167
Feb 28 04:34:29 np0005634017 systemd[1]: libpod-d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517.scope: Deactivated successfully.
Feb 28 04:34:29 np0005634017 podman[89804]: 2026-02-28 09:34:29.024541114 +0000 UTC m=+0.180766078 container died d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-42a5225b050b1274c8ddc8441675561e47c7f1ca6f8b637a0deb8b81c3b89463-merged.mount: Deactivated successfully.
Feb 28 04:34:29 np0005634017 podman[89804]: 2026-02-28 09:34:29.099804755 +0000 UTC m=+0.256029729 container remove d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_colden, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:29 np0005634017 systemd[1]: libpod-conmon-d2f672a41b734e41afa01d9e250f73639865ea92fdb6c325e9b63bb7412ac517.scope: Deactivated successfully.
Feb 28 04:34:29 np0005634017 podman[89877]: 2026-02-28 09:34:29.215904622 +0000 UTC m=+0.036412272 container create c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.954 iops: 7668.199 elapsed_sec: 0.391
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [WRN] : OSD bench result of 7668.198777 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 0 waiting for initial osdmap
Feb 28 04:34:29 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1[88263]: 2026-02-28T09:34:29.216+0000 7ff43d3aa640 -1 osd.1 0 waiting for initial osdmap
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 check_osdmap_features require_osd_release unknown -> tentacle
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 set_numa_affinity not setting numa affinity
Feb 28 04:34:29 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-1[88263]: 2026-02-28T09:34:29.241+0000 7ff4381af640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb 28 04:34:29 np0005634017 systemd[1]: Started libpod-conmon-c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c.scope.
Feb 28 04:34:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe82ae7a69d1fa537ac61f2a21b25c4b1a9e4424c002229f5baa31290c5b32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe82ae7a69d1fa537ac61f2a21b25c4b1a9e4424c002229f5baa31290c5b32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe82ae7a69d1fa537ac61f2a21b25c4b1a9e4424c002229f5baa31290c5b32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fe82ae7a69d1fa537ac61f2a21b25c4b1a9e4424c002229f5baa31290c5b32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 28 04:34:29 np0005634017 podman[89877]: 2026-02-28 09:34:29.197053108 +0000 UTC m=+0.017560778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:29 np0005634017 podman[89877]: 2026-02-28 09:34:29.306735733 +0000 UTC m=+0.127243453 container init c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:29 np0005634017 podman[89877]: 2026-02-28 09:34:29.31262569 +0000 UTC m=+0.133133360 container start c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:34:29 np0005634017 podman[89877]: 2026-02-28 09:34:29.316786807 +0000 UTC m=+0.137294477 container attach c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 crush map has features 3314933000852226048, adjusting msgr requires
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075] boot
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:29 np0005634017 ceph-osd[87202]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 28 04:34:29 np0005634017 ceph-osd[87202]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 28 04:34:29 np0005634017 ceph-osd[87202]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 28 04:34:29 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb 28 04:34:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 13 state: booting -> active
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 28 04:34:29 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:29 np0005634017 lvm[89968]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:29 np0005634017 lvm[89968]: VG ceph_vg0 finished
Feb 28 04:34:29 np0005634017 lvm[89970]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:29 np0005634017 lvm[89970]: VG ceph_vg1 finished
Feb 28 04:34:29 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 28 04:34:29 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 28 04:34:29 np0005634017 lvm[89971]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:29 np0005634017 lvm[89971]: VG ceph_vg2 finished
Feb 28 04:34:29 np0005634017 dazzling_lovelace[89893]: {}
Feb 28 04:34:30 np0005634017 systemd[1]: libpod-c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c.scope: Deactivated successfully.
Feb 28 04:34:30 np0005634017 systemd[1]: libpod-c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c.scope: Consumed 1.055s CPU time.
Feb 28 04:34:30 np0005634017 podman[89877]: 2026-02-28 09:34:30.036368657 +0000 UTC m=+0.856876317 container died c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 04:34:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a0fe82ae7a69d1fa537ac61f2a21b25c4b1a9e4424c002229f5baa31290c5b32-merged.mount: Deactivated successfully.
Feb 28 04:34:30 np0005634017 podman[89877]: 2026-02-28 09:34:30.075797274 +0000 UTC m=+0.896304924 container remove c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:30 np0005634017 systemd[1]: libpod-conmon-c06c1d9974dc73bb397de6bfd743e5206e1270697c7f1f5a43a8df527de2e01c.scope: Deactivated successfully.
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0 done with init, starting boot process
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0 start_boot
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 28 04:34:30 np0005634017 ceph-osd[89322]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/748596393; not ready for session (expect reconnect)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: OSD bench result of 7668.198777 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: osd.1 [v2:192.168.122.100:6806/803105075,v1:192.168.122.100:6807/803105075] boot
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:30 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] creating main.db for devicehealth
Feb 28 04:34:30 np0005634017 podman[90106]: 2026-02-28 09:34:30.720405371 +0000 UTC m=+0.085162092 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 28 04:34:30 np0005634017 podman[90106]: 2026-02-28 09:34:30.842491997 +0000 UTC m=+0.207248728 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:30 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.izimmo(active, since 62s)
Feb 28 04:34:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Feb 28 04:34:31 np0005634017 python3[90207]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:31 np0005634017 podman[90246]: 2026-02-28 09:34:31.297427826 +0000 UTC m=+0.068524311 container create d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9 (image=quay.io/ceph/ceph:v20, name=dreamy_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 04:34:31 np0005634017 podman[90246]: 2026-02-28 09:34:31.249465648 +0000 UTC m=+0.020562173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:31 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/748596393; not ready for session (expect reconnect)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:31 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb 28 04:34:31 np0005634017 systemd[1]: Started libpod-conmon-d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9.scope.
Feb 28 04:34:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6dda6363d06d168a6e4a1b60e0b8500ca27e39b8447e417cd466a1adf87a08/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6dda6363d06d168a6e4a1b60e0b8500ca27e39b8447e417cd466a1adf87a08/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6dda6363d06d168a6e4a1b60e0b8500ca27e39b8447e417cd466a1adf87a08/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Feb 28 04:34:31 np0005634017 podman[90246]: 2026-02-28 09:34:31.468720684 +0000 UTC m=+0.239817219 container init d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9 (image=quay.io/ceph/ceph:v20, name=dreamy_goldstine, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Feb 28 04:34:31 np0005634017 podman[90246]: 2026-02-28 09:34:31.477953235 +0000 UTC m=+0.249049760 container start d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9 (image=quay.io/ceph/ceph:v20, name=dreamy_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:31 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: from='osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 28 04:34:31 np0005634017 podman[90246]: 2026-02-28 09:34:31.527825927 +0000 UTC m=+0.298922452 container attach d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9 (image=quay.io/ceph/ceph:v20, name=dreamy_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.izimmo(active, since 63s)
Feb 28 04:34:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 28 04:34:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650855312' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 28 04:34:32 np0005634017 dreamy_goldstine[90291]: 
Feb 28 04:34:32 np0005634017 dreamy_goldstine[90291]: {"fsid":"8f528268-ea2d-5d7b-af45-49b405fed6de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1772271269,"num_in_osds":3,"osd_in_since":1772271249,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":893911040,"bytes_avail":42047373312,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-02-28T09:33:08:670336+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-28T09:34:30.913210+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 28 04:34:32 np0005634017 systemd[1]: libpod-d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9.scope: Deactivated successfully.
Feb 28 04:34:32 np0005634017 podman[90246]: 2026-02-28 09:34:32.024540188 +0000 UTC m=+0.795636673 container died d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9 (image=quay.io/ceph/ceph:v20, name=dreamy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:34:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3c6dda6363d06d168a6e4a1b60e0b8500ca27e39b8447e417cd466a1adf87a08-merged.mount: Deactivated successfully.
Feb 28 04:34:32 np0005634017 podman[90246]: 2026-02-28 09:34:32.184602099 +0000 UTC m=+0.955698574 container remove d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9 (image=quay.io/ceph/ceph:v20, name=dreamy_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:32 np0005634017 systemd[1]: libpod-conmon-d72fb3575a7032b6850be140b8e23e1eb658549559c47564b9a4712eb408d4f9.scope: Deactivated successfully.
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.237031523 +0000 UTC m=+0.116760106 container create 404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.19876468 +0000 UTC m=+0.078493253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:32 np0005634017 systemd[1]: Started libpod-conmon-404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45.scope.
Feb 28 04:34:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:32 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/748596393; not ready for session (expect reconnect)
Feb 28 04:34:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:32 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.382479621 +0000 UTC m=+0.262208264 container init 404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_golick, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.389595112 +0000 UTC m=+0.269323675 container start 404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:34:32 np0005634017 vibrant_golick[90426]: 167 167
Feb 28 04:34:32 np0005634017 systemd[1]: libpod-404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45.scope: Deactivated successfully.
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.413476328 +0000 UTC m=+0.293204881 container attach 404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_golick, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.414308492 +0000 UTC m=+0.294037035 container died 404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1da941bc275945e134b1424057a706c1ace6b787df6ca9c26f6e71527e797958-merged.mount: Deactivated successfully.
Feb 28 04:34:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:32 np0005634017 podman[90410]: 2026-02-28 09:34:32.55837615 +0000 UTC m=+0.438105103 container remove 404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:34:32 np0005634017 systemd[1]: libpod-conmon-404e574c9f43c0fd4a72f5073c4d9af90f342e2d9585a85dba5cb9650783bf45.scope: Deactivated successfully.
Feb 28 04:34:32 np0005634017 python3[90467]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:32 np0005634017 podman[90471]: 2026-02-28 09:34:32.660516191 +0000 UTC m=+0.055968525 container create 8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1 (image=quay.io/ceph/ceph:v20, name=xenodochial_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 04:34:32 np0005634017 systemd[1]: Started libpod-conmon-8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1.scope.
Feb 28 04:34:32 np0005634017 podman[90471]: 2026-02-28 09:34:32.629486533 +0000 UTC m=+0.024938957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76f91ef247510137428a75e8961be6348d2745f392c718ab5a04051374da7d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76f91ef247510137428a75e8961be6348d2745f392c718ab5a04051374da7d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:32 np0005634017 podman[90488]: 2026-02-28 09:34:32.750880529 +0000 UTC m=+0.102288106 container create 1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_newton, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:32 np0005634017 podman[90471]: 2026-02-28 09:34:32.781787354 +0000 UTC m=+0.177239788 container init 8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1 (image=quay.io/ceph/ceph:v20, name=xenodochial_nash, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:34:32 np0005634017 podman[90471]: 2026-02-28 09:34:32.789604695 +0000 UTC m=+0.185057059 container start 8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1 (image=quay.io/ceph/ceph:v20, name=xenodochial_nash, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:34:32 np0005634017 podman[90488]: 2026-02-28 09:34:32.707569713 +0000 UTC m=+0.058977350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:32 np0005634017 podman[90471]: 2026-02-28 09:34:32.832597032 +0000 UTC m=+0.228049366 container attach 8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1 (image=quay.io/ceph/ceph:v20, name=xenodochial_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:32 np0005634017 systemd[1]: Started libpod-conmon-1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781.scope.
Feb 28 04:34:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa228d383a852041ee0ff03651c41184bf73d19cfd254902dd461afd286c7cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa228d383a852041ee0ff03651c41184bf73d19cfd254902dd461afd286c7cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa228d383a852041ee0ff03651c41184bf73d19cfd254902dd461afd286c7cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa228d383a852041ee0ff03651c41184bf73d19cfd254902dd461afd286c7cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:32 np0005634017 podman[90488]: 2026-02-28 09:34:32.910843367 +0000 UTC m=+0.262251044 container init 1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 28 04:34:32 np0005634017 podman[90488]: 2026-02-28 09:34:32.918450963 +0000 UTC m=+0.269858550 container start 1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_newton, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:34:32 np0005634017 podman[90488]: 2026-02-28 09:34:32.955694867 +0000 UTC m=+0.307102484 container attach 1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4096103522' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/748596393; not ready for session (expect reconnect)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:33 np0005634017 zen_newton[90512]: [
Feb 28 04:34:33 np0005634017 zen_newton[90512]:    {
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "available": false,
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "being_replaced": false,
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "ceph_device_lvm": false,
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "lsm_data": {},
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "lvs": [],
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "path": "/dev/sr0",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "rejected_reasons": [
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "Has a FileSystem",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "Insufficient space (<5GB)"
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        ],
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        "sys_api": {
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "actuators": null,
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "device_nodes": [
Feb 28 04:34:33 np0005634017 zen_newton[90512]:                "sr0"
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            ],
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "devname": "sr0",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "human_readable_size": "482.00 KB",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "id_bus": "ata",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "model": "QEMU DVD-ROM",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "nr_requests": "2",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "parent": "/dev/sr0",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "partitions": {},
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "path": "/dev/sr0",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "removable": "1",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "rev": "2.5+",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "ro": "0",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "rotational": "1",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "sas_address": "",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "sas_device_handle": "",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "scheduler_mode": "mq-deadline",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "sectors": 0,
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "sectorsize": "2048",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "size": 493568.0,
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "support_discard": "2048",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "type": "disk",
Feb 28 04:34:33 np0005634017 zen_newton[90512]:            "vendor": "QEMU"
Feb 28 04:34:33 np0005634017 zen_newton[90512]:        }
Feb 28 04:34:33 np0005634017 zen_newton[90512]:    }
Feb 28 04:34:33 np0005634017 zen_newton[90512]: ]
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4096103522' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:33 np0005634017 systemd[1]: libpod-1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781.scope: Deactivated successfully.
Feb 28 04:34:33 np0005634017 podman[90488]: 2026-02-28 09:34:33.559681825 +0000 UTC m=+0.911089432 container died 1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_newton, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4096103522' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Feb 28 04:34:33 np0005634017 xenodochial_nash[90505]: pool 'vms' created
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:33 np0005634017 systemd[1]: libpod-8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1.scope: Deactivated successfully.
Feb 28 04:34:33 np0005634017 podman[90471]: 2026-02-28 09:34:33.590433125 +0000 UTC m=+0.985885469 container died 8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1 (image=quay.io/ceph/ceph:v20, name=xenodochial_nash, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b76f91ef247510137428a75e8961be6348d2745f392c718ab5a04051374da7d9-merged.mount: Deactivated successfully.
Feb 28 04:34:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fa228d383a852041ee0ff03651c41184bf73d19cfd254902dd461afd286c7cbc-merged.mount: Deactivated successfully.
Feb 28 04:34:33 np0005634017 podman[90471]: 2026-02-28 09:34:33.70045595 +0000 UTC m=+1.095908284 container remove 8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1 (image=quay.io/ceph/ceph:v20, name=xenodochial_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:34:33 np0005634017 systemd[1]: libpod-conmon-8bdd858e436839debf5906c93cb36f980df650c2d5332b10f2d68797e09703a1.scope: Deactivated successfully.
Feb 28 04:34:33 np0005634017 podman[90488]: 2026-02-28 09:34:33.713445738 +0000 UTC m=+1.064853315 container remove 1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 04:34:33 np0005634017 systemd[1]: libpod-conmon-1b39061a507944e3e9fde4136e971991070233c19782b3a56eb84c5ffe956781.scope: Deactivated successfully.
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43681k
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43681k
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44729685: error parsing value: Value '44729685' is below minimum 939524096
Feb 28 04:34:33 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44729685: error parsing value: Value '44729685' is below minimum 939524096
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.850 iops: 8665.688 elapsed_sec: 0.346
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: log_channel(cluster) log [WRN] : OSD bench result of 8665.688233 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 0 waiting for initial osdmap
Feb 28 04:34:33 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2[89318]: 2026-02-28T09:34:33.837+0000 7fecc91b4640 -1 osd.2 0 waiting for initial osdmap
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 set_numa_affinity not setting numa affinity
Feb 28 04:34:33 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-osd-2[89318]: 2026-02-28T09:34:33.862+0000 7fecc37a7640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 28 04:34:33 np0005634017 ceph-osd[89322]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb 28 04:34:33 np0005634017 python3[91328]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.049023527 +0000 UTC m=+0.047173186 container create f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2 (image=quay.io/ceph/ceph:v20, name=nifty_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:34 np0005634017 systemd[1]: Started libpod-conmon-f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2.scope.
Feb 28 04:34:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.030283407 +0000 UTC m=+0.028433096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba52930d07a79f50fdc6c6d9a5ffea3a1721c237b227880b861036794ef57049/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba52930d07a79f50fdc6c6d9a5ffea3a1721c237b227880b861036794ef57049/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.145640192 +0000 UTC m=+0.143789851 container init f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2 (image=quay.io/ceph/ceph:v20, name=nifty_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.150663625 +0000 UTC m=+0.148813284 container start f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2 (image=quay.io/ceph/ceph:v20, name=nifty_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.156123669 +0000 UTC m=+0.154273358 container attach f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2 (image=quay.io/ceph/ceph:v20, name=nifty_raman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.247467255 +0000 UTC m=+0.050652495 container create b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lehmann, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:34 np0005634017 systemd[1]: Started libpod-conmon-b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822.scope.
Feb 28 04:34:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.221483489 +0000 UTC m=+0.024668719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.320911234 +0000 UTC m=+0.124096524 container init b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.325672189 +0000 UTC m=+0.128857389 container start b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.328854409 +0000 UTC m=+0.132039649 container attach b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:34 np0005634017 mystifying_lehmann[91425]: 167 167
Feb 28 04:34:34 np0005634017 systemd[1]: libpod-b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822.scope: Deactivated successfully.
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.331487963 +0000 UTC m=+0.134673193 container died b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 04:34:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8f56d19d22536580501bf022655cd702748fbd3bf903f69802556e2e11ae511d-merged.mount: Deactivated successfully.
Feb 28 04:34:34 np0005634017 ceph-mgr[76610]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/748596393; not ready for session (expect reconnect)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:34 np0005634017 ceph-mgr[76610]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 28 04:34:34 np0005634017 podman[91407]: 2026-02-28 09:34:34.373357539 +0000 UTC m=+0.176542749 container remove b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 04:34:34 np0005634017 systemd[1]: libpod-conmon-b0f770a4564c71e0e26b209c9a90d4ca12ea9c89395b9beb3b76ecf1a1449822.scope: Deactivated successfully.
Feb 28 04:34:34 np0005634017 podman[91466]: 2026-02-28 09:34:34.486414189 +0000 UTC m=+0.043476782 container create 9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:34 np0005634017 systemd[1]: Started libpod-conmon-9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0.scope.
Feb 28 04:34:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6cbbcb1c068120b7cf9eaec37111e6aa6331829341d95648ed56d671065944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 podman[91466]: 2026-02-28 09:34:34.467778152 +0000 UTC m=+0.024840845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6cbbcb1c068120b7cf9eaec37111e6aa6331829341d95648ed56d671065944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6cbbcb1c068120b7cf9eaec37111e6aa6331829341d95648ed56d671065944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6cbbcb1c068120b7cf9eaec37111e6aa6331829341d95648ed56d671065944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6cbbcb1c068120b7cf9eaec37111e6aa6331829341d95648ed56d671065944/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4096103522' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: OSD bench result of 8665.688233 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 28 04:34:34 np0005634017 podman[91466]: 2026-02-28 09:34:34.60055524 +0000 UTC m=+0.157617853 container init 9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcclintock, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4052087317' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:34 np0005634017 podman[91466]: 2026-02-28 09:34:34.607388734 +0000 UTC m=+0.164451337 container start 9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:34:34 np0005634017 podman[91466]: 2026-02-28 09:34:34.61186111 +0000 UTC m=+0.168923733 container attach 9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4052087317' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb 28 04:34:34 np0005634017 nifty_raman[91393]: pool 'volumes' created
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393] boot
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 28 04:34:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 28 04:34:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:34 np0005634017 ceph-osd[89322]: osd.2 17 state: booting -> active
Feb 28 04:34:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:34 np0005634017 systemd[1]: libpod-f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2.scope: Deactivated successfully.
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.845231277 +0000 UTC m=+0.843380936 container died f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2 (image=quay.io/ceph/ceph:v20, name=nifty_raman, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ba52930d07a79f50fdc6c6d9a5ffea3a1721c237b227880b861036794ef57049-merged.mount: Deactivated successfully.
Feb 28 04:34:34 np0005634017 podman[91377]: 2026-02-28 09:34:34.879732393 +0000 UTC m=+0.877882052 container remove f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2 (image=quay.io/ceph/ceph:v20, name=nifty_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 04:34:34 np0005634017 systemd[1]: libpod-conmon-f1a46ae179409ea250b9b0eb12c4207db3da3a1f30a8c23cb281a45dee0d90a2.scope: Deactivated successfully.
Feb 28 04:34:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v41: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 28 04:34:34 np0005634017 tender_mcclintock[91483]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:34:34 np0005634017 tender_mcclintock[91483]: --> All data devices are unavailable
Feb 28 04:34:35 np0005634017 systemd[1]: libpod-9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0.scope: Deactivated successfully.
Feb 28 04:34:35 np0005634017 podman[91466]: 2026-02-28 09:34:35.007602043 +0000 UTC m=+0.564664666 container died 9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8b6cbbcb1c068120b7cf9eaec37111e6aa6331829341d95648ed56d671065944-merged.mount: Deactivated successfully.
Feb 28 04:34:35 np0005634017 podman[91466]: 2026-02-28 09:34:35.060895111 +0000 UTC m=+0.617957704 container remove 9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcclintock, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:35 np0005634017 systemd[1]: libpod-conmon-9050cb286ffc5d743fd950ca2403900142a97255cc463b34b2b3e89ba3ec5af0.scope: Deactivated successfully.
Feb 28 04:34:35 np0005634017 python3[91554]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.274916579 +0000 UTC m=+0.038353176 container create 5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7 (image=quay.io/ceph/ceph:v20, name=interesting_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:35 np0005634017 systemd[1]: Started libpod-conmon-5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7.scope.
Feb 28 04:34:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d4ecd25c3991e1537ce3d1d1e771dbe4dd4bf9c3ead063670df1e920b472a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d4ecd25c3991e1537ce3d1d1e771dbe4dd4bf9c3ead063670df1e920b472a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.260620915 +0000 UTC m=+0.024057532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.368976112 +0000 UTC m=+0.132412739 container init 5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7 (image=quay.io/ceph/ceph:v20, name=interesting_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.37421687 +0000 UTC m=+0.137653477 container start 5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7 (image=quay.io/ceph/ceph:v20, name=interesting_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.378238504 +0000 UTC m=+0.141675151 container attach 5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7 (image=quay.io/ceph/ceph:v20, name=interesting_archimedes, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.453809873 +0000 UTC m=+0.043334767 container create db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kowalevski, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:35 np0005634017 systemd[1]: Started libpod-conmon-db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d.scope.
Feb 28 04:34:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.438823769 +0000 UTC m=+0.028348693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.537298677 +0000 UTC m=+0.126823571 container init db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.541447394 +0000 UTC m=+0.130972288 container start db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:34:35 np0005634017 pedantic_kowalevski[91655]: 167 167
Feb 28 04:34:35 np0005634017 systemd[1]: libpod-db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d.scope: Deactivated successfully.
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.545114048 +0000 UTC m=+0.134638962 container attach db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kowalevski, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.545284403 +0000 UTC m=+0.134809317 container died db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:34:35 np0005634017 podman[91637]: 2026-02-28 09:34:35.581943511 +0000 UTC m=+0.171468415 container remove db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: Adjusting osd_memory_target on compute-0 to 43681k
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: Unable to set osd_memory_target on compute-0 to 44729685: error parsing value: Value '44729685' is below minimum 939524096
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4052087317' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4052087317' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: osd.2 [v2:192.168.122.100:6810/748596393,v1:192.168.122.100:6811/748596393] boot
Feb 28 04:34:35 np0005634017 systemd[1]: libpod-conmon-db3a6cf39d7f1c0f312ea65c91506dcdc99af8f1cdfb1e010b55dfe604fda34d.scope: Deactivated successfully.
Feb 28 04:34:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bd6c71056a110c04962835f15f98d316788676b34d70254ab940bb1b43806a66-merged.mount: Deactivated successfully.
Feb 28 04:34:35 np0005634017 podman[91695]: 2026-02-28 09:34:35.710681565 +0000 UTC m=+0.034775045 container create 74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:34:35 np0005634017 systemd[1]: Started libpod-conmon-74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31.scope.
Feb 28 04:34:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1656c40f44870d3827ad0ef96acfd88b182f0f70d883b009581e1bbd35af8fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1522386905' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1656c40f44870d3827ad0ef96acfd88b182f0f70d883b009581e1bbd35af8fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1656c40f44870d3827ad0ef96acfd88b182f0f70d883b009581e1bbd35af8fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1656c40f44870d3827ad0ef96acfd88b182f0f70d883b009581e1bbd35af8fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:35 np0005634017 podman[91695]: 2026-02-28 09:34:35.69532325 +0000 UTC m=+0.019416770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:35 np0005634017 podman[91695]: 2026-02-28 09:34:35.803700668 +0000 UTC m=+0.127794158 container init 74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:35 np0005634017 podman[91695]: 2026-02-28 09:34:35.816203382 +0000 UTC m=+0.140296872 container start 74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:35 np0005634017 podman[91695]: 2026-02-28 09:34:35.819454434 +0000 UTC m=+0.143547934 container attach 74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1522386905' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb 28 04:34:35 np0005634017 interesting_archimedes[91621]: pool 'backups' created
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb 28 04:34:35 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:35 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:35 np0005634017 systemd[1]: libpod-5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7.scope: Deactivated successfully.
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.856728749 +0000 UTC m=+0.620165356 container died 5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7 (image=quay.io/ceph/ceph:v20, name=interesting_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 04:34:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-22d4ecd25c3991e1537ce3d1d1e771dbe4dd4bf9c3ead063670df1e920b472a6-merged.mount: Deactivated successfully.
Feb 28 04:34:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:35 np0005634017 podman[91605]: 2026-02-28 09:34:35.896972239 +0000 UTC m=+0.660408876 container remove 5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7 (image=quay.io/ceph/ceph:v20, name=interesting_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:35 np0005634017 systemd[1]: libpod-conmon-5ce3060d700f15aecc75d0bbc8a1e88dde1bb36647e3c4e954ca1eed502a29e7.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]: {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:    "0": [
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:        {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "devices": [
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "/dev/loop3"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            ],
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_name": "ceph_lv0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_size": "21470642176",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "name": "ceph_lv0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "tags": {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.crush_device_class": "",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.encrypted": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osd_id": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.type": "block",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.vdo": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.with_tpm": "0"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            },
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "type": "block",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "vg_name": "ceph_vg0"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:        }
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:    ],
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:    "1": [
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:        {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "devices": [
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "/dev/loop4"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            ],
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_name": "ceph_lv1",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_size": "21470642176",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "name": "ceph_lv1",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "tags": {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.crush_device_class": "",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.encrypted": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osd_id": "1",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.type": "block",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.vdo": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.with_tpm": "0"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            },
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "type": "block",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "vg_name": "ceph_vg1"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:        }
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:    ],
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:    "2": [
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:        {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "devices": [
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "/dev/loop5"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            ],
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_name": "ceph_lv2",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_size": "21470642176",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "name": "ceph_lv2",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "tags": {
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.crush_device_class": "",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.encrypted": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osd_id": "2",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.type": "block",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.vdo": "0",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:                "ceph.with_tpm": "0"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            },
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "type": "block",
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:            "vg_name": "ceph_vg2"
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:        }
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]:    ]
Feb 28 04:34:36 np0005634017 distracted_agnesi[91711]: }
Feb 28 04:34:36 np0005634017 python3[91759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:36 np0005634017 systemd[1]: libpod-74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 podman[91695]: 2026-02-28 09:34:36.168446264 +0000 UTC m=+0.492539794 container died 74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 04:34:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d1656c40f44870d3827ad0ef96acfd88b182f0f70d883b009581e1bbd35af8fa-merged.mount: Deactivated successfully.
Feb 28 04:34:36 np0005634017 podman[91695]: 2026-02-28 09:34:36.224517551 +0000 UTC m=+0.548611031 container remove 74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:36 np0005634017 systemd[1]: libpod-conmon-74108f86f6f1bf83c77005cf370947eedf29e7a1becdefbcbaf5551288b45e31.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.244198848 +0000 UTC m=+0.062073798 container create cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09 (image=quay.io/ceph/ceph:v20, name=gallant_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:36 np0005634017 systemd[1]: Started libpod-conmon-cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09.scope.
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.223513892 +0000 UTC m=+0.041388882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93d3faee8224cc534c44b794eba52f2c3a168951ed429ef42f7dd519167fd32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93d3faee8224cc534c44b794eba52f2c3a168951ed429ef42f7dd519167fd32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.34140844 +0000 UTC m=+0.159283420 container init cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09 (image=quay.io/ceph/ceph:v20, name=gallant_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.347619756 +0000 UTC m=+0.165494716 container start cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09 (image=quay.io/ceph/ceph:v20, name=gallant_goldwasser, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.351770013 +0000 UTC m=+0.169644973 container attach cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09 (image=quay.io/ceph/ceph:v20, name=gallant_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1522386905' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1522386905' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:36 np0005634017 podman[91875]: 2026-02-28 09:34:36.703061938 +0000 UTC m=+0.054804663 container create f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328782861' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:36 np0005634017 systemd[1]: Started libpod-conmon-f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235.scope.
Feb 28 04:34:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:36 np0005634017 podman[91875]: 2026-02-28 09:34:36.77556923 +0000 UTC m=+0.127311985 container init f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:34:36 np0005634017 podman[91875]: 2026-02-28 09:34:36.681553349 +0000 UTC m=+0.033296094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:36 np0005634017 podman[91875]: 2026-02-28 09:34:36.781740325 +0000 UTC m=+0.133483070 container start f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:34:36 np0005634017 podman[91875]: 2026-02-28 09:34:36.785683966 +0000 UTC m=+0.137426691 container attach f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 28 04:34:36 np0005634017 dazzling_keller[91893]: 167 167
Feb 28 04:34:36 np0005634017 systemd[1]: libpod-f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb 28 04:34:36 np0005634017 podman[91898]: 2026-02-28 09:34:36.843611466 +0000 UTC m=+0.038781459 container died f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328782861' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb 28 04:34:36 np0005634017 gallant_goldwasser[91790]: pool 'images' created
Feb 28 04:34:36 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb 28 04:34:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:36 np0005634017 systemd[1]: libpod-cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.868173282 +0000 UTC m=+0.686048242 container died cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09 (image=quay.io/ceph/ceph:v20, name=gallant_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-680a0bf9cbea766a2eb6479c07855d532978b9ff1a818a83bf1f36d0c02d5f00-merged.mount: Deactivated successfully.
Feb 28 04:34:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b93d3faee8224cc534c44b794eba52f2c3a168951ed429ef42f7dd519167fd32-merged.mount: Deactivated successfully.
Feb 28 04:34:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v44: 5 pgs: 1 creating+peering, 1 active+clean, 3 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:36 np0005634017 podman[91762]: 2026-02-28 09:34:36.917757995 +0000 UTC m=+0.735632995 container remove cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09 (image=quay.io/ceph/ceph:v20, name=gallant_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:36 np0005634017 systemd[1]: libpod-conmon-cfd75f88bb0738a37286667a9c56effb206ec3ddca3f08cebd7aed465eeeff09.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 podman[91898]: 2026-02-28 09:34:36.932147933 +0000 UTC m=+0.127317956 container remove f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:36 np0005634017 systemd[1]: libpod-conmon-f1231fe7ee5d94dbd5b39149bba197c2e435e52d8864ae24dc0e920c684a3235.scope: Deactivated successfully.
Feb 28 04:34:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.08431124 +0000 UTC m=+0.052691103 container create 69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:37 np0005634017 systemd[1]: Started libpod-conmon-69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9.scope.
Feb 28 04:34:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c019d408f719af7df64ce7f0f899e2163294bab9d7c0677221847fca1f332459/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c019d408f719af7df64ce7f0f899e2163294bab9d7c0677221847fca1f332459/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c019d408f719af7df64ce7f0f899e2163294bab9d7c0677221847fca1f332459/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c019d408f719af7df64ce7f0f899e2163294bab9d7c0677221847fca1f332459/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.062895434 +0000 UTC m=+0.031275287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.176537021 +0000 UTC m=+0.144916944 container init 69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.185199096 +0000 UTC m=+0.153578959 container start 69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.203427932 +0000 UTC m=+0.171807795 container attach 69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:37 np0005634017 python3[91968]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.325770485 +0000 UTC m=+0.059495435 container create 4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15 (image=quay.io/ceph/ceph:v20, name=sleepy_poitras, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:37 np0005634017 systemd[1]: Started libpod-conmon-4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15.scope.
Feb 28 04:34:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb8c27f7c849d8c3548b4aa09ab1628b6e86cbe6fdd7591c90360e815d73ffb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb8c27f7c849d8c3548b4aa09ab1628b6e86cbe6fdd7591c90360e815d73ffb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.396691593 +0000 UTC m=+0.130416543 container init 4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15 (image=quay.io/ceph/ceph:v20, name=sleepy_poitras, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.307324103 +0000 UTC m=+0.041049043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.404762181 +0000 UTC m=+0.138487101 container start 4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15 (image=quay.io/ceph/ceph:v20, name=sleepy_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.41213429 +0000 UTC m=+0.145859250 container attach 4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15 (image=quay.io/ceph/ceph:v20, name=sleepy_poitras, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/328782861' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/328782861' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:37 np0005634017 lvm[92095]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:37 np0005634017 lvm[92093]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:37 np0005634017 lvm[92092]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:37 np0005634017 lvm[92095]: VG ceph_vg2 finished
Feb 28 04:34:37 np0005634017 lvm[92093]: VG ceph_vg1 finished
Feb 28 04:34:37 np0005634017 lvm[92092]: VG ceph_vg0 finished
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/565118394' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/565118394' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb 28 04:34:37 np0005634017 sleepy_poitras[91996]: pool 'cephfs.cephfs.meta' created
Feb 28 04:34:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb 28 04:34:37 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:37 np0005634017 systemd[1]: libpod-4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15.scope: Deactivated successfully.
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.879253643 +0000 UTC m=+0.612978553 container died 4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15 (image=quay.io/ceph/ceph:v20, name=sleepy_poitras, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:37 np0005634017 infallible_mestorf[91976]: {}
Feb 28 04:34:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fdb8c27f7c849d8c3548b4aa09ab1628b6e86cbe6fdd7591c90360e815d73ffb-merged.mount: Deactivated successfully.
Feb 28 04:34:37 np0005634017 podman[91981]: 2026-02-28 09:34:37.916797706 +0000 UTC m=+0.650522636 container remove 4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15 (image=quay.io/ceph/ceph:v20, name=sleepy_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:37 np0005634017 systemd[1]: libpod-69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9.scope: Deactivated successfully.
Feb 28 04:34:37 np0005634017 systemd[1]: libpod-69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9.scope: Consumed 1.066s CPU time.
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.922814306 +0000 UTC m=+0.891194159 container died 69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:37 np0005634017 systemd[1]: libpod-conmon-4abcf757d2263f5a944bb0080d10cb9d69e935ac4983fc9d0b3f20bc6ce0ff15.scope: Deactivated successfully.
Feb 28 04:34:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c019d408f719af7df64ce7f0f899e2163294bab9d7c0677221847fca1f332459-merged.mount: Deactivated successfully.
Feb 28 04:34:37 np0005634017 podman[91941]: 2026-02-28 09:34:37.959857555 +0000 UTC m=+0.928237388 container remove 69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:34:37 np0005634017 systemd[1]: libpod-conmon-69603ba24ea3dec93ec0aec5811fd81d01f758ea3c4e6ae0c8d370f4f964e0d9.scope: Deactivated successfully.
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:38 np0005634017 python3[92171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:38 np0005634017 podman[92177]: 2026-02-28 09:34:38.26454947 +0000 UTC m=+0.054497823 container create efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9 (image=quay.io/ceph/ceph:v20, name=boring_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:34:38 np0005634017 systemd[1]: Started libpod-conmon-efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9.scope.
Feb 28 04:34:38 np0005634017 podman[92177]: 2026-02-28 09:34:38.242740963 +0000 UTC m=+0.032689296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75655fd1862185e3c72d01168a547241487b69961c9681f89afb923bf955b139/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75655fd1862185e3c72d01168a547241487b69961c9681f89afb923bf955b139/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:38 np0005634017 podman[92177]: 2026-02-28 09:34:38.363412559 +0000 UTC m=+0.153360922 container init efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9 (image=quay.io/ceph/ceph:v20, name=boring_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:34:38 np0005634017 podman[92177]: 2026-02-28 09:34:38.371596071 +0000 UTC m=+0.161544414 container start efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9 (image=quay.io/ceph/ceph:v20, name=boring_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:38 np0005634017 podman[92177]: 2026-02-28 09:34:38.375356797 +0000 UTC m=+0.165305190 container attach efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9 (image=quay.io/ceph/ceph:v20, name=boring_heyrovsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/565118394' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/565118394' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3409489222' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb 28 04:34:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v46: 6 pgs: 1 creating+peering, 2 active+clean, 3 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3409489222' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb 28 04:34:38 np0005634017 boring_heyrovsky[92192]: pool 'cephfs.cephfs.data' created
Feb 28 04:34:38 np0005634017 systemd[1]: libpod-efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9.scope: Deactivated successfully.
Feb 28 04:34:38 np0005634017 podman[92177]: 2026-02-28 09:34:38.949641023 +0000 UTC m=+0.739589346 container died efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9 (image=quay.io/ceph/ceph:v20, name=boring_heyrovsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:34:39 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb 28 04:34:39 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-75655fd1862185e3c72d01168a547241487b69961c9681f89afb923bf955b139-merged.mount: Deactivated successfully.
Feb 28 04:34:39 np0005634017 podman[92177]: 2026-02-28 09:34:39.466518084 +0000 UTC m=+1.256466437 container remove efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9 (image=quay.io/ceph/ceph:v20, name=boring_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:39 np0005634017 systemd[1]: libpod-conmon-efc28bc748472f6c553b8b3a9eb5ffa4827e866ecac47558c8b7e011192d1ff9.scope: Deactivated successfully.
Feb 28 04:34:39 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3409489222' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 28 04:34:39 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3409489222' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 28 04:34:39 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:34:39 np0005634017 python3[92258]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb 28 04:34:39 np0005634017 podman[92259]: 2026-02-28 09:34:39.950033302 +0000 UTC m=+0.079145281 container create bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321 (image=quay.io/ceph/ceph:v20, name=frosty_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb 28 04:34:39 np0005634017 podman[92259]: 2026-02-28 09:34:39.898590676 +0000 UTC m=+0.027702735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb 28 04:34:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:34:40 np0005634017 systemd[1]: Started libpod-conmon-bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321.scope.
Feb 28 04:34:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8379bf69903843c385461a14984057ed9f994fc3dbf4fa8b97599df19c53025/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8379bf69903843c385461a14984057ed9f994fc3dbf4fa8b97599df19c53025/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:40 np0005634017 podman[92259]: 2026-02-28 09:34:40.120801716 +0000 UTC m=+0.249913705 container init bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321 (image=quay.io/ceph/ceph:v20, name=frosty_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:34:40 np0005634017 podman[92259]: 2026-02-28 09:34:40.128941037 +0000 UTC m=+0.258053016 container start bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321 (image=quay.io/ceph/ceph:v20, name=frosty_faraday, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:40 np0005634017 podman[92259]: 2026-02-28 09:34:40.132287691 +0000 UTC m=+0.261399750 container attach bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321 (image=quay.io/ceph/ceph:v20, name=frosty_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1329866428' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1329866428' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v49: 7 pgs: 5 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1329866428' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb 28 04:34:40 np0005634017 frosty_faraday[92274]: enabled application 'rbd' on pool 'vms'
Feb 28 04:34:40 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb 28 04:34:41 np0005634017 systemd[1]: libpod-bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321.scope: Deactivated successfully.
Feb 28 04:34:41 np0005634017 podman[92299]: 2026-02-28 09:34:41.042264872 +0000 UTC m=+0.027215292 container died bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321 (image=quay.io/ceph/ceph:v20, name=frosty_faraday, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 04:34:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d8379bf69903843c385461a14984057ed9f994fc3dbf4fa8b97599df19c53025-merged.mount: Deactivated successfully.
Feb 28 04:34:41 np0005634017 podman[92299]: 2026-02-28 09:34:41.07895338 +0000 UTC m=+0.063903800 container remove bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321 (image=quay.io/ceph/ceph:v20, name=frosty_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:34:41 np0005634017 systemd[1]: libpod-conmon-bad128d1f150ea2a74edf868fed442bea87337e17fcba89e56f92622a9b0f321.scope: Deactivated successfully.
Feb 28 04:34:41 np0005634017 python3[92339]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:41 np0005634017 podman[92340]: 2026-02-28 09:34:41.455696654 +0000 UTC m=+0.043144168 container create e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b (image=quay.io/ceph/ceph:v20, name=priceless_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 04:34:41 np0005634017 systemd[1]: Started libpod-conmon-e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b.scope.
Feb 28 04:34:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2e0a3db74f63f38b8839e3a31adf0ebcfcffc1bbde21aaba7a011dddb7294/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2e0a3db74f63f38b8839e3a31adf0ebcfcffc1bbde21aaba7a011dddb7294/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:41 np0005634017 podman[92340]: 2026-02-28 09:34:41.535765173 +0000 UTC m=+0.123212707 container init e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b (image=quay.io/ceph/ceph:v20, name=priceless_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:41 np0005634017 podman[92340]: 2026-02-28 09:34:41.439916429 +0000 UTC m=+0.027363963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:41 np0005634017 podman[92340]: 2026-02-28 09:34:41.543213323 +0000 UTC m=+0.130660877 container start e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b (image=quay.io/ceph/ceph:v20, name=priceless_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:41 np0005634017 podman[92340]: 2026-02-28 09:34:41.546870766 +0000 UTC m=+0.134318360 container attach e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b (image=quay.io/ceph/ceph:v20, name=priceless_varahamihira, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:34:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb 28 04:34:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/444874724' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 28 04:34:41 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1329866428' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 28 04:34:41 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/444874724' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 28 04:34:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb 28 04:34:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/444874724' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 28 04:34:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb 28 04:34:42 np0005634017 priceless_varahamihira[92354]: enabled application 'rbd' on pool 'volumes'
Feb 28 04:34:42 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb 28 04:34:42 np0005634017 systemd[1]: libpod-e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b.scope: Deactivated successfully.
Feb 28 04:34:42 np0005634017 podman[92340]: 2026-02-28 09:34:42.751649578 +0000 UTC m=+1.339097142 container died e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b (image=quay.io/ceph/ceph:v20, name=priceless_varahamihira, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:34:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e4a2e0a3db74f63f38b8839e3a31adf0ebcfcffc1bbde21aaba7a011dddb7294-merged.mount: Deactivated successfully.
Feb 28 04:34:42 np0005634017 podman[92340]: 2026-02-28 09:34:42.795222197 +0000 UTC m=+1.382669751 container remove e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b (image=quay.io/ceph/ceph:v20, name=priceless_varahamihira, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:42 np0005634017 systemd[1]: libpod-conmon-e4300d2ed9181b4fa0e8f0676f9d45f1f29ecf67698dcfff30e99fac4a7ef81b.scope: Deactivated successfully.
Feb 28 04:34:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:43 np0005634017 python3[92415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.08780249 +0000 UTC m=+0.039902947 container create 25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0 (image=quay.io/ceph/ceph:v20, name=determined_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:34:43 np0005634017 systemd[1]: Started libpod-conmon-25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0.scope.
Feb 28 04:34:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3382c80ac40fc94568d1925bcfd7c36fb03afda91dc5346c72daaa99b09d408c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3382c80ac40fc94568d1925bcfd7c36fb03afda91dc5346c72daaa99b09d408c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.149640454 +0000 UTC m=+0.101741001 container init 25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0 (image=quay.io/ceph/ceph:v20, name=determined_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.153548834 +0000 UTC m=+0.105649321 container start 25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0 (image=quay.io/ceph/ceph:v20, name=determined_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.157259839 +0000 UTC m=+0.109360326 container attach 25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0 (image=quay.io/ceph/ceph:v20, name=determined_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.069177024 +0000 UTC m=+0.021277531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3556524608' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3556524608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb 28 04:34:43 np0005634017 determined_easley[92432]: enabled application 'rbd' on pool 'backups'
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/444874724' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 28 04:34:43 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3556524608' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 28 04:34:43 np0005634017 systemd[1]: libpod-25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0.scope: Deactivated successfully.
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.75111926 +0000 UTC m=+0.703219727 container died 25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0 (image=quay.io/ceph/ceph:v20, name=determined_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3382c80ac40fc94568d1925bcfd7c36fb03afda91dc5346c72daaa99b09d408c-merged.mount: Deactivated successfully.
Feb 28 04:34:43 np0005634017 podman[92416]: 2026-02-28 09:34:43.787107195 +0000 UTC m=+0.739207652 container remove 25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0 (image=quay.io/ceph/ceph:v20, name=determined_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:34:43 np0005634017 systemd[1]: libpod-conmon-25ea5639c79e257506338f4ba013d960cf43db4c907fcd44f88be74418494db0.scope: Deactivated successfully.
Feb 28 04:34:44 np0005634017 python3[92494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.195405372 +0000 UTC m=+0.054521439 container create 55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6 (image=quay.io/ceph/ceph:v20, name=infallible_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:34:44 np0005634017 systemd[1]: Started libpod-conmon-55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6.scope.
Feb 28 04:34:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039796183a2c81e2663b106f11dad055461f1032fafaee59d62c65371e18baa2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039796183a2c81e2663b106f11dad055461f1032fafaee59d62c65371e18baa2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.264234273 +0000 UTC m=+0.123350310 container init 55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6 (image=quay.io/ceph/ceph:v20, name=infallible_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.174267545 +0000 UTC m=+0.033383592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.270003356 +0000 UTC m=+0.129119373 container start 55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6 (image=quay.io/ceph/ceph:v20, name=infallible_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.272855106 +0000 UTC m=+0.131971123 container attach 55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6 (image=quay.io/ceph/ceph:v20, name=infallible_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3504860030' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3556524608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3504860030' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3504860030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb 28 04:34:44 np0005634017 infallible_mclaren[92510]: enabled application 'rbd' on pool 'images'
Feb 28 04:34:44 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb 28 04:34:44 np0005634017 systemd[1]: libpod-55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6.scope: Deactivated successfully.
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.775480324 +0000 UTC m=+0.634596361 container died 55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6 (image=quay.io/ceph/ceph:v20, name=infallible_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-039796183a2c81e2663b106f11dad055461f1032fafaee59d62c65371e18baa2-merged.mount: Deactivated successfully.
Feb 28 04:34:44 np0005634017 podman[92495]: 2026-02-28 09:34:44.813199388 +0000 UTC m=+0.672315425 container remove 55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6 (image=quay.io/ceph/ceph:v20, name=infallible_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:34:44 np0005634017 systemd[1]: libpod-conmon-55f6a4519286a5421f435df9c3d7532dd25e8f472e9da575afc7dadf227ba4e6.scope: Deactivated successfully.
Feb 28 04:34:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:45 np0005634017 python3[92572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.166446372 +0000 UTC m=+0.045312750 container create 754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5 (image=quay.io/ceph/ceph:v20, name=silly_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:34:45 np0005634017 systemd[1]: Started libpod-conmon-754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5.scope.
Feb 28 04:34:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87c9532768c258c0bbc8abb7cb1447ede219f93c917a09642a2213953b3e182/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87c9532768c258c0bbc8abb7cb1447ede219f93c917a09642a2213953b3e182/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.143268738 +0000 UTC m=+0.022135166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.239099341 +0000 UTC m=+0.117965749 container init 754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5 (image=quay.io/ceph/ceph:v20, name=silly_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.246186571 +0000 UTC m=+0.125052949 container start 754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5 (image=quay.io/ceph/ceph:v20, name=silly_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.25040769 +0000 UTC m=+0.129274118 container attach 754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5 (image=quay.io/ceph/ceph:v20, name=silly_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2129223474' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2129223474' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb 28 04:34:45 np0005634017 silly_cannon[92588]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3504860030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2129223474' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb 28 04:34:45 np0005634017 systemd[1]: libpod-754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5.scope: Deactivated successfully.
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.78897478 +0000 UTC m=+0.667841128 container died 754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5 (image=quay.io/ceph/ceph:v20, name=silly_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f87c9532768c258c0bbc8abb7cb1447ede219f93c917a09642a2213953b3e182-merged.mount: Deactivated successfully.
Feb 28 04:34:45 np0005634017 podman[92573]: 2026-02-28 09:34:45.826404246 +0000 UTC m=+0.705270624 container remove 754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5 (image=quay.io/ceph/ceph:v20, name=silly_cannon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:34:45 np0005634017 systemd[1]: libpod-conmon-754a83f954cc0f4d27bf329ae019500433119590f11e773bc3807015a679a0e5.scope: Deactivated successfully.
Feb 28 04:34:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:46 np0005634017 python3[92650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:46 np0005634017 podman[92651]: 2026-02-28 09:34:46.170264585 +0000 UTC m=+0.043763345 container create e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366 (image=quay.io/ceph/ceph:v20, name=zealous_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:34:46 np0005634017 systemd[1]: Started libpod-conmon-e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366.scope.
Feb 28 04:34:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b868966aa05ac96b33a414bd08fa7738d6d002f05be64b385e8163cc44c23b7a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b868966aa05ac96b33a414bd08fa7738d6d002f05be64b385e8163cc44c23b7a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:46 np0005634017 podman[92651]: 2026-02-28 09:34:46.149727426 +0000 UTC m=+0.023226196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:46 np0005634017 podman[92651]: 2026-02-28 09:34:46.25446513 +0000 UTC m=+0.127963910 container init e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366 (image=quay.io/ceph/ceph:v20, name=zealous_chaplygin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:34:46 np0005634017 podman[92651]: 2026-02-28 09:34:46.260015627 +0000 UTC m=+0.133514397 container start e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366 (image=quay.io/ceph/ceph:v20, name=zealous_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:46 np0005634017 podman[92651]: 2026-02-28 09:34:46.265129231 +0000 UTC m=+0.138628001 container attach e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366 (image=quay.io/ceph/ceph:v20, name=zealous_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1256166274' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2129223474' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1256166274' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1256166274' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb 28 04:34:46 np0005634017 zealous_chaplygin[92666]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb 28 04:34:46 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb 28 04:34:46 np0005634017 systemd[1]: libpod-e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366.scope: Deactivated successfully.
Feb 28 04:34:46 np0005634017 podman[92691]: 2026-02-28 09:34:46.835283984 +0000 UTC m=+0.022302910 container died e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366 (image=quay.io/ceph/ceph:v20, name=zealous_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b868966aa05ac96b33a414bd08fa7738d6d002f05be64b385e8163cc44c23b7a-merged.mount: Deactivated successfully.
Feb 28 04:34:46 np0005634017 podman[92691]: 2026-02-28 09:34:46.87166187 +0000 UTC m=+0.058680816 container remove e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366 (image=quay.io/ceph/ceph:v20, name=zealous_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:46 np0005634017 systemd[1]: libpod-conmon-e860aa4e3dae4400d461db7b9cc5bbc6574e4d601adec925dbe8576e6809e366.scope: Deactivated successfully.
Feb 28 04:34:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:47 np0005634017 python3[92780]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:34:47 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1256166274' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 28 04:34:47 np0005634017 python3[92851]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271287.4366283-37535-257110473872960/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:34:48 np0005634017 python3[92953]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:34:48 np0005634017 python3[93028]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271288.2687378-37549-47375142089675/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=883d68745d10e1de5fbb5b1307baba7a7eeefe5d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:34:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:49 np0005634017 python3[93078]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.332972195 +0000 UTC m=+0.066967449 container create 285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821 (image=quay.io/ceph/ceph:v20, name=intelligent_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb 28 04:34:49 np0005634017 systemd[1]: Started libpod-conmon-285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821.scope.
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.304344698 +0000 UTC m=+0.038339992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a373cfbd9df72341fd3f3ffba0406b03eb2b3aa074cd854565a33af5e00a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a373cfbd9df72341fd3f3ffba0406b03eb2b3aa074cd854565a33af5e00a7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a373cfbd9df72341fd3f3ffba0406b03eb2b3aa074cd854565a33af5e00a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.437747 +0000 UTC m=+0.171742224 container init 285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821 (image=quay.io/ceph/ceph:v20, name=intelligent_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.444748468 +0000 UTC m=+0.178743682 container start 285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821 (image=quay.io/ceph/ceph:v20, name=intelligent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.447962988 +0000 UTC m=+0.181958242 container attach 285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821 (image=quay.io/ceph/ceph:v20, name=intelligent_franklin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 28 04:34:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1555604106' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 28 04:34:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1555604106' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 28 04:34:49 np0005634017 intelligent_franklin[93094]: 
Feb 28 04:34:49 np0005634017 intelligent_franklin[93094]: [global]
Feb 28 04:34:49 np0005634017 intelligent_franklin[93094]: 	fsid = 8f528268-ea2d-5d7b-af45-49b405fed6de
Feb 28 04:34:49 np0005634017 intelligent_franklin[93094]: 	mon_host = 192.168.122.100
Feb 28 04:34:49 np0005634017 intelligent_franklin[93094]: 	rgw_keystone_api_version = 3
Feb 28 04:34:49 np0005634017 systemd[1]: libpod-285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821.scope: Deactivated successfully.
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.899561077 +0000 UTC m=+0.633556321 container died 285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821 (image=quay.io/ceph/ceph:v20, name=intelligent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-400a373cfbd9df72341fd3f3ffba0406b03eb2b3aa074cd854565a33af5e00a7-merged.mount: Deactivated successfully.
Feb 28 04:34:49 np0005634017 podman[93079]: 2026-02-28 09:34:49.943259659 +0000 UTC m=+0.677254893 container remove 285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821 (image=quay.io/ceph/ceph:v20, name=intelligent_franklin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:49 np0005634017 systemd[1]: libpod-conmon-285d8e97d503866bb3d9bf95563724c4c93f44d2c521af20141f6bf2e8fbf821.scope: Deactivated successfully.
Feb 28 04:34:50 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1555604106' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 28 04:34:50 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/1555604106' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 28 04:34:50 np0005634017 python3[93206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:50 np0005634017 podman[93207]: 2026-02-28 09:34:50.464547853 +0000 UTC m=+0.062029411 container create 73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12 (image=quay.io/ceph/ceph:v20, name=upbeat_noyce, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:34:50 np0005634017 systemd[1]: Started libpod-conmon-73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12.scope.
Feb 28 04:34:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:50 np0005634017 podman[93207]: 2026-02-28 09:34:50.436297406 +0000 UTC m=+0.033779014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30baec5d320aaf3f22b439edad026fba7c2a8ee56f0a7e39db2169ee804d0316/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30baec5d320aaf3f22b439edad026fba7c2a8ee56f0a7e39db2169ee804d0316/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30baec5d320aaf3f22b439edad026fba7c2a8ee56f0a7e39db2169ee804d0316/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:50 np0005634017 podman[93207]: 2026-02-28 09:34:50.555947181 +0000 UTC m=+0.153428779 container init 73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12 (image=quay.io/ceph/ceph:v20, name=upbeat_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:50 np0005634017 podman[93207]: 2026-02-28 09:34:50.563568696 +0000 UTC m=+0.161050204 container start 73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12 (image=quay.io/ceph/ceph:v20, name=upbeat_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:50 np0005634017 podman[93207]: 2026-02-28 09:34:50.569607276 +0000 UTC m=+0.167088884 container attach 73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12 (image=quay.io/ceph/ceph:v20, name=upbeat_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 04:34:50 np0005634017 podman[93272]: 2026-02-28 09:34:50.680155604 +0000 UTC m=+0.052666216 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:34:50 np0005634017 podman[93272]: 2026-02-28 09:34:50.762179628 +0000 UTC m=+0.134690170 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 28 04:34:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2222574836' entity='client.admin' 
Feb 28 04:34:51 np0005634017 upbeat_noyce[93240]: set ssl_option
Feb 28 04:34:51 np0005634017 systemd[1]: libpod-73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12.scope: Deactivated successfully.
Feb 28 04:34:51 np0005634017 podman[93207]: 2026-02-28 09:34:51.152752495 +0000 UTC m=+0.750234063 container died 73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12 (image=quay.io/ceph/ceph:v20, name=upbeat_noyce, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-30baec5d320aaf3f22b439edad026fba7c2a8ee56f0a7e39db2169ee804d0316-merged.mount: Deactivated successfully.
Feb 28 04:34:51 np0005634017 podman[93207]: 2026-02-28 09:34:51.186397334 +0000 UTC m=+0.783878852 container remove 73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12 (image=quay.io/ceph/ceph:v20, name=upbeat_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:51 np0005634017 systemd[1]: libpod-conmon-73adb0fff6ccbbd5efdebd2326f2055d6a76e2785d79a926e7e4da5ded2a2e12.scope: Deactivated successfully.
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:51 np0005634017 python3[93479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:51 np0005634017 podman[93530]: 2026-02-28 09:34:51.542712545 +0000 UTC m=+0.042922452 container create d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750 (image=quay.io/ceph/ceph:v20, name=sad_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:34:51 np0005634017 systemd[1]: Started libpod-conmon-d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750.scope.
Feb 28 04:34:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1865265fb8b05b8ac570597a251f3b7bfa035556b04597c22d67dff750272b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1865265fb8b05b8ac570597a251f3b7bfa035556b04597c22d67dff750272b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1865265fb8b05b8ac570597a251f3b7bfa035556b04597c22d67dff750272b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:51 np0005634017 podman[93530]: 2026-02-28 09:34:51.523345678 +0000 UTC m=+0.023555665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:51 np0005634017 podman[93530]: 2026-02-28 09:34:51.642817468 +0000 UTC m=+0.143027405 container init d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750 (image=quay.io/ceph/ceph:v20, name=sad_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 04:34:51 np0005634017 podman[93530]: 2026-02-28 09:34:51.651498473 +0000 UTC m=+0.151708400 container start d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750 (image=quay.io/ceph/ceph:v20, name=sad_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:51 np0005634017 podman[93530]: 2026-02-28 09:34:51.656176675 +0000 UTC m=+0.156386612 container attach d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750 (image=quay.io/ceph/ceph:v20, name=sad_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:52 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:34:52 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb 28 04:34:52 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2222574836' entity='client.admin' 
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:52 np0005634017 sad_lederberg[93547]: Scheduled rgw.rgw update...
Feb 28 04:34:52 np0005634017 systemd[1]: libpod-d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750.scope: Deactivated successfully.
Feb 28 04:34:52 np0005634017 podman[93530]: 2026-02-28 09:34:52.182572003 +0000 UTC m=+0.682781910 container died d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750 (image=quay.io/ceph/ceph:v20, name=sad_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-aa1865265fb8b05b8ac570597a251f3b7bfa035556b04597c22d67dff750272b-merged.mount: Deactivated successfully.
Feb 28 04:34:52 np0005634017 podman[93530]: 2026-02-28 09:34:52.236788932 +0000 UTC m=+0.736998839 container remove d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750 (image=quay.io/ceph/ceph:v20, name=sad_lederberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 04:34:52 np0005634017 systemd[1]: libpod-conmon-d05ff54b7005854ab79836c1fcb9bb7ecff9b241397293985b237769afcfd750.scope: Deactivated successfully.
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.391527327 +0000 UTC m=+0.060141307 container create a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_knuth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 04:34:52 np0005634017 systemd[1]: Started libpod-conmon-a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b.scope.
Feb 28 04:34:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.365727749 +0000 UTC m=+0.034341779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.46679593 +0000 UTC m=+0.135409870 container init a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.473679675 +0000 UTC m=+0.142293625 container start a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_knuth, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.477453791 +0000 UTC m=+0.146067731 container attach a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_knuth, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:52 np0005634017 condescending_knuth[93694]: 167 167
Feb 28 04:34:52 np0005634017 systemd[1]: libpod-a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b.scope: Deactivated successfully.
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.479363775 +0000 UTC m=+0.147977745 container died a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_knuth, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:34:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-18b964d61b6ad5c96a9bec699779314f52c0de14c3700c84a117ffb3a072e553-merged.mount: Deactivated successfully.
Feb 28 04:34:52 np0005634017 podman[93677]: 2026-02-28 09:34:52.519648231 +0000 UTC m=+0.188262191 container remove a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:52 np0005634017 systemd[1]: libpod-conmon-a404a1fe8dfa4e44b3fbe889c0699c3bb47c31aa4bb3fd3e59cf4dc00065da7b.scope: Deactivated successfully.
Feb 28 04:34:52 np0005634017 podman[93717]: 2026-02-28 09:34:52.667336547 +0000 UTC m=+0.048033016 container create fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:52 np0005634017 systemd[1]: Started libpod-conmon-fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e.scope.
Feb 28 04:34:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1ea8659a12563ad6fd1c1d798fc995f2cbf9f7d1bdd5342a693e1cd5adc7aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:52 np0005634017 podman[93717]: 2026-02-28 09:34:52.639257915 +0000 UTC m=+0.019954374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1ea8659a12563ad6fd1c1d798fc995f2cbf9f7d1bdd5342a693e1cd5adc7aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1ea8659a12563ad6fd1c1d798fc995f2cbf9f7d1bdd5342a693e1cd5adc7aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1ea8659a12563ad6fd1c1d798fc995f2cbf9f7d1bdd5342a693e1cd5adc7aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1ea8659a12563ad6fd1c1d798fc995f2cbf9f7d1bdd5342a693e1cd5adc7aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:52 np0005634017 podman[93717]: 2026-02-28 09:34:52.752630003 +0000 UTC m=+0.133326462 container init fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 04:34:52 np0005634017 podman[93717]: 2026-02-28 09:34:52.761630277 +0000 UTC m=+0.142326736 container start fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:34:52 np0005634017 podman[93717]: 2026-02-28 09:34:52.766328849 +0000 UTC m=+0.147025318 container attach fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:53 np0005634017 ceph-mon[76304]: Saving service rgw.rgw spec with placement compute-0
Feb 28 04:34:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:53 np0005634017 beautiful_noether[93733]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:34:53 np0005634017 beautiful_noether[93733]: --> All data devices are unavailable
Feb 28 04:34:53 np0005634017 systemd[1]: libpod-fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e.scope: Deactivated successfully.
Feb 28 04:34:53 np0005634017 python3[93820]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:34:53 np0005634017 podman[93829]: 2026-02-28 09:34:53.249682852 +0000 UTC m=+0.028642409 container died fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 28 04:34:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ab1ea8659a12563ad6fd1c1d798fc995f2cbf9f7d1bdd5342a693e1cd5adc7aa-merged.mount: Deactivated successfully.
Feb 28 04:34:53 np0005634017 podman[93829]: 2026-02-28 09:34:53.295148475 +0000 UTC m=+0.074107982 container remove fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:53 np0005634017 systemd[1]: libpod-conmon-fd34621a92afc0a335a28164884e5147208678d6fc174102aafbdc85dd5c894e.scope: Deactivated successfully.
Feb 28 04:34:53 np0005634017 python3[93961]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271292.9598522-37590-217337302135822/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.701504507 +0000 UTC m=+0.034349520 container create 98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:34:53 np0005634017 systemd[1]: Started libpod-conmon-98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a.scope.
Feb 28 04:34:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.686120153 +0000 UTC m=+0.018965186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.790836886 +0000 UTC m=+0.123681939 container init 98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.797126154 +0000 UTC m=+0.129971177 container start 98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.800154379 +0000 UTC m=+0.132999452 container attach 98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:53 np0005634017 musing_torvalds[94016]: 167 167
Feb 28 04:34:53 np0005634017 systemd[1]: libpod-98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a.scope: Deactivated successfully.
Feb 28 04:34:53 np0005634017 conmon[94016]: conmon 98f94cd74dfc7e664823 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a.scope/container/memory.events
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.80264953 +0000 UTC m=+0.135494553 container died 98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 04:34:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7c9b29794e82938756cb69a0424cd9c3dc640cf2ae46cc70aa42aa166182b11a-merged.mount: Deactivated successfully.
Feb 28 04:34:53 np0005634017 podman[94000]: 2026-02-28 09:34:53.841997429 +0000 UTC m=+0.174842452 container remove 98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 04:34:53 np0005634017 systemd[1]: libpod-conmon-98f94cd74dfc7e664823a87523b5483020a00905d59ac2728c89fc1e034c092a.scope: Deactivated successfully.
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:54.019918018 +0000 UTC m=+0.061757523 container create ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_johnson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:34:54 np0005634017 systemd[1]: Started libpod-conmon-ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766.scope.
Feb 28 04:34:54 np0005634017 python3[94061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:53.997213928 +0000 UTC m=+0.039053413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291f6161ecc127619666efbbd52ff900647f5e8d26e6b928b083e78c8e7a7cb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291f6161ecc127619666efbbd52ff900647f5e8d26e6b928b083e78c8e7a7cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291f6161ecc127619666efbbd52ff900647f5e8d26e6b928b083e78c8e7a7cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291f6161ecc127619666efbbd52ff900647f5e8d26e6b928b083e78c8e7a7cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:54.124665073 +0000 UTC m=+0.166504568 container init ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:54.133161382 +0000 UTC m=+0.175000847 container start ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_johnson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:54.136531297 +0000 UTC m=+0.178370792 container attach ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.163710364 +0000 UTC m=+0.060845817 container create 76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3 (image=quay.io/ceph/ceph:v20, name=thirsty_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:54 np0005634017 systemd[1]: Started libpod-conmon-76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3.scope.
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.137246148 +0000 UTC m=+0.034381651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8040e07fa525273a450b32270d219feb454e207ef18977435df1ba50e613c984/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8040e07fa525273a450b32270d219feb454e207ef18977435df1ba50e613c984/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8040e07fa525273a450b32270d219feb454e207ef18977435df1ba50e613c984/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.259167977 +0000 UTC m=+0.156303470 container init 76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3 (image=quay.io/ceph/ceph:v20, name=thirsty_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.267768949 +0000 UTC m=+0.164904402 container start 76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3 (image=quay.io/ceph/ceph:v20, name=thirsty_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.271755692 +0000 UTC m=+0.168891215 container attach 76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3 (image=quay.io/ceph/ceph:v20, name=thirsty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]: {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:    "0": [
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:        {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "devices": [
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "/dev/loop3"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            ],
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_name": "ceph_lv0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_size": "21470642176",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "name": "ceph_lv0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "tags": {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.crush_device_class": "",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.encrypted": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osd_id": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.type": "block",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.vdo": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.with_tpm": "0"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            },
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "type": "block",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "vg_name": "ceph_vg0"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:        }
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:    ],
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:    "1": [
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:        {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "devices": [
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "/dev/loop4"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            ],
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_name": "ceph_lv1",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_size": "21470642176",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "name": "ceph_lv1",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "tags": {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.crush_device_class": "",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.encrypted": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osd_id": "1",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.type": "block",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.vdo": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.with_tpm": "0"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            },
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "type": "block",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "vg_name": "ceph_vg1"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:        }
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:    ],
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:    "2": [
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:        {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "devices": [
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "/dev/loop5"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            ],
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_name": "ceph_lv2",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_size": "21470642176",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "name": "ceph_lv2",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "tags": {
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.cluster_name": "ceph",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.crush_device_class": "",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.encrypted": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.objectstore": "bluestore",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osd_id": "2",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.type": "block",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.vdo": "0",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:                "ceph.with_tpm": "0"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            },
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "type": "block",
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:            "vg_name": "ceph_vg2"
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:        }
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]:    ]
Feb 28 04:34:54 np0005634017 youthful_johnson[94082]: }
Feb 28 04:34:54 np0005634017 systemd[1]: libpod-ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766.scope: Deactivated successfully.
Feb 28 04:34:54 np0005634017 conmon[94082]: conmon ef7e25fe9d1ed639e043 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766.scope/container/memory.events
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:54.437852537 +0000 UTC m=+0.479692042 container died ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_johnson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:34:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-291f6161ecc127619666efbbd52ff900647f5e8d26e6b928b083e78c8e7a7cb6-merged.mount: Deactivated successfully.
Feb 28 04:34:54 np0005634017 podman[94066]: 2026-02-28 09:34:54.486056996 +0000 UTC m=+0.527896481 container remove ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_johnson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:54 np0005634017 systemd[1]: libpod-conmon-ef7e25fe9d1ed639e043f5ff3cee184319f5eeff3d89802a6d188822514ac766.scope: Deactivated successfully.
Feb 28 04:34:54 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:34:54 np0005634017 ceph-mgr[76610]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 28 04:34:54 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0[76300]: 2026-02-28T09:34:54.752+0000 7f7f00111640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e2 new map
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-02-28T09:34:54:753903+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-28T09:34:54.753617+0000#012modified#0112026-02-28T09:34:54.753617+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb 28 04:34:54 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 28 04:34:54 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 28 04:34:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:54 np0005634017 ceph-mgr[76610]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 28 04:34:54 np0005634017 systemd[1]: libpod-76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3.scope: Deactivated successfully.
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.796445052 +0000 UTC m=+0.693580495 container died 76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3 (image=quay.io/ceph/ceph:v20, name=thirsty_elbakyan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8040e07fa525273a450b32270d219feb454e207ef18977435df1ba50e613c984-merged.mount: Deactivated successfully.
Feb 28 04:34:54 np0005634017 podman[94085]: 2026-02-28 09:34:54.850590689 +0000 UTC m=+0.747726112 container remove 76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3 (image=quay.io/ceph/ceph:v20, name=thirsty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:54 np0005634017 systemd[1]: libpod-conmon-76195836d344b71f8af41ca15ecee71a697f96bd8cffa02c15ebdf286aae4ac3.scope: Deactivated successfully.
Feb 28 04:34:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:54 np0005634017 podman[94220]: 2026-02-28 09:34:54.990885996 +0000 UTC m=+0.035601765 container create 5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:34:55 np0005634017 systemd[1]: Started libpod-conmon-5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1.scope.
Feb 28 04:34:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:55 np0005634017 podman[94220]: 2026-02-28 09:34:55.067310132 +0000 UTC m=+0.112025931 container init 5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:34:55 np0005634017 podman[94220]: 2026-02-28 09:34:54.973930048 +0000 UTC m=+0.018645857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:55 np0005634017 podman[94220]: 2026-02-28 09:34:55.072094257 +0000 UTC m=+0.116810026 container start 5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:55 np0005634017 podman[94220]: 2026-02-28 09:34:55.074882966 +0000 UTC m=+0.119598735 container attach 5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:34:55 np0005634017 vigilant_morse[94259]: 167 167
Feb 28 04:34:55 np0005634017 systemd[1]: libpod-5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1.scope: Deactivated successfully.
Feb 28 04:34:55 np0005634017 podman[94220]: 2026-02-28 09:34:55.078418465 +0000 UTC m=+0.123134244 container died 5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-80c8317aa38279d6a70f43a405f14f245b73c5b43ad405adbf722512a6b5d649-merged.mount: Deactivated successfully.
Feb 28 04:34:55 np0005634017 podman[94220]: 2026-02-28 09:34:55.111955611 +0000 UTC m=+0.156671380 container remove 5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:34:55 np0005634017 systemd[1]: libpod-conmon-5e9b8db3e78e4d0620852e95169826cf8d6b6c79403bdccd1d4f666662eee2e1.scope: Deactivated successfully.
Feb 28 04:34:55 np0005634017 python3[94255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.23101791 +0000 UTC m=+0.049588410 container create 5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b (image=quay.io/ceph/ceph:v20, name=amazing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 04:34:55 np0005634017 podman[94290]: 2026-02-28 09:34:55.259011069 +0000 UTC m=+0.052245485 container create cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:34:55 np0005634017 systemd[1]: Started libpod-conmon-5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b.scope.
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.2108262 +0000 UTC m=+0.029396860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:55 np0005634017 systemd[1]: Started libpod-conmon-cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7.scope.
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe48ebcc023f7f86c45abcda8473629b391797d9226931bf8a88dd3a2483407d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe48ebcc023f7f86c45abcda8473629b391797d9226931bf8a88dd3a2483407d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe48ebcc023f7f86c45abcda8473629b391797d9226931bf8a88dd3a2483407d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 podman[94290]: 2026-02-28 09:34:55.231690449 +0000 UTC m=+0.024924875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.32746597 +0000 UTC m=+0.146036540 container init 5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b (image=quay.io/ceph/ceph:v20, name=amazing_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734086d1913c2f133f1229b6d4705964be4d45bf3e4bc94587417163de5f923e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734086d1913c2f133f1229b6d4705964be4d45bf3e4bc94587417163de5f923e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734086d1913c2f133f1229b6d4705964be4d45bf3e4bc94587417163de5f923e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734086d1913c2f133f1229b6d4705964be4d45bf3e4bc94587417163de5f923e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.341095485 +0000 UTC m=+0.159665985 container start 5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b (image=quay.io/ceph/ceph:v20, name=amazing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.345468068 +0000 UTC m=+0.164038568 container attach 5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b (image=quay.io/ceph/ceph:v20, name=amazing_ishizaka, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb 28 04:34:55 np0005634017 podman[94290]: 2026-02-28 09:34:55.349922443 +0000 UTC m=+0.143156929 container init cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:34:55 np0005634017 podman[94290]: 2026-02-28 09:34:55.358833975 +0000 UTC m=+0.152068391 container start cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 04:34:55 np0005634017 podman[94290]: 2026-02-28 09:34:55.362593891 +0000 UTC m=+0.155828387 container attach cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 04:34:55 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 04:34:55 np0005634017 ceph-mgr[76610]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 28 04:34:55 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:55 np0005634017 amazing_ishizaka[94312]: Scheduled mds.cephfs update...
Feb 28 04:34:55 np0005634017 systemd[1]: libpod-5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b.scope: Deactivated successfully.
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.818361237 +0000 UTC m=+0.636931737 container died 5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b (image=quay.io/ceph/ceph:v20, name=amazing_ishizaka, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:34:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fe48ebcc023f7f86c45abcda8473629b391797d9226931bf8a88dd3a2483407d-merged.mount: Deactivated successfully.
Feb 28 04:34:55 np0005634017 podman[94279]: 2026-02-28 09:34:55.854885797 +0000 UTC m=+0.673456307 container remove 5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b (image=quay.io/ceph/ceph:v20, name=amazing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:34:55 np0005634017 systemd[1]: libpod-conmon-5fe64e8c3d77938c500ce07b8c9dec130e8317bd5145af56e09718e04f5e0f4b.scope: Deactivated successfully.
Feb 28 04:34:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:34:55 np0005634017 lvm[94428]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:34:55 np0005634017 lvm[94428]: VG ceph_vg0 finished
Feb 28 04:34:55 np0005634017 lvm[94429]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:34:55 np0005634017 lvm[94429]: VG ceph_vg1 finished
Feb 28 04:34:56 np0005634017 lvm[94431]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:34:56 np0005634017 lvm[94431]: VG ceph_vg2 finished
Feb 28 04:34:56 np0005634017 suspicious_allen[94317]: {}
Feb 28 04:34:56 np0005634017 systemd[1]: libpod-cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7.scope: Deactivated successfully.
Feb 28 04:34:56 np0005634017 systemd[1]: libpod-cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7.scope: Consumed 1.201s CPU time.
Feb 28 04:34:56 np0005634017 conmon[94317]: conmon cc45e7731718e0550396 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7.scope/container/memory.events
Feb 28 04:34:56 np0005634017 podman[94290]: 2026-02-28 09:34:56.147245393 +0000 UTC m=+0.940479799 container died cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:56 np0005634017 ceph-mon[76304]: Saving service mds.cephfs spec with placement compute-0
Feb 28 04:34:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-734086d1913c2f133f1229b6d4705964be4d45bf3e4bc94587417163de5f923e-merged.mount: Deactivated successfully.
Feb 28 04:34:56 np0005634017 podman[94290]: 2026-02-28 09:34:56.198625603 +0000 UTC m=+0.991860029 container remove cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:34:56 np0005634017 systemd[1]: libpod-conmon-cc45e7731718e05503962dea6273897d06bff6bce17247ccf13e3f7f5fa6cba7.scope: Deactivated successfully.
Feb 28 04:34:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:56 np0005634017 podman[94642]: 2026-02-28 09:34:56.908955388 +0000 UTC m=+0.061908097 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:56 np0005634017 python3[94627]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 28 04:34:57 np0005634017 podman[94642]: 2026-02-28 09:34:57.020830944 +0000 UTC m=+0.173783663 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: Saving service mds.cephfs spec with placement compute-0
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:57 np0005634017 python3[94772]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271296.6435483-37638-186793447481436/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=f947f110f3ed509cea5c7f15cc4d7baedfba3a80 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:34:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:34:57 np0005634017 python3[94932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:57 np0005634017 podman[94959]: 2026-02-28 09:34:57.875997875 +0000 UTC m=+0.052016668 container create d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5 (image=quay.io/ceph/ceph:v20, name=wonderful_archimedes, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:34:57 np0005634017 systemd[1]: Started libpod-conmon-d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5.scope.
Feb 28 04:34:57 np0005634017 podman[94959]: 2026-02-28 09:34:57.850703262 +0000 UTC m=+0.026722095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf333f1009bc4cce1c2bc178225de2092d0a1de26ebc64e29782307cd95d323/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf333f1009bc4cce1c2bc178225de2092d0a1de26ebc64e29782307cd95d323/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:57 np0005634017 podman[94959]: 2026-02-28 09:34:57.985245247 +0000 UTC m=+0.161264030 container init d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5 (image=quay.io/ceph/ceph:v20, name=wonderful_archimedes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:34:57 np0005634017 podman[94959]: 2026-02-28 09:34:57.993623323 +0000 UTC m=+0.169642116 container start d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5 (image=quay.io/ceph/ceph:v20, name=wonderful_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:57 np0005634017 podman[94959]: 2026-02-28 09:34:57.997878633 +0000 UTC m=+0.173897416 container attach d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5 (image=quay.io/ceph/ceph:v20, name=wonderful_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.059752208 +0000 UTC m=+0.058212253 container create cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:34:58 np0005634017 systemd[1]: Started libpod-conmon-cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3.scope.
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.035610807 +0000 UTC m=+0.034070912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.149438348 +0000 UTC m=+0.147898453 container init cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.163205367 +0000 UTC m=+0.161665392 container start cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 04:34:58 np0005634017 exciting_hellman[95008]: 167 167
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.166897401 +0000 UTC m=+0.165357456 container attach cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:58 np0005634017 systemd[1]: libpod-cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3.scope: Deactivated successfully.
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.168600109 +0000 UTC m=+0.167060134 container died cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-12240bca9914c69ee3a4710be152e430e58170a292a336d638077ef2693fb6c4-merged.mount: Deactivated successfully.
Feb 28 04:34:58 np0005634017 podman[94990]: 2026-02-28 09:34:58.212773975 +0000 UTC m=+0.211233980 container remove cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:34:58 np0005634017 systemd[1]: libpod-conmon-cecdd28788280701e956565436f038fe0aaa09d0d1409ecca7e0397f839f0cd3.scope: Deactivated successfully.
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.346531428 +0000 UTC m=+0.034689630 container create 16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_haslett, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:58 np0005634017 systemd[1]: Started libpod-conmon-16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d.scope.
Feb 28 04:34:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f532a95cc21a5e2cd4c07e7cc30b0990195a8b84637006d76cf279afee1523c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f532a95cc21a5e2cd4c07e7cc30b0990195a8b84637006d76cf279afee1523c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f532a95cc21a5e2cd4c07e7cc30b0990195a8b84637006d76cf279afee1523c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f532a95cc21a5e2cd4c07e7cc30b0990195a8b84637006d76cf279afee1523c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f532a95cc21a5e2cd4c07e7cc30b0990195a8b84637006d76cf279afee1523c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.423933091 +0000 UTC m=+0.112091343 container init 16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_haslett, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.428811258 +0000 UTC m=+0.116969490 container start 16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.330838215 +0000 UTC m=+0.018996437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.432295837 +0000 UTC m=+0.120454079 container attach 16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2708852367' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2708852367' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:34:58 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2708852367' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 28 04:34:58 np0005634017 systemd[1]: libpod-d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5.scope: Deactivated successfully.
Feb 28 04:34:58 np0005634017 podman[95075]: 2026-02-28 09:34:58.605903364 +0000 UTC m=+0.024569364 container died d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5 (image=quay.io/ceph/ceph:v20, name=wonderful_archimedes, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bbf333f1009bc4cce1c2bc178225de2092d0a1de26ebc64e29782307cd95d323-merged.mount: Deactivated successfully.
Feb 28 04:34:58 np0005634017 podman[95075]: 2026-02-28 09:34:58.646656103 +0000 UTC m=+0.065322043 container remove d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5 (image=quay.io/ceph/ceph:v20, name=wonderful_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:58 np0005634017 systemd[1]: libpod-conmon-d506d23d78ed44e295a826986fc0774a35a2eb1bdddae376d8023e3702ec0ef5.scope: Deactivated successfully.
Feb 28 04:34:58 np0005634017 relaxed_haslett[95068]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:34:58 np0005634017 relaxed_haslett[95068]: --> All data devices are unavailable
Feb 28 04:34:58 np0005634017 systemd[1]: libpod-16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d.scope: Deactivated successfully.
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.891381546 +0000 UTC m=+0.579539748 container died 16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:34:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f532a95cc21a5e2cd4c07e7cc30b0990195a8b84637006d76cf279afee1523c1-merged.mount: Deactivated successfully.
Feb 28 04:34:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:34:58 np0005634017 podman[95052]: 2026-02-28 09:34:58.939297538 +0000 UTC m=+0.627455740 container remove 16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:58 np0005634017 systemd[1]: libpod-conmon-16f22e728bae66397a08c5fde6b4fb81a44dc18ba0372df22064ead246063c6d.scope: Deactivated successfully.
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.369461161 +0000 UTC m=+0.048109718 container create ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 04:34:59 np0005634017 python3[95191]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:34:59 np0005634017 systemd[1]: Started libpod-conmon-ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f.scope.
Feb 28 04:34:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.350129986 +0000 UTC m=+0.028778553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.44739981 +0000 UTC m=+0.126048397 container init ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.453589694 +0000 UTC m=+0.132238251 container start ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 04:34:59 np0005634017 podman[95222]: 2026-02-28 09:34:59.455651342 +0000 UTC m=+0.053232522 container create 7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be (image=quay.io/ceph/ceph:v20, name=dazzling_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:34:59 np0005634017 determined_banzai[95231]: 167 167
Feb 28 04:34:59 np0005634017 systemd[1]: libpod-ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f.scope: Deactivated successfully.
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.462124365 +0000 UTC m=+0.140772932 container attach ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_banzai, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.464922904 +0000 UTC m=+0.143571511 container died ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 28 04:34:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2be6f5d1932004c439c7bd9552c5bdd80f1a08a619b5c0cf442dbd33c5c36b3d-merged.mount: Deactivated successfully.
Feb 28 04:34:59 np0005634017 systemd[1]: Started libpod-conmon-7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be.scope.
Feb 28 04:34:59 np0005634017 podman[95205]: 2026-02-28 09:34:59.511313192 +0000 UTC m=+0.189961739 container remove ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:59 np0005634017 systemd[1]: libpod-conmon-ae4cc148b75a733e6caa12119cac504994615e68aa39db275262eb5bd61dad6f.scope: Deactivated successfully.
Feb 28 04:34:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:59 np0005634017 podman[95222]: 2026-02-28 09:34:59.428155157 +0000 UTC m=+0.025736347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:34:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4b14135d1a954c9c3c3e277c899ecb0ec6dfaf0c52c7241e6fa09fa169bdd2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4b14135d1a954c9c3c3e277c899ecb0ec6dfaf0c52c7241e6fa09fa169bdd2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:59 np0005634017 podman[95222]: 2026-02-28 09:34:59.548535472 +0000 UTC m=+0.146116752 container init 7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be (image=quay.io/ceph/ceph:v20, name=dazzling_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:34:59 np0005634017 podman[95222]: 2026-02-28 09:34:59.555824318 +0000 UTC m=+0.153405518 container start 7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be (image=quay.io/ceph/ceph:v20, name=dazzling_roentgen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 04:34:59 np0005634017 podman[95222]: 2026-02-28 09:34:59.559695487 +0000 UTC m=+0.157276777 container attach 7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be (image=quay.io/ceph/ceph:v20, name=dazzling_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:34:59 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/2708852367' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 28 04:34:59 np0005634017 podman[95266]: 2026-02-28 09:34:59.636512654 +0000 UTC m=+0.040115723 container create eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:34:59 np0005634017 systemd[1]: Started libpod-conmon-eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2.scope.
Feb 28 04:34:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:34:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0eecb1d427d7509a0a3c7e644d8587b50e562916ea56f968b1eb1f327c90602/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0eecb1d427d7509a0a3c7e644d8587b50e562916ea56f968b1eb1f327c90602/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0eecb1d427d7509a0a3c7e644d8587b50e562916ea56f968b1eb1f327c90602/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0eecb1d427d7509a0a3c7e644d8587b50e562916ea56f968b1eb1f327c90602/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:34:59 np0005634017 podman[95266]: 2026-02-28 09:34:59.722187181 +0000 UTC m=+0.125790280 container init eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:34:59 np0005634017 podman[95266]: 2026-02-28 09:34:59.618502826 +0000 UTC m=+0.022105955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:34:59 np0005634017 podman[95266]: 2026-02-28 09:34:59.731579406 +0000 UTC m=+0.135182495 container start eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:34:59 np0005634017 podman[95266]: 2026-02-28 09:34:59.73563322 +0000 UTC m=+0.139236289 container attach eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]: {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:    "0": [
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:        {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "devices": [
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "/dev/loop3"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            ],
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_name": "ceph_lv0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_size": "21470642176",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "name": "ceph_lv0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "tags": {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.crush_device_class": "",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.encrypted": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osd_id": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.type": "block",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.vdo": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.with_tpm": "0"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            },
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "type": "block",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "vg_name": "ceph_vg0"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:        }
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:    ],
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:    "1": [
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:        {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "devices": [
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "/dev/loop4"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            ],
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_name": "ceph_lv1",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_size": "21470642176",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "name": "ceph_lv1",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "tags": {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.crush_device_class": "",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.encrypted": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osd_id": "1",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.type": "block",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.vdo": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.with_tpm": "0"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            },
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "type": "block",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "vg_name": "ceph_vg1"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:        }
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:    ],
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:    "2": [
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:        {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "devices": [
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "/dev/loop5"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            ],
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_name": "ceph_lv2",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_size": "21470642176",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "name": "ceph_lv2",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "tags": {
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.crush_device_class": "",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.encrypted": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osd_id": "2",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.type": "block",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.vdo": "0",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:                "ceph.with_tpm": "0"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            },
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "type": "block",
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:            "vg_name": "ceph_vg2"
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:        }
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]:    ]
Feb 28 04:35:00 np0005634017 romantic_taussig[95292]: }
Feb 28 04:35:00 np0005634017 systemd[1]: libpod-eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2.scope: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95266]: 2026-02-28 09:35:00.045523641 +0000 UTC m=+0.449126700 container died eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 28 04:35:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a0eecb1d427d7509a0a3c7e644d8587b50e562916ea56f968b1eb1f327c90602-merged.mount: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95266]: 2026-02-28 09:35:00.090131468 +0000 UTC m=+0.493734527 container remove eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:00 np0005634017 systemd[1]: libpod-conmon-eb79012f5b7ac38d4b35f0eab3dfc13afb4aee50ecc3e16b543e2c4309fac5a2.scope: Deactivated successfully.
Feb 28 04:35:00 np0005634017 dazzling_roentgen[95255]: 
Feb 28 04:35:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 28 04:35:00 np0005634017 dazzling_roentgen[95255]: {"fsid":"8f528268-ea2d-5d7b-af45-49b405fed6de","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":29,"num_osds":3,"num_up_osds":3,"osd_up_since":1772271274,"num_in_osds":3,"osd_in_since":1772271249,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83820544,"bytes_avail":64328105984,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-28T09:34:54:753903+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-28T09:34:30.913210+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 28 04:35:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2333100628' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 28 04:35:00 np0005634017 systemd[1]: libpod-7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be.scope: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95222]: 2026-02-28 09:35:00.131496875 +0000 UTC m=+0.729078055 container died 7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be (image=quay.io/ceph/ceph:v20, name=dazzling_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 28 04:35:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dd4b14135d1a954c9c3c3e277c899ecb0ec6dfaf0c52c7241e6fa09fa169bdd2-merged.mount: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95222]: 2026-02-28 09:35:00.169022824 +0000 UTC m=+0.766604004 container remove 7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be (image=quay.io/ceph/ceph:v20, name=dazzling_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:35:00 np0005634017 systemd[1]: libpod-conmon-7a5ff0955c304b24ad4dc6b6914a0fd0d00d995d18fd0130087849ac72eff2be.scope: Deactivated successfully.
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.498193358 +0000 UTC m=+0.034825953 container create 70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:00 np0005634017 python3[95413]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:00 np0005634017 systemd[1]: Started libpod-conmon-70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af.scope.
Feb 28 04:35:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:00 np0005634017 podman[95444]: 2026-02-28 09:35:00.576660412 +0000 UTC m=+0.047050428 container create 34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130 (image=quay.io/ceph/ceph:v20, name=friendly_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.48228678 +0000 UTC m=+0.018919415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.584445031 +0000 UTC m=+0.121077646 container init 70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_germain, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.590656816 +0000 UTC m=+0.127289421 container start 70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 04:35:00 np0005634017 gifted_germain[95450]: 167 167
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.594611708 +0000 UTC m=+0.131244313 container attach 70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:00 np0005634017 conmon[95450]: conmon 70bf0333c3a7e34ec9a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af.scope/container/memory.events
Feb 28 04:35:00 np0005634017 systemd[1]: libpod-70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af.scope: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.595787791 +0000 UTC m=+0.132420406 container died 70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb 28 04:35:00 np0005634017 systemd[1]: Started libpod-conmon-34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130.scope.
Feb 28 04:35:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-59bc06c43c35e0628c6d72341afe22ef7bb63f174a89ccfbd9dbb85f3f1a0683-merged.mount: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95427]: 2026-02-28 09:35:00.641615164 +0000 UTC m=+0.178247789 container remove 70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_germain, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52df8a30f0dae8eb9860475c5e5b4e3cf4ecb898efc72166af2af9143ac72bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52df8a30f0dae8eb9860475c5e5b4e3cf4ecb898efc72166af2af9143ac72bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:00 np0005634017 podman[95444]: 2026-02-28 09:35:00.55637748 +0000 UTC m=+0.026767546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:00 np0005634017 systemd[1]: libpod-conmon-70bf0333c3a7e34ec9a5a27a54e30c01736ae3a66ee8ec8026ab9e20621e68af.scope: Deactivated successfully.
Feb 28 04:35:00 np0005634017 podman[95444]: 2026-02-28 09:35:00.668350058 +0000 UTC m=+0.138740104 container init 34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130 (image=quay.io/ceph/ceph:v20, name=friendly_knuth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:00 np0005634017 podman[95444]: 2026-02-28 09:35:00.675752357 +0000 UTC m=+0.146142373 container start 34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130 (image=quay.io/ceph/ceph:v20, name=friendly_knuth, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:00 np0005634017 podman[95444]: 2026-02-28 09:35:00.6787164 +0000 UTC m=+0.149106416 container attach 34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130 (image=quay.io/ceph/ceph:v20, name=friendly_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:35:00 np0005634017 podman[95488]: 2026-02-28 09:35:00.759627953 +0000 UTC m=+0.041899993 container create 7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:35:00 np0005634017 systemd[1]: Started libpod-conmon-7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632.scope.
Feb 28 04:35:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff952be01f0b66084c4f5af93c75d3c7cd61f909ca89189dbe993ca1a2f0621c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff952be01f0b66084c4f5af93c75d3c7cd61f909ca89189dbe993ca1a2f0621c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff952be01f0b66084c4f5af93c75d3c7cd61f909ca89189dbe993ca1a2f0621c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff952be01f0b66084c4f5af93c75d3c7cd61f909ca89189dbe993ca1a2f0621c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:00 np0005634017 podman[95488]: 2026-02-28 09:35:00.744696991 +0000 UTC m=+0.026969031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:00 np0005634017 podman[95488]: 2026-02-28 09:35:00.844395934 +0000 UTC m=+0.126667974 container init 7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:00 np0005634017 podman[95488]: 2026-02-28 09:35:00.850228618 +0000 UTC m=+0.132500658 container start 7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:35:00 np0005634017 podman[95488]: 2026-02-28 09:35:00.852977726 +0000 UTC m=+0.135249766 container attach 7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:35:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242693414' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:35:01 np0005634017 friendly_knuth[95473]: 
Feb 28 04:35:01 np0005634017 friendly_knuth[95473]: {"epoch":1,"fsid":"8f528268-ea2d-5d7b-af45-49b405fed6de","modified":"2026-02-28T09:33:06.729878Z","created":"2026-02-28T09:33:06.729878Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb 28 04:35:01 np0005634017 friendly_knuth[95473]: dumped monmap epoch 1
Feb 28 04:35:01 np0005634017 systemd[1]: libpod-34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130.scope: Deactivated successfully.
Feb 28 04:35:01 np0005634017 podman[95444]: 2026-02-28 09:35:01.233665494 +0000 UTC m=+0.704055510 container died 34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130 (image=quay.io/ceph/ceph:v20, name=friendly_knuth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:01 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b52df8a30f0dae8eb9860475c5e5b4e3cf4ecb898efc72166af2af9143ac72bc-merged.mount: Deactivated successfully.
Feb 28 04:35:01 np0005634017 podman[95444]: 2026-02-28 09:35:01.269045452 +0000 UTC m=+0.739435468 container remove 34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130 (image=quay.io/ceph/ceph:v20, name=friendly_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:01 np0005634017 systemd[1]: libpod-conmon-34ea6864a251dc8c70fcf793d8e369a528774ca10b8fc7f912731fe7aeb0b130.scope: Deactivated successfully.
Feb 28 04:35:01 np0005634017 lvm[95617]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:35:01 np0005634017 lvm[95617]: VG ceph_vg1 finished
Feb 28 04:35:01 np0005634017 lvm[95616]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:35:01 np0005634017 lvm[95616]: VG ceph_vg0 finished
Feb 28 04:35:01 np0005634017 lvm[95619]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:35:01 np0005634017 lvm[95619]: VG ceph_vg2 finished
Feb 28 04:35:01 np0005634017 clever_kepler[95523]: {}
Feb 28 04:35:01 np0005634017 systemd[1]: libpod-7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632.scope: Deactivated successfully.
Feb 28 04:35:01 np0005634017 systemd[1]: libpod-7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632.scope: Consumed 1.064s CPU time.
Feb 28 04:35:01 np0005634017 podman[95488]: 2026-02-28 09:35:01.53734546 +0000 UTC m=+0.819617500 container died 7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 04:35:01 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ff952be01f0b66084c4f5af93c75d3c7cd61f909ca89189dbe993ca1a2f0621c-merged.mount: Deactivated successfully.
Feb 28 04:35:01 np0005634017 podman[95488]: 2026-02-28 09:35:01.576470573 +0000 UTC m=+0.858742613 container remove 7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:01 np0005634017 systemd[1]: libpod-conmon-7c42319405af1c8341a11c1d492af24589b2a32525149cb63ed57c36a16a6632.scope: Deactivated successfully.
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:01 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 9d726367-64fe-48e4-b404-bb3c404ba329 (Updating rgw.rgw deployment (+1 -> 1))
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzepcj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzepcj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzepcj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:35:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:35:01 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.gzepcj on compute-0
Feb 28 04:35:01 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.gzepcj on compute-0
Feb 28 04:35:01 np0005634017 python3[95660]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:01 np0005634017 podman[95711]: 2026-02-28 09:35:01.843544556 +0000 UTC m=+0.043506968 container create 15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a (image=quay.io/ceph/ceph:v20, name=objective_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:01 np0005634017 systemd[1]: Started libpod-conmon-15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a.scope.
Feb 28 04:35:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38ed85b6658affcef710fc7ea47dddeac8e9e57883de445bc9b85a3607b52/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38ed85b6658affcef710fc7ea47dddeac8e9e57883de445bc9b85a3607b52/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:01 np0005634017 podman[95711]: 2026-02-28 09:35:01.825378054 +0000 UTC m=+0.025340486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:01 np0005634017 podman[95711]: 2026-02-28 09:35:01.928471312 +0000 UTC m=+0.128433744 container init 15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a (image=quay.io/ceph/ceph:v20, name=objective_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:35:01 np0005634017 podman[95711]: 2026-02-28 09:35:01.93478555 +0000 UTC m=+0.134747972 container start 15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a (image=quay.io/ceph/ceph:v20, name=objective_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:35:01 np0005634017 podman[95711]: 2026-02-28 09:35:01.938702231 +0000 UTC m=+0.138664663 container attach 15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a (image=quay.io/ceph/ceph:v20, name=objective_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.156182365 +0000 UTC m=+0.052658106 container create 2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:02 np0005634017 systemd[1]: Started libpod-conmon-2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75.scope.
Feb 28 04:35:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.225389227 +0000 UTC m=+0.121864988 container init 2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.13510393 +0000 UTC m=+0.031579691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.229892684 +0000 UTC m=+0.126368425 container start 2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:35:02 np0005634017 optimistic_hofstadter[95805]: 167 167
Feb 28 04:35:02 np0005634017 systemd[1]: libpod-2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75.scope: Deactivated successfully.
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.235180493 +0000 UTC m=+0.131656314 container attach 2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.235877193 +0000 UTC m=+0.132352944 container died 2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 04:35:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8a7dbae9ced2ab1993553f1bfce1437c001be530201dab13a1078cfd29b2caea-merged.mount: Deactivated successfully.
Feb 28 04:35:02 np0005634017 podman[95789]: 2026-02-28 09:35:02.275690616 +0000 UTC m=+0.172166367 container remove 2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:35:02 np0005634017 systemd[1]: libpod-conmon-2626e01a8e1ac585a2a169cad16ac9c81a059ce2a9b9355472d35a6f5bb57b75.scope: Deactivated successfully.
Feb 28 04:35:02 np0005634017 systemd[1]: Reloading.
Feb 28 04:35:02 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:35:02 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4259231877' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 28 04:35:02 np0005634017 objective_goldberg[95727]: [client.openstack]
Feb 28 04:35:02 np0005634017 objective_goldberg[95727]: #011key = AQA6tqJpAAAAABAAqIDMaEaSG5wP00CKwNH8cQ==
Feb 28 04:35:02 np0005634017 objective_goldberg[95727]: #011caps mgr = "allow *"
Feb 28 04:35:02 np0005634017 objective_goldberg[95727]: #011caps mon = "profile rbd"
Feb 28 04:35:02 np0005634017 objective_goldberg[95727]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb 28 04:35:02 np0005634017 podman[95711]: 2026-02-28 09:35:02.486916114 +0000 UTC m=+0.686878626 container died 15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a (image=quay.io/ceph/ceph:v20, name=objective_goldberg, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:02 np0005634017 systemd[1]: libpod-15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a.scope: Deactivated successfully.
Feb 28 04:35:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7cd38ed85b6658affcef710fc7ea47dddeac8e9e57883de445bc9b85a3607b52-merged.mount: Deactivated successfully.
Feb 28 04:35:02 np0005634017 podman[95711]: 2026-02-28 09:35:02.606115336 +0000 UTC m=+0.806077748 container remove 15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a (image=quay.io/ceph/ceph:v20, name=objective_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 04:35:02 np0005634017 systemd[1]: libpod-conmon-15f5a8d32382b585288a139d3fc3e5e7fc63c504912b7adbaa57ec597316d23a.scope: Deactivated successfully.
Feb 28 04:35:02 np0005634017 systemd[1]: Reloading.
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzepcj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzepcj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: Deploying daemon rgw.rgw.compute-0.gzepcj on compute-0
Feb 28 04:35:02 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/4259231877' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 28 04:35:02 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:35:02 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:35:02 np0005634017 systemd[1]: Starting Ceph rgw.rgw.compute-0.gzepcj for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:35:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:03 np0005634017 podman[95978]: 2026-02-28 09:35:03.134336846 +0000 UTC m=+0.046975726 container create f42b0e91439107aa94c48e0b7846014370ceba71ad1512882855f3e172e87703 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-rgw-rgw-compute-0-gzepcj, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f078c103dff09d9961278c32027d00b425a76ec6ab6e1929b16b117c3bedfab8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f078c103dff09d9961278c32027d00b425a76ec6ab6e1929b16b117c3bedfab8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f078c103dff09d9961278c32027d00b425a76ec6ab6e1929b16b117c3bedfab8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f078c103dff09d9961278c32027d00b425a76ec6ab6e1929b16b117c3bedfab8/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.gzepcj supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:03 np0005634017 podman[95978]: 2026-02-28 09:35:03.199308448 +0000 UTC m=+0.111947328 container init f42b0e91439107aa94c48e0b7846014370ceba71ad1512882855f3e172e87703 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-rgw-rgw-compute-0-gzepcj, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:03 np0005634017 podman[95978]: 2026-02-28 09:35:03.114657031 +0000 UTC m=+0.027295911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:03 np0005634017 podman[95978]: 2026-02-28 09:35:03.210499514 +0000 UTC m=+0.123138364 container start f42b0e91439107aa94c48e0b7846014370ceba71ad1512882855f3e172e87703 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-rgw-rgw-compute-0-gzepcj, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:03 np0005634017 bash[95978]: f42b0e91439107aa94c48e0b7846014370ceba71ad1512882855f3e172e87703
Feb 28 04:35:03 np0005634017 systemd[1]: Started Ceph rgw.rgw.compute-0.gzepcj for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:35:03 np0005634017 radosgw[95998]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:35:03 np0005634017 radosgw[95998]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb 28 04:35:03 np0005634017 radosgw[95998]: framework: beast
Feb 28 04:35:03 np0005634017 radosgw[95998]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb 28 04:35:03 np0005634017 radosgw[95998]: init_numa not setting numa affinity
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 9d726367-64fe-48e4-b404-bb3c404ba329 (Updating rgw.rgw deployment (+1 -> 1))
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 9d726367-64fe-48e4-b404-bb3c404ba329 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev f3f5eedd-fbe9-452d-907a-62ce5ceeb644 (Updating mds.cephfs deployment (+1 -> 1))
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eyikfq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eyikfq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eyikfq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:35:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.eyikfq on compute-0
Feb 28 04:35:03 np0005634017 ceph-mgr[76610]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.eyikfq on compute-0
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.81910672 +0000 UTC m=+0.043783436 container create 879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_sutherland, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:03 np0005634017 systemd[1]: Started libpod-conmon-879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf.scope.
Feb 28 04:35:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.797161051 +0000 UTC m=+0.021837857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.905239239 +0000 UTC m=+0.129915975 container init 879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_sutherland, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.914680626 +0000 UTC m=+0.139357382 container start 879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_sutherland, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:03 np0005634017 youthful_sutherland[96253]: 167 167
Feb 28 04:35:03 np0005634017 systemd[1]: libpod-879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf.scope: Deactivated successfully.
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.919180153 +0000 UTC m=+0.143856909 container attach 879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.92014905 +0000 UTC m=+0.144825806 container died 879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:35:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3a8a999b095ce5a9b2fd93b27959e5ce8daad6402cd4bb2670e4eed9335b0aed-merged.mount: Deactivated successfully.
Feb 28 04:35:03 np0005634017 podman[96193]: 2026-02-28 09:35:03.964763748 +0000 UTC m=+0.189440464 container remove 879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:03 np0005634017 systemd[1]: libpod-conmon-879a70855fc9dcda55b85a8023d4f0f2e6987df3e40dcc077f9ba893666c6bbf.scope: Deactivated successfully.
Feb 28 04:35:04 np0005634017 systemd[1]: Reloading.
Feb 28 04:35:04 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:35:04 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:35:04 np0005634017 ansible-async_wrapper.py[96303]: Invoked with j124735093621 30 /home/zuul/.ansible/tmp/ansible-tmp-1772271303.6077943-37710-126843928115372/AnsiballZ_command.py _
Feb 28 04:35:04 np0005634017 ansible-async_wrapper.py[96349]: Starting module and watcher
Feb 28 04:35:04 np0005634017 ansible-async_wrapper.py[96349]: Start watching 96350 (30)
Feb 28 04:35:04 np0005634017 ansible-async_wrapper.py[96350]: Start module (96350)
Feb 28 04:35:04 np0005634017 ansible-async_wrapper.py[96303]: Return async_wrapper task started.
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb 28 04:35:04 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:04 np0005634017 systemd[1]: Reloading.
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3229498883' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: Saving service rgw.rgw spec with placement compute-0
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eyikfq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eyikfq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: Deploying daemon mds.cephfs.compute-0.eyikfq on compute-0
Feb 28 04:35:04 np0005634017 python3[96351]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:04 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:35:04 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:35:04 np0005634017 podman[96358]: 2026-02-28 09:35:04.397088713 +0000 UTC m=+0.066554598 container create e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e (image=quay.io/ceph/ceph:v20, name=elastic_sanderson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:35:04 np0005634017 podman[96358]: 2026-02-28 09:35:04.377453849 +0000 UTC m=+0.046919764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:04 np0005634017 systemd[1]: Started libpod-conmon-e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e.scope.
Feb 28 04:35:04 np0005634017 systemd[1]: Starting Ceph mds.cephfs.compute-0.eyikfq for 8f528268-ea2d-5d7b-af45-49b405fed6de...
Feb 28 04:35:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406b94cf8ddc306e494c36739ce0b4a5e2f0e565799a7c9cf470ecb658c65ff8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406b94cf8ddc306e494c36739ce0b4a5e2f0e565799a7c9cf470ecb658c65ff8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:04 np0005634017 podman[96358]: 2026-02-28 09:35:04.579764076 +0000 UTC m=+0.249229961 container init e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e (image=quay.io/ceph/ceph:v20, name=elastic_sanderson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:04 np0005634017 podman[96358]: 2026-02-28 09:35:04.589568822 +0000 UTC m=+0.259034717 container start e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e (image=quay.io/ceph/ceph:v20, name=elastic_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:35:04 np0005634017 podman[96358]: 2026-02-28 09:35:04.595704765 +0000 UTC m=+0.265170680 container attach e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e (image=quay.io/ceph/ceph:v20, name=elastic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:04 np0005634017 podman[96487]: 2026-02-28 09:35:04.802029195 +0000 UTC m=+0.067693490 container create 9a26bc9997b0a24a5fe8bb542c045cc76294ce90bd172b8f26f907fde62c3908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mds-cephfs-compute-0-eyikfq, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:04 np0005634017 podman[96487]: 2026-02-28 09:35:04.75966635 +0000 UTC m=+0.025330675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde6a5d20f2d3d87521ab2a2252367300bcd612f18caaab868f884557e69bedd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde6a5d20f2d3d87521ab2a2252367300bcd612f18caaab868f884557e69bedd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde6a5d20f2d3d87521ab2a2252367300bcd612f18caaab868f884557e69bedd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde6a5d20f2d3d87521ab2a2252367300bcd612f18caaab868f884557e69bedd/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.eyikfq supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:04 np0005634017 podman[96487]: 2026-02-28 09:35:04.900691868 +0000 UTC m=+0.166356193 container init 9a26bc9997b0a24a5fe8bb542c045cc76294ce90bd172b8f26f907fde62c3908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mds-cephfs-compute-0-eyikfq, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:04 np0005634017 podman[96487]: 2026-02-28 09:35:04.904902577 +0000 UTC m=+0.170566912 container start 9a26bc9997b0a24a5fe8bb542c045cc76294ce90bd172b8f26f907fde62c3908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mds-cephfs-compute-0-eyikfq, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:04 np0005634017 bash[96487]: 9a26bc9997b0a24a5fe8bb542c045cc76294ce90bd172b8f26f907fde62c3908
Feb 28 04:35:04 np0005634017 systemd[1]: Started Ceph mds.cephfs.compute-0.eyikfq for 8f528268-ea2d-5d7b-af45-49b405fed6de.
Feb 28 04:35:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v69: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:04 np0005634017 ceph-mds[96507]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:35:04 np0005634017 ceph-mds[96507]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb 28 04:35:04 np0005634017 ceph-mds[96507]: main not setting numa affinity
Feb 28 04:35:04 np0005634017 ceph-mds[96507]: pidfile_write: ignore empty --pid-file
Feb 28 04:35:04 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mds-cephfs-compute-0-eyikfq[96503]: starting mds.cephfs.compute-0.eyikfq at 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 28 04:35:04 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq Updating MDS map to version 2 from mon.0
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:04 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev f3f5eedd-fbe9-452d-907a-62ce5ceeb644 (Updating mds.cephfs deployment (+1 -> 1))
Feb 28 04:35:04 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event f3f5eedd-fbe9-452d-907a-62ce5ceeb644 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Feb 28 04:35:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 28 04:35:05 np0005634017 elastic_sanderson[96416]: 
Feb 28 04:35:05 np0005634017 elastic_sanderson[96416]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 28 04:35:05 np0005634017 systemd[1]: libpod-e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e.scope: Deactivated successfully.
Feb 28 04:35:05 np0005634017 podman[96358]: 2026-02-28 09:35:05.085523332 +0000 UTC m=+0.754989197 container died e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e (image=quay.io/ceph/ceph:v20, name=elastic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-406b94cf8ddc306e494c36739ce0b4a5e2f0e565799a7c9cf470ecb658c65ff8-merged.mount: Deactivated successfully.
Feb 28 04:35:05 np0005634017 podman[96358]: 2026-02-28 09:35:05.128774692 +0000 UTC m=+0.798240567 container remove e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e (image=quay.io/ceph/ceph:v20, name=elastic_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 04:35:05 np0005634017 systemd[1]: libpod-conmon-e130a49a2df8731efde93b6ea0c101ff2d0a8a09c3e6beb30f6da398bf69115e.scope: Deactivated successfully.
Feb 28 04:35:05 np0005634017 ansible-async_wrapper.py[96350]: Module complete (96350)
Feb 28 04:35:05 np0005634017 ceph-mgr[76610]: [progress INFO root] Writing back 5 completed events
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3229498883' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb 28 04:35:05 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3229498883' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3229498883' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 28 04:35:05 np0005634017 python3[96663]: ansible-ansible.legacy.async_status Invoked with jid=j124735093621.96303 mode=status _async_dir=/root/.ansible_async
Feb 28 04:35:05 np0005634017 podman[97293]: 2026-02-28 09:35:05.577386856 +0000 UTC m=+0.061929598 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 04:35:05 np0005634017 podman[97293]: 2026-02-28 09:35:05.668477495 +0000 UTC m=+0.153020227 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:35:05 np0005634017 python3[97332]: ansible-ansible.legacy.async_status Invoked with jid=j124735093621.96303 mode=cleanup _async_dir=/root/.ansible_async
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/1238471961,v1:192.168.122.100:6815/1238471961] as mds.0
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.eyikfq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e3 new map
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e3 print_map
                                              e3
                                              btime 2026-02-28T09:35:05:890758+0000
                                              enable_multiple, ever_enabled_multiple: 1,1
                                              default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                              legacy client fscid: 1
                                              
                                              Filesystem 'cephfs' (1)
                                              fs_name	cephfs
                                              epoch	3
                                              flags	12 joinable allow_snaps allow_multimds_snaps
                                              created	2026-02-28T09:34:54.753617+0000
                                              modified	2026-02-28T09:35:05.890742+0000
                                              tableserver	0
                                              root	0
                                              session_timeout	60
                                              session_autoclose	300
                                              max_file_size	1099511627776
                                              max_xattr_size	65536
                                              required_client_features	{}
                                              last_failure	0
                                              last_failure_osd_epoch	0
                                              compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                              max_mds	1
                                              in	0
                                              up	{0=14253}
                                              failed	
                                              damaged	
                                              stopped	
                                              data_pools	[7]
                                              metadata_pool	6
                                              inline_data	disabled
                                              balancer	
                                              bal_rank_mask	-1
                                              standby_count_wanted	0
                                              qdb_cluster	leader: 0 members: 
                                              [mds.cephfs.compute-0.eyikfq{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/1238471961,v1:192.168.122.100:6815/1238471961] compat {c=[1],r=[1],i=[1fff]}]
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq Updating MDS map to version 3 from mon.0
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1238471961,v1:192.168.122.100:6815/1238471961] up:boot
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.3 handle_mds_map I am now mds.0.3
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.eyikfq=up:creating}
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x1
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.eyikfq"} v 0)
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x100
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x600
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x601
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x602
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x603
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x604
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.eyikfq"} : dispatch
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e3 all = 0
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x605
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x606
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x607
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x608
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.cache creating system inode with ino:0x609
Feb 28 04:35:05 np0005634017 ceph-mds[96507]: mds.0.3 creating_done
Feb 28 04:35:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.eyikfq is now active in filesystem cephfs as rank 0
Feb 28 04:35:06 np0005634017 python3[97496]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:06 np0005634017 podman[97533]: 2026-02-28 09:35:06.304412373 +0000 UTC m=+0.040296748 container create 76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56 (image=quay.io/ceph/ceph:v20, name=determined_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:06 np0005634017 systemd[1]: Started libpod-conmon-76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56.scope.
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: daemon mds.cephfs.compute-0.eyikfq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: Cluster is now healthy
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: daemon mds.cephfs.compute-0.eyikfq is now active in filesystem cephfs as rank 0
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:35:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d818fdaac2a930eb810d85ec52fb5d32adb85821274528134674dcff7299982f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d818fdaac2a930eb810d85ec52fb5d32adb85821274528134674dcff7299982f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:35:06 np0005634017 podman[97533]: 2026-02-28 09:35:06.37417488 +0000 UTC m=+0.110059295 container init 76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56 (image=quay.io/ceph/ceph:v20, name=determined_bassi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:06 np0005634017 podman[97533]: 2026-02-28 09:35:06.378974866 +0000 UTC m=+0.114859241 container start 76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56 (image=quay.io/ceph/ceph:v20, name=determined_bassi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 04:35:06 np0005634017 podman[97533]: 2026-02-28 09:35:06.382013011 +0000 UTC m=+0.117897436 container attach 76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56 (image=quay.io/ceph/ceph:v20, name=determined_bassi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:06 np0005634017 podman[97533]: 2026-02-28 09:35:06.28871581 +0000 UTC m=+0.024600205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:06 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:06 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 28 04:35:06 np0005634017 determined_bassi[97551]: 
Feb 28 04:35:06 np0005634017 determined_bassi[97551]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 28 04:35:06 np0005634017 systemd[1]: libpod-76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56.scope: Deactivated successfully.
Feb 28 04:35:06 np0005634017 podman[97639]: 2026-02-28 09:35:06.744786294 +0000 UTC m=+0.034734721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:06 np0005634017 podman[97639]: 2026-02-28 09:35:06.89919783 +0000 UTC m=+0.189146227 container create 3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v72: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e4 new map
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-02-28T09:35:06:916511+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-28T09:34:54.753617+0000#012modified#0112026-02-28T09:35:06.916508+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14253 members: 14253#012[mds.cephfs.compute-0.eyikfq{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1238471961,v1:192.168.122.100:6815/1238471961] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb 28 04:35:06 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq Updating MDS map to version 4 from mon.0
Feb 28 04:35:06 np0005634017 ceph-mds[96507]: mds.0.3 handle_mds_map I am now mds.0.3
Feb 28 04:35:06 np0005634017 ceph-mds[96507]: mds.0.3 handle_mds_map state change up:creating --> up:active
Feb 28 04:35:06 np0005634017 ceph-mds[96507]: mds.0.3 recovery_done -- successful recovery!
Feb 28 04:35:06 np0005634017 ceph-mds[96507]: mds.0.3 active_start
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1238471961,v1:192.168.122.100:6815/1238471961] up:active
Feb 28 04:35:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.eyikfq=up:active}
Feb 28 04:35:06 np0005634017 systemd[1]: Started libpod-conmon-3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71.scope.
Feb 28 04:35:06 np0005634017 podman[97533]: 2026-02-28 09:35:06.985897505 +0000 UTC m=+0.721781890 container died 76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56 (image=quay.io/ceph/ceph:v20, name=determined_bassi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:35:07 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d818fdaac2a930eb810d85ec52fb5d32adb85821274528134674dcff7299982f-merged.mount: Deactivated successfully.
Feb 28 04:35:07 np0005634017 podman[97639]: 2026-02-28 09:35:07.052278748 +0000 UTC m=+0.342227195 container init 3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:07 np0005634017 podman[97639]: 2026-02-28 09:35:07.061336133 +0000 UTC m=+0.351284540 container start 3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:35:07 np0005634017 condescending_hertz[97674]: 167 167
Feb 28 04:35:07 np0005634017 podman[97639]: 2026-02-28 09:35:07.064998996 +0000 UTC m=+0.354947413 container attach 3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:07 np0005634017 systemd[1]: libpod-3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71.scope: Deactivated successfully.
Feb 28 04:35:07 np0005634017 podman[97655]: 2026-02-28 09:35:07.072700144 +0000 UTC m=+0.254032647 container remove 76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56 (image=quay.io/ceph/ceph:v20, name=determined_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:35:07 np0005634017 systemd[1]: libpod-conmon-76544b5c08445b4a17b696cf857bfd26b4aeec27def86032080e4db11b4b4b56.scope: Deactivated successfully.
Feb 28 04:35:07 np0005634017 podman[97681]: 2026-02-28 09:35:07.119786692 +0000 UTC m=+0.036041088 container died 3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:35:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d708d3e054ff650f009716a921f3603c618694f86d258f25ca72083524ded54e-merged.mount: Deactivated successfully.
Feb 28 04:35:07 np0005634017 podman[97681]: 2026-02-28 09:35:07.171773038 +0000 UTC m=+0.088027454 container remove 3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:35:07 np0005634017 systemd[1]: libpod-conmon-3ef7d37da8b006bfddbab189f45c6f843c86a34c833c5bd41a7c25db11920a71.scope: Deactivated successfully.
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb 28 04:35:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:35:07 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 28 04:35:07 np0005634017 podman[97703]: 2026-02-28 09:35:07.364125683 +0000 UTC m=+0.061730652 container create 252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:35:07 np0005634017 systemd[1]: Started libpod-conmon-252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f.scope.
Feb 28 04:35:07 np0005634017 podman[97703]: 2026-02-28 09:35:07.339306543 +0000 UTC m=+0.036911572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:07 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f01ee6d54147674fdbf5eeffc08a90543408a8552b9bd620c054ab15ce16c18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f01ee6d54147674fdbf5eeffc08a90543408a8552b9bd620c054ab15ce16c18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f01ee6d54147674fdbf5eeffc08a90543408a8552b9bd620c054ab15ce16c18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f01ee6d54147674fdbf5eeffc08a90543408a8552b9bd620c054ab15ce16c18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f01ee6d54147674fdbf5eeffc08a90543408a8552b9bd620c054ab15ce16c18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:07 np0005634017 podman[97703]: 2026-02-28 09:35:07.471405119 +0000 UTC m=+0.169010118 container init 252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:07 np0005634017 podman[97703]: 2026-02-28 09:35:07.478385726 +0000 UTC m=+0.175990665 container start 252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:07 np0005634017 podman[97703]: 2026-02-28 09:35:07.481685749 +0000 UTC m=+0.179290748 container attach 252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 04:35:07 np0005634017 happy_faraday[97721]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:35:07 np0005634017 happy_faraday[97721]: --> All data devices are unavailable
Feb 28 04:35:07 np0005634017 python3[97758]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:07 np0005634017 systemd[1]: libpod-252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f.scope: Deactivated successfully.
Feb 28 04:35:07 np0005634017 podman[97703]: 2026-02-28 09:35:07.980790727 +0000 UTC m=+0.678395656 container died 252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:35:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7f01ee6d54147674fdbf5eeffc08a90543408a8552b9bd620c054ab15ce16c18-merged.mount: Deactivated successfully.
Feb 28 04:35:08 np0005634017 podman[97703]: 2026-02-28 09:35:08.065941839 +0000 UTC m=+0.763546768 container remove 252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.073208614 +0000 UTC m=+0.094560598 container create 3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475 (image=quay.io/ceph/ceph:v20, name=intelligent_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:08 np0005634017 systemd[1]: libpod-conmon-252532a33e65062e28da14aba27bd21a5e43d06900376610e34549fd0f2edb1f.scope: Deactivated successfully.
Feb 28 04:35:08 np0005634017 systemd[1]: Started libpod-conmon-3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475.scope.
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.035954073 +0000 UTC m=+0.057305997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581fd9688aed659900ed0682767588ad76ccf6febf91e2ef304518da5b436f65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581fd9688aed659900ed0682767588ad76ccf6febf91e2ef304518da5b436f65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.155728872 +0000 UTC m=+0.177080806 container init 3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475 (image=quay.io/ceph/ceph:v20, name=intelligent_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.165381634 +0000 UTC m=+0.186733498 container start 3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475 (image=quay.io/ceph/ceph:v20, name=intelligent_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.16984065 +0000 UTC m=+0.191192574 container attach 3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475 (image=quay.io/ceph/ceph:v20, name=intelligent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.594200369 +0000 UTC m=+0.046705338 container create c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:08 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 04:35:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 04:35:08 np0005634017 intelligent_swirles[97795]: 
Feb 28 04:35:08 np0005634017 intelligent_swirles[97795]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb 28 04:35:08 np0005634017 systemd[1]: Started libpod-conmon-c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea.scope.
Feb 28 04:35:08 np0005634017 systemd[1]: libpod-3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475.scope: Deactivated successfully.
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.627965272 +0000 UTC m=+0.649317136 container died 3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475 (image=quay.io/ceph/ceph:v20, name=intelligent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 04:35:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-581fd9688aed659900ed0682767588ad76ccf6febf91e2ef304518da5b436f65-merged.mount: Deactivated successfully.
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.574438572 +0000 UTC m=+0.026943561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.687793339 +0000 UTC m=+0.140298318 container init c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 04:35:08 np0005634017 podman[97767]: 2026-02-28 09:35:08.693723187 +0000 UTC m=+0.715075061 container remove 3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475 (image=quay.io/ceph/ceph:v20, name=intelligent_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.697950176 +0000 UTC m=+0.150455155 container start c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:08 np0005634017 boring_bhaskara[97899]: 167 167
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.702209996 +0000 UTC m=+0.154715025 container attach c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:08 np0005634017 systemd[1]: libpod-c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea.scope: Deactivated successfully.
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.702904456 +0000 UTC m=+0.155409395 container died c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:35:08 np0005634017 systemd[1]: libpod-conmon-3ca2aa6b3b1b771ba521c68f9e7b0a3106c39c9e273864fb643eab5bb9405475.scope: Deactivated successfully.
Feb 28 04:35:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ab8ff24a0e4caf5328a587df1b89ac9da2fc31d1f9bb196a41e9d937e4d769bd-merged.mount: Deactivated successfully.
Feb 28 04:35:08 np0005634017 podman[97881]: 2026-02-28 09:35:08.740306311 +0000 UTC m=+0.192811250 container remove c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:35:08 np0005634017 systemd[1]: libpod-conmon-c018c49018b5b4a73bf71560aea8132807e565dcd993b616d0e550713de921ea.scope: Deactivated successfully.
Feb 28 04:35:08 np0005634017 podman[97935]: 2026-02-28 09:35:08.863480115 +0000 UTC m=+0.040026560 container create 7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:35:08 np0005634017 systemd[1]: Started libpod-conmon-7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a.scope.
Feb 28 04:35:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v75: 10 pgs: 2 unknown, 1 creating+peering, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Feb 28 04:35:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:08 np0005634017 podman[97935]: 2026-02-28 09:35:08.844013006 +0000 UTC m=+0.020559451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb09e6d5b94e31b7e34a96295374fd3926375319d03f735950dbf5e3ed0044b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb09e6d5b94e31b7e34a96295374fd3926375319d03f735950dbf5e3ed0044b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb09e6d5b94e31b7e34a96295374fd3926375319d03f735950dbf5e3ed0044b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb09e6d5b94e31b7e34a96295374fd3926375319d03f735950dbf5e3ed0044b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:08 np0005634017 podman[97935]: 2026-02-28 09:35:08.956777277 +0000 UTC m=+0.133323732 container init 7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:35:08 np0005634017 podman[97935]: 2026-02-28 09:35:08.964744311 +0000 UTC m=+0.141290716 container start 7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:08 np0005634017 podman[97935]: 2026-02-28 09:35:08.968221689 +0000 UTC m=+0.144768174 container attach 7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:09 np0005634017 ansible-async_wrapper.py[96349]: Done in kid B.
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]: {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:    "0": [
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:        {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "devices": [
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "/dev/loop3"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            ],
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_name": "ceph_lv0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_size": "21470642176",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "name": "ceph_lv0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "tags": {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.crush_device_class": "",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.encrypted": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osd_id": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.type": "block",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.vdo": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.with_tpm": "0"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            },
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "type": "block",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "vg_name": "ceph_vg0"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:        }
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:    ],
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:    "1": [
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:        {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "devices": [
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "/dev/loop4"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            ],
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_name": "ceph_lv1",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_size": "21470642176",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "name": "ceph_lv1",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "tags": {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.crush_device_class": "",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.encrypted": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osd_id": "1",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.type": "block",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.vdo": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.with_tpm": "0"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            },
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "type": "block",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "vg_name": "ceph_vg1"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:        }
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:    ],
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:    "2": [
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:        {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "devices": [
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "/dev/loop5"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            ],
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_name": "ceph_lv2",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_size": "21470642176",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "name": "ceph_lv2",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "tags": {
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.crush_device_class": "",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.encrypted": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osd_id": "2",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.type": "block",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.vdo": "0",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:                "ceph.with_tpm": "0"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            },
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "type": "block",
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:            "vg_name": "ceph_vg2"
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:        }
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]:    ]
Feb 28 04:35:09 np0005634017 practical_heyrovsky[97952]: }
Feb 28 04:35:09 np0005634017 systemd[1]: libpod-7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a.scope: Deactivated successfully.
Feb 28 04:35:09 np0005634017 podman[97935]: 2026-02-28 09:35:09.281052803 +0000 UTC m=+0.457599238 container died 7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bb09e6d5b94e31b7e34a96295374fd3926375319d03f735950dbf5e3ed0044b6-merged.mount: Deactivated successfully.
Feb 28 04:35:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb 28 04:35:09 np0005634017 podman[97935]: 2026-02-28 09:35:09.331796215 +0000 UTC m=+0.508342640 container remove 7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heyrovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:35:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 28 04:35:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb 28 04:35:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb 28 04:35:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:09 np0005634017 systemd[1]: libpod-conmon-7af634ed3dfa47e008e2bd9850f529c17a5148e6b4768a6ce267eb13039ed87a.scope: Deactivated successfully.
Feb 28 04:35:09 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 28 04:35:09 np0005634017 python3[98034]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:09 np0005634017 podman[98053]: 2026-02-28 09:35:09.667458253 +0000 UTC m=+0.042713126 container create c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090 (image=quay.io/ceph/ceph:v20, name=affectionate_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:09 np0005634017 systemd[1]: Started libpod-conmon-c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090.scope.
Feb 28 04:35:09 np0005634017 podman[98053]: 2026-02-28 09:35:09.647696825 +0000 UTC m=+0.022951758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44dc3449d60e3e9d3e8ba822a04e384f7567c237bdd1c9b1794257987e744a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44dc3449d60e3e9d3e8ba822a04e384f7567c237bdd1c9b1794257987e744a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:09 np0005634017 podman[98053]: 2026-02-28 09:35:09.780480181 +0000 UTC m=+0.155735044 container init c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090 (image=quay.io/ceph/ceph:v20, name=affectionate_elion, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:09 np0005634017 podman[98053]: 2026-02-28 09:35:09.78895682 +0000 UTC m=+0.164211673 container start c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090 (image=quay.io/ceph/ceph:v20, name=affectionate_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.791963275 +0000 UTC m=+0.049513408 container create 9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_yalow, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:09 np0005634017 podman[98053]: 2026-02-28 09:35:09.795299959 +0000 UTC m=+0.170554812 container attach c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090 (image=quay.io/ceph/ceph:v20, name=affectionate_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:09 np0005634017 systemd[1]: Started libpod-conmon-9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d.scope.
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.768610556 +0000 UTC m=+0.026160729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.8736971 +0000 UTC m=+0.131247243 container init 9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.879937446 +0000 UTC m=+0.137487579 container start 9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:09 np0005634017 pedantic_yalow[98099]: 167 167
Feb 28 04:35:09 np0005634017 systemd[1]: libpod-9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d.scope: Deactivated successfully.
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.884505135 +0000 UTC m=+0.142055268 container attach 9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_yalow, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.884906126 +0000 UTC m=+0.142456279 container died 9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8c2b34576dc7e20ffd7a171394a76446122c9256f58753537b5d1c379111630b-merged.mount: Deactivated successfully.
Feb 28 04:35:09 np0005634017 podman[98083]: 2026-02-28 09:35:09.919677627 +0000 UTC m=+0.177227780 container remove 9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_yalow, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 04:35:09 np0005634017 systemd[1]: libpod-conmon-9ea6f0846a26f4ef00b28c2027299aa5adbbcb8c319097455abce4b3c005b94d.scope: Deactivated successfully.
Feb 28 04:35:10 np0005634017 podman[98142]: 2026-02-28 09:35:10.090601328 +0000 UTC m=+0.061617059 container create 3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:35:10 np0005634017 systemd[1]: Started libpod-conmon-3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7.scope.
Feb 28 04:35:10 np0005634017 podman[98142]: 2026-02-28 09:35:10.060722485 +0000 UTC m=+0.031738136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9410ecb2f23c68848e7d71931d8ddb9638966bc3cb7cc15ec5e40d3e5118bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9410ecb2f23c68848e7d71931d8ddb9638966bc3cb7cc15ec5e40d3e5118bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9410ecb2f23c68848e7d71931d8ddb9638966bc3cb7cc15ec5e40d3e5118bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9410ecb2f23c68848e7d71931d8ddb9638966bc3cb7cc15ec5e40d3e5118bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:10 np0005634017 podman[98142]: 2026-02-28 09:35:10.205401836 +0000 UTC m=+0.176417467 container init 3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mendeleev, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:10 np0005634017 podman[98142]: 2026-02-28 09:35:10.21404691 +0000 UTC m=+0.185062561 container start 3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 04:35:10 np0005634017 podman[98142]: 2026-02-28 09:35:10.219644608 +0000 UTC m=+0.190660319 container attach 3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mendeleev, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:10 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 28 04:35:10 np0005634017 affectionate_elion[98080]: 
Feb 28 04:35:10 np0005634017 affectionate_elion[98080]: [{"container_id": "7f2ed486c4ed", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.22%", "created": "2026-02-28T09:33:52.275545Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-28T09:33:52.349017Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-28T09:35:06.322645Z", "memory_usage": 7795113, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-28T09:33:52.180950Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@crash.compute-0", "version": "20.2.0"}, {"container_id": "9a26bc9997b0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "8.97%", "created": "2026-02-28T09:35:04.917599Z", "daemon_id": "cephfs.compute-0.eyikfq", "daemon_name": "mds.cephfs.compute-0.eyikfq", "daemon_type": "mds", "events": ["2026-02-28T09:35:04.981836Z daemon:mds.cephfs.compute-0.eyikfq [INFO] \"Deployed mds.cephfs.compute-0.eyikfq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-02-28T09:35:06.323452Z", "memory_usage": 15823011, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-28T09:35:04.763562Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mds.cephfs.compute-0.eyikfq", "version": "20.2.0"}, {"container_id": "95ded722f5d0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.03%", "created": "2026-02-28T09:33:12.511420Z", "daemon_id": "compute-0.izimmo", "daemon_name": "mgr.compute-0.izimmo", "daemon_type": "mgr", "events": ["2026-02-28T09:33:56.279451Z daemon:mgr.compute-0.izimmo [INFO] \"Reconfigured mgr.compute-0.izimmo on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-28T09:35:06.322508Z", "memory_usage": 547566387, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-28T09:33:12.374311Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mgr.compute-0.izimmo", "version": "20.2.0"}, {"container_id": "02b99353aded", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.97%", "created": "2026-02-28T09:33:08.605309Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-28T09:33:55.686933Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-28T09:35:06.322327Z", "memory_request": 2147483648, "memory_usage": 42100326, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-28T09:33:10.584649Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@mon.compute-0", "version": "20.2.0"}, {"container_id": "fc71d051903a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.91%", "created": "2026-02-28T09:34:16.066219Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-28T09:34:16.143289Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-28T09:35:06.322784Z", "memory_request": 4294967296, "memory_usage": 56203673, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-28T09:34:15.989963Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@osd.0", "version": "20.2.0"}, {"container_id": "28b061fbeda1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.96%", "created": "2026-02-28T09:34:20.870355Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-28T09:34:22.586722Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-28T09:35:06.322927Z", "memory_request": 4294967296, "memory_usage": 61540925, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-28T09:34:20.273875Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@osd.1", "version": "20.2.0"}, {"container_id": "09d5547f1af4", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.10%", "created": "2026-02-28T09:34:28.346566Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-28T09:34:28.482447Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-28T09:35:06.323105Z", "memory_request": 4294967296, "memory_usage": 55480156, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-28T09:34:28.145872Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8f528268-ea2d-5d7b-af45-49b405fed6de@osd.2", "version": "20.2.0"}, {"container_id": "f42b0e914391", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Feb 28 04:35:10 np0005634017 systemd[1]: libpod-c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090.scope: Deactivated successfully.
Feb 28 04:35:10 np0005634017 podman[98053]: 2026-02-28 09:35:10.242850503 +0000 UTC m=+0.618105376 container died c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090 (image=quay.io/ceph/ceph:v20, name=affectionate_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb 28 04:35:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c44dc3449d60e3e9d3e8ba822a04e384f7567c237bdd1c9b1794257987e744a4-merged.mount: Deactivated successfully.
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 28 04:35:10 np0005634017 rsyslogd[1017]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "7f2ed486c4ed", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 28 04:35:10 np0005634017 podman[98053]: 2026-02-28 09:35:10.482549184 +0000 UTC m=+0.857804057 container remove c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090 (image=quay.io/ceph/ceph:v20, name=affectionate_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:10 np0005634017 systemd[1]: libpod-conmon-c50e28f1314fcb9e96f9b8adfe11062e7d495b63c8171d42dac077be7d13f090.scope: Deactivated successfully.
Feb 28 04:35:10 np0005634017 lvm[98250]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:35:10 np0005634017 lvm[98250]: VG ceph_vg0 finished
Feb 28 04:35:10 np0005634017 lvm[98252]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:35:10 np0005634017 lvm[98252]: VG ceph_vg1 finished
Feb 28 04:35:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v78: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Feb 28 04:35:10 np0005634017 ceph-mds[96507]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 28 04:35:10 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mds-cephfs-compute-0-eyikfq[96503]: 2026-02-28T09:35:10.926+0000 7f49dcf79640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 28 04:35:10 np0005634017 lvm[98254]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:35:10 np0005634017 lvm[98254]: VG ceph_vg2 finished
Feb 28 04:35:10 np0005634017 lvm[98255]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:35:10 np0005634017 lvm[98255]: VG ceph_vg0 finished
Feb 28 04:35:11 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:11 np0005634017 youthful_mendeleev[98159]: {}
Feb 28 04:35:11 np0005634017 systemd[1]: libpod-3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7.scope: Deactivated successfully.
Feb 28 04:35:11 np0005634017 systemd[1]: libpod-3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7.scope: Consumed 1.190s CPU time.
Feb 28 04:35:11 np0005634017 podman[98142]: 2026-02-28 09:35:11.069147389 +0000 UTC m=+1.040163000 container died 3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mendeleev, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 04:35:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2d9410ecb2f23c68848e7d71931d8ddb9638966bc3cb7cc15ec5e40d3e5118bf-merged.mount: Deactivated successfully.
Feb 28 04:35:11 np0005634017 podman[98142]: 2026-02-28 09:35:11.11103261 +0000 UTC m=+1.082048251 container remove 3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mendeleev, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:11 np0005634017 systemd[1]: libpod-conmon-3e95b0dfc5c96561f8c5658cd0cccac21d87ba0a6b5a66c1c2850911d3e55fe7.scope: Deactivated successfully.
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 python3[98315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb 28 04:35:11 np0005634017 podman[98345]: 2026-02-28 09:35:11.419602474 +0000 UTC m=+0.044799134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb 28 04:35:11 np0005634017 podman[98345]: 2026-02-28 09:35:11.540388131 +0000 UTC m=+0.165584751 container create c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a (image=quay.io/ceph/ceph:v20, name=cranky_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:11 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:11 np0005634017 systemd[1]: Started libpod-conmon-c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a.scope.
Feb 28 04:35:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2ffd34b2503daab4b74d68ba5c8d236289bc677166f954fee3da0e09a62423/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2ffd34b2503daab4b74d68ba5c8d236289bc677166f954fee3da0e09a62423/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:11 np0005634017 podman[98345]: 2026-02-28 09:35:11.712905657 +0000 UTC m=+0.338102387 container init c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a (image=quay.io/ceph/ceph:v20, name=cranky_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:35:11 np0005634017 podman[98345]: 2026-02-28 09:35:11.721325065 +0000 UTC m=+0.346521685 container start c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a (image=quay.io/ceph/ceph:v20, name=cranky_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:11 np0005634017 podman[98345]: 2026-02-28 09:35:11.725268416 +0000 UTC m=+0.350465136 container attach c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a (image=quay.io/ceph/ceph:v20, name=cranky_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:35:11 np0005634017 podman[98431]: 2026-02-28 09:35:11.914224586 +0000 UTC m=+0.078840765 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:35:12 np0005634017 podman[98431]: 2026-02-28 09:35:12.012712014 +0000 UTC m=+0.177328153 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/274712442' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 28 04:35:12 np0005634017 cranky_jemison[98397]: 
Feb 28 04:35:12 np0005634017 cranky_jemison[98397]: {"fsid":"8f528268-ea2d-5d7b-af45-49b405fed6de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":121,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":3,"osd_up_since":1772271274,"num_in_osds":3,"osd_in_since":1772271249,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":10},{"state_name":"unknown","count":1}],"num_pgs":11,"num_pools":11,"num_objects":39,"data_bytes":463572,"bytes_used":84054016,"bytes_avail":64327872512,"bytes_total":64411926528,"unknown_pgs_ratio":0.090909093618392944,"read_bytes_sec":1535,"write_bytes_sec":5886,"read_op_per_sec":1,"write_op_per_sec":14},"fsmap":{"epoch":4,"btime":"2026-02-28T09:35:06:916511+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.eyikfq","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-28T09:34:30.913210+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 28 04:35:12 np0005634017 systemd[1]: libpod-c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a.scope: Deactivated successfully.
Feb 28 04:35:12 np0005634017 podman[98345]: 2026-02-28 09:35:12.271346159 +0000 UTC m=+0.896542779 container died c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a (image=quay.io/ceph/ceph:v20, name=cranky_jemison, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:35:12 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dc2ffd34b2503daab4b74d68ba5c8d236289bc677166f954fee3da0e09a62423-merged.mount: Deactivated successfully.
Feb 28 04:35:12 np0005634017 podman[98345]: 2026-02-28 09:35:12.30895757 +0000 UTC m=+0.934154200 container remove c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a (image=quay.io/ceph/ceph:v20, name=cranky_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:35:12 np0005634017 systemd[1]: libpod-conmon-c5ad46aeab2c11857f36d82ddc7967cb2baf6f516c34257c44c65030867e7b1a.scope: Deactivated successfully.
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:35:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:35:12 np0005634017 radosgw[95998]: v1 topic migration: starting v1 topic migration..
Feb 28 04:35:12 np0005634017 radosgw[95998]: v1 topic migration: finished v1 topic migration
Feb 28 04:35:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v81: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 5 op/s
Feb 28 04:35:12 np0005634017 radosgw[95998]: framework: beast
Feb 28 04:35:12 np0005634017 radosgw[95998]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb 28 04:35:12 np0005634017 radosgw[95998]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb 28 04:35:12 np0005634017 radosgw[95998]: starting handler: beast
Feb 28 04:35:12 np0005634017 radosgw[95998]: set uid:gid to 167:167 (ceph:ceph)
Feb 28 04:35:12 np0005634017 radosgw[95998]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.gzepcj,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026,kernel_version=5.14.0-686.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864280,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=22ffc2cd-e6b5-4018-b2c7-f190c9a80ded,zone_name=default,zonegroup_id=59c3541f-5d48-409d-85b2-69c66ce61408,zonegroup_name=default}
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.229717782 +0000 UTC m=+0.037376265 container create 3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:13 np0005634017 systemd[1]: Started libpod-conmon-3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877.scope.
Feb 28 04:35:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.302899276 +0000 UTC m=+0.110557779 container init 3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_spence, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.308498234 +0000 UTC m=+0.116156727 container start 3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_spence, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.213999989 +0000 UTC m=+0.021658502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.311918491 +0000 UTC m=+0.119577004 container attach 3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_spence, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:35:13 np0005634017 intelligent_spence[98794]: 167 167
Feb 28 04:35:13 np0005634017 systemd[1]: libpod-3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877.scope: Deactivated successfully.
Feb 28 04:35:13 np0005634017 conmon[98794]: conmon 3c98801e9393964e4270 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877.scope/container/memory.events
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.324410683 +0000 UTC m=+0.132069186 container died 3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_spence, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:13 np0005634017 python3[98786]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7e6db755eb93ead1be0fcb57da6b336416d8037dc43782d9a3d068e8aa4f701f-merged.mount: Deactivated successfully.
Feb 28 04:35:13 np0005634017 podman[98772]: 2026-02-28 09:35:13.367311323 +0000 UTC m=+0.174969806 container remove 3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:35:13 np0005634017 systemd[1]: libpod-conmon-3c98801e9393964e4270afd6180229c48f22f09def20e07cbc76e89d0f987877.scope: Deactivated successfully.
Feb 28 04:35:13 np0005634017 podman[98806]: 2026-02-28 09:35:13.392485643 +0000 UTC m=+0.043642292 container create e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e (image=quay.io/ceph/ceph:v20, name=hardcore_elgamal, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:35:13 np0005634017 systemd[1]: Started libpod-conmon-e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e.scope.
Feb 28 04:35:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed33ff1d7482aebf5035ff6c345009ad4a5d795f1428c72645deac70ddcbf8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed33ff1d7482aebf5035ff6c345009ad4a5d795f1428c72645deac70ddcbf8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 podman[98806]: 2026-02-28 09:35:13.373432516 +0000 UTC m=+0.024589185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:13 np0005634017 podman[98806]: 2026-02-28 09:35:13.480804224 +0000 UTC m=+0.131960893 container init e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e (image=quay.io/ceph/ceph:v20, name=hardcore_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:13 np0005634017 podman[98806]: 2026-02-28 09:35:13.485104736 +0000 UTC m=+0.136261395 container start e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e (image=quay.io/ceph/ceph:v20, name=hardcore_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:35:13 np0005634017 podman[98806]: 2026-02-28 09:35:13.511659485 +0000 UTC m=+0.162816224 container attach e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e (image=quay.io/ceph/ceph:v20, name=hardcore_elgamal, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:35:13 np0005634017 podman[98839]: 2026-02-28 09:35:13.552774224 +0000 UTC m=+0.104457937 container create cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:13 np0005634017 podman[98839]: 2026-02-28 09:35:13.490460437 +0000 UTC m=+0.042144230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:13 np0005634017 systemd[1]: Started libpod-conmon-cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461.scope.
Feb 28 04:35:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2053c5645903d16ea1076b353a5cbee6b7a9441b8bc145ad8f0bcab59aee2d75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2053c5645903d16ea1076b353a5cbee6b7a9441b8bc145ad8f0bcab59aee2d75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2053c5645903d16ea1076b353a5cbee6b7a9441b8bc145ad8f0bcab59aee2d75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2053c5645903d16ea1076b353a5cbee6b7a9441b8bc145ad8f0bcab59aee2d75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2053c5645903d16ea1076b353a5cbee6b7a9441b8bc145ad8f0bcab59aee2d75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:13 np0005634017 podman[98839]: 2026-02-28 09:35:13.659703201 +0000 UTC m=+0.211386974 container init cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bell, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:13 np0005634017 podman[98839]: 2026-02-28 09:35:13.671053431 +0000 UTC m=+0.222737174 container start cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: from='client.? 192.168.122.100:0/3864343712' entity='client.rgw.rgw.compute-0.gzepcj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:35:13 np0005634017 podman[98839]: 2026-02-28 09:35:13.678572103 +0000 UTC m=+0.230255906 container attach cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 28 04:35:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3400119697' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 28 04:35:13 np0005634017 hardcore_elgamal[98833]: 
Feb 28 04:35:13 np0005634017 systemd[1]: libpod-e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e.scope: Deactivated successfully.
Feb 28 04:35:13 np0005634017 hardcore_elgamal[98833]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.gzepcj","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb 28 04:35:13 np0005634017 podman[98806]: 2026-02-28 09:35:13.914880288 +0000 UTC m=+0.566036937 container died e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e (image=quay.io/ceph/ceph:v20, name=hardcore_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-36ed33ff1d7482aebf5035ff6c345009ad4a5d795f1428c72645deac70ddcbf8-merged.mount: Deactivated successfully.
Feb 28 04:35:14 np0005634017 nifty_bell[98865]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:35:14 np0005634017 nifty_bell[98865]: --> All data devices are unavailable
Feb 28 04:35:14 np0005634017 systemd[1]: libpod-cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461.scope: Deactivated successfully.
Feb 28 04:35:14 np0005634017 podman[98806]: 2026-02-28 09:35:14.273943126 +0000 UTC m=+0.925099775 container remove e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e (image=quay.io/ceph/ceph:v20, name=hardcore_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:14 np0005634017 systemd[1]: libpod-conmon-e153f78ed338518e9a20d759361804bce17a3cb350b840723700e2af6706228e.scope: Deactivated successfully.
Feb 28 04:35:14 np0005634017 podman[98839]: 2026-02-28 09:35:14.316281091 +0000 UTC m=+0.867964804 container died cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bell, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:35:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v82: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 13 KiB/s wr, 257 op/s
Feb 28 04:35:15 np0005634017 python3[98952]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2053c5645903d16ea1076b353a5cbee6b7a9441b8bc145ad8f0bcab59aee2d75-merged.mount: Deactivated successfully.
Feb 28 04:35:15 np0005634017 podman[98953]: 2026-02-28 09:35:15.555037691 +0000 UTC m=+0.169222814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:15 np0005634017 podman[98839]: 2026-02-28 09:35:15.968653917 +0000 UTC m=+2.520337640 container remove cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bell, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:15 np0005634017 systemd[1]: libpod-conmon-cf59a0e1cc5d07b485f8ca3c2dd07341345ef234526d90b3b427185a01b2a461.scope: Deactivated successfully.
Feb 28 04:35:16 np0005634017 podman[98953]: 2026-02-28 09:35:16.00563962 +0000 UTC m=+0.619824733 container create bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5 (image=quay.io/ceph/ceph:v20, name=friendly_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 04:35:16 np0005634017 systemd[1]: Started libpod-conmon-bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5.scope.
Feb 28 04:35:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e107e6645108d97b8459007023211c9ddc6d9b3b72c18976094a3256b0938398/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e107e6645108d97b8459007023211c9ddc6d9b3b72c18976094a3256b0938398/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:16 np0005634017 podman[98953]: 2026-02-28 09:35:16.089527566 +0000 UTC m=+0.703712699 container init bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5 (image=quay.io/ceph/ceph:v20, name=friendly_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 04:35:16 np0005634017 podman[98953]: 2026-02-28 09:35:16.096487533 +0000 UTC m=+0.710672646 container start bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5 (image=quay.io/ceph/ceph:v20, name=friendly_fermat, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 04:35:16 np0005634017 podman[98953]: 2026-02-28 09:35:16.09956427 +0000 UTC m=+0.713749413 container attach bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5 (image=quay.io/ceph/ceph:v20, name=friendly_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.436848703 +0000 UTC m=+0.037279752 container create 7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hertz, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:16 np0005634017 systemd[1]: Started libpod-conmon-7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a.scope.
Feb 28 04:35:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.496156756 +0000 UTC m=+0.096587815 container init 7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.500957812 +0000 UTC m=+0.101388871 container start 7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hertz, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:16 np0005634017 hardcore_hertz[99069]: 167 167
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.503457752 +0000 UTC m=+0.103888811 container attach 7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hertz, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:35:16 np0005634017 systemd[1]: libpod-7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a.scope: Deactivated successfully.
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.504261015 +0000 UTC m=+0.104692074 container died 7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb 28 04:35:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974731187' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb 28 04:35:16 np0005634017 friendly_fermat[98969]: mimic
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.420031719 +0000 UTC m=+0.020462818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ba7ae3dfffa1d97a0a2d4ae4e454656dddb8d707b9c8da88d7260d1f7887dfc1-merged.mount: Deactivated successfully.
Feb 28 04:35:16 np0005634017 systemd[1]: libpod-bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5.scope: Deactivated successfully.
Feb 28 04:35:16 np0005634017 podman[98953]: 2026-02-28 09:35:16.528646013 +0000 UTC m=+1.142831116 container died bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5 (image=quay.io/ceph/ceph:v20, name=friendly_fermat, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:16 np0005634017 podman[99053]: 2026-02-28 09:35:16.544691495 +0000 UTC m=+0.145122584 container remove 7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:16 np0005634017 systemd[1]: libpod-conmon-7a4b0e1067408d486d92d373fe3d5385e5fa8bb31e0ff088b928c60cd1f7289a.scope: Deactivated successfully.
Feb 28 04:35:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e107e6645108d97b8459007023211c9ddc6d9b3b72c18976094a3256b0938398-merged.mount: Deactivated successfully.
Feb 28 04:35:16 np0005634017 podman[98953]: 2026-02-28 09:35:16.582963985 +0000 UTC m=+1.197149088 container remove bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5 (image=quay.io/ceph/ceph:v20, name=friendly_fermat, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 04:35:16 np0005634017 systemd[1]: libpod-conmon-bb3dd03443adca6a2600d854508f3db61023c36618817fae3b16be2e881ceba5.scope: Deactivated successfully.
Feb 28 04:35:16 np0005634017 podman[99106]: 2026-02-28 09:35:16.697985879 +0000 UTC m=+0.051973277 container create faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 04:35:16 np0005634017 systemd[1]: Started libpod-conmon-faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be.scope.
Feb 28 04:35:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146172b6e7d9605e0f91cfffa350878d584b772d2d967f91a8206c84e03ba07e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146172b6e7d9605e0f91cfffa350878d584b772d2d967f91a8206c84e03ba07e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146172b6e7d9605e0f91cfffa350878d584b772d2d967f91a8206c84e03ba07e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:16 np0005634017 podman[99106]: 2026-02-28 09:35:16.68100276 +0000 UTC m=+0.034990178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146172b6e7d9605e0f91cfffa350878d584b772d2d967f91a8206c84e03ba07e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:16 np0005634017 podman[99106]: 2026-02-28 09:35:16.792489225 +0000 UTC m=+0.146476643 container init faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:16 np0005634017 podman[99106]: 2026-02-28 09:35:16.80012756 +0000 UTC m=+0.154114978 container start faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 04:35:16 np0005634017 podman[99106]: 2026-02-28 09:35:16.803886966 +0000 UTC m=+0.157874454 container attach faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 04:35:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v83: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 9.8 KiB/s wr, 217 op/s
Feb 28 04:35:17 np0005634017 silly_mendel[99123]: {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:    "0": [
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:        {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "devices": [
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "/dev/loop3"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            ],
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_name": "ceph_lv0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_size": "21470642176",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "name": "ceph_lv0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "tags": {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.crush_device_class": "",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.encrypted": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osd_id": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.type": "block",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.vdo": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.with_tpm": "0"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            },
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "type": "block",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "vg_name": "ceph_vg0"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:        }
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:    ],
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:    "1": [
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:        {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "devices": [
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "/dev/loop4"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            ],
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_name": "ceph_lv1",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_size": "21470642176",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "name": "ceph_lv1",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "tags": {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.crush_device_class": "",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.encrypted": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osd_id": "1",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.type": "block",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.vdo": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.with_tpm": "0"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            },
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "type": "block",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "vg_name": "ceph_vg1"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:        }
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:    ],
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:    "2": [
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:        {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "devices": [
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "/dev/loop5"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            ],
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_name": "ceph_lv2",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_size": "21470642176",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "name": "ceph_lv2",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "tags": {
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.cluster_name": "ceph",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.crush_device_class": "",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.encrypted": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.objectstore": "bluestore",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osd_id": "2",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.type": "block",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.vdo": "0",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:                "ceph.with_tpm": "0"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            },
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "type": "block",
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:            "vg_name": "ceph_vg2"
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:        }
Feb 28 04:35:17 np0005634017 silly_mendel[99123]:    ]
Feb 28 04:35:17 np0005634017 silly_mendel[99123]: }
Feb 28 04:35:17 np0005634017 systemd[1]: libpod-faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be.scope: Deactivated successfully.
Feb 28 04:35:17 np0005634017 podman[99106]: 2026-02-28 09:35:17.117256686 +0000 UTC m=+0.471244114 container died faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-146172b6e7d9605e0f91cfffa350878d584b772d2d967f91a8206c84e03ba07e-merged.mount: Deactivated successfully.
Feb 28 04:35:17 np0005634017 podman[99106]: 2026-02-28 09:35:17.160625069 +0000 UTC m=+0.514612487 container remove faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:17 np0005634017 systemd[1]: libpod-conmon-faec583def3d7dd1e366cbd49fdaa5e0f51a2bcef44b91674751db293501c2be.scope: Deactivated successfully.
Feb 28 04:35:17 np0005634017 python3[99219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:35:17 np0005634017 podman[99220]: 2026-02-28 09:35:17.598922612 +0000 UTC m=+0.062557866 container create e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6 (image=quay.io/ceph/ceph:v20, name=great_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:17 np0005634017 systemd[1]: Started libpod-conmon-e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6.scope.
Feb 28 04:35:17 np0005634017 podman[99220]: 2026-02-28 09:35:17.572180988 +0000 UTC m=+0.035816292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.67898629 +0000 UTC m=+0.069566523 container create 1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yalow, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 04:35:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55eaf494a51f2de379b8b1df16fc6c1f728db06f6bd482a9dc7f5e532133ca92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55eaf494a51f2de379b8b1df16fc6c1f728db06f6bd482a9dc7f5e532133ca92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:17 np0005634017 podman[99220]: 2026-02-28 09:35:17.703322237 +0000 UTC m=+0.166957521 container init e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6 (image=quay.io/ceph/ceph:v20, name=great_taussig, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:17 np0005634017 systemd[1]: Started libpod-conmon-1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388.scope.
Feb 28 04:35:17 np0005634017 podman[99220]: 2026-02-28 09:35:17.712728222 +0000 UTC m=+0.176363476 container start e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6 (image=quay.io/ceph/ceph:v20, name=great_taussig, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:35:17 np0005634017 podman[99220]: 2026-02-28 09:35:17.716371665 +0000 UTC m=+0.180006889 container attach e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6 (image=quay.io/ceph/ceph:v20, name=great_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:35:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.65025259 +0000 UTC m=+0.040832863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.753967075 +0000 UTC m=+0.144547358 container init 1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.76086618 +0000 UTC m=+0.151446403 container start 1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yalow, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.765001946 +0000 UTC m=+0.155582169 container attach 1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yalow, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:35:17 np0005634017 clever_yalow[99270]: 167 167
Feb 28 04:35:17 np0005634017 systemd[1]: libpod-1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388.scope: Deactivated successfully.
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.766488738 +0000 UTC m=+0.157068961 container died 1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:35:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8aa1d96db7649c31339d9ff75c127d0d2c9620b1156bfdd3252dc7a1de6c0f50-merged.mount: Deactivated successfully.
Feb 28 04:35:17 np0005634017 podman[99246]: 2026-02-28 09:35:17.808201245 +0000 UTC m=+0.198781448 container remove 1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:17 np0005634017 systemd[1]: libpod-conmon-1bf61a17bc0d353c8d24d0849122dc8f39e22481bcde80fd44273150eb4bd388.scope: Deactivated successfully.
Feb 28 04:35:17 np0005634017 podman[99313]: 2026-02-28 09:35:17.987584494 +0000 UTC m=+0.046986566 container create 2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tesla, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:35:18 np0005634017 systemd[1]: Started libpod-conmon-2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08.scope.
Feb 28 04:35:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:35:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee66bf5cbc8a9f9638ab368d477ff420759c041867163380b69dfefe8cc7bf8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee66bf5cbc8a9f9638ab368d477ff420759c041867163380b69dfefe8cc7bf8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee66bf5cbc8a9f9638ab368d477ff420759c041867163380b69dfefe8cc7bf8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee66bf5cbc8a9f9638ab368d477ff420759c041867163380b69dfefe8cc7bf8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:35:18 np0005634017 podman[99313]: 2026-02-28 09:35:17.968234138 +0000 UTC m=+0.027636240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:35:18 np0005634017 podman[99313]: 2026-02-28 09:35:18.082560303 +0000 UTC m=+0.141962405 container init 2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:35:18 np0005634017 podman[99313]: 2026-02-28 09:35:18.088657295 +0000 UTC m=+0.148059407 container start 2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:35:18 np0005634017 podman[99313]: 2026-02-28 09:35:18.09311489 +0000 UTC m=+0.152517062 container attach 2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820296052' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb 28 04:35:18 np0005634017 great_taussig[99264]: 
Feb 28 04:35:18 np0005634017 great_taussig[99264]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb 28 04:35:18 np0005634017 systemd[1]: libpod-e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6.scope: Deactivated successfully.
Feb 28 04:35:18 np0005634017 podman[99220]: 2026-02-28 09:35:18.243350438 +0000 UTC m=+0.706985652 container died e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6 (image=quay.io/ceph/ceph:v20, name=great_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:35:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-55eaf494a51f2de379b8b1df16fc6c1f728db06f6bd482a9dc7f5e532133ca92-merged.mount: Deactivated successfully.
Feb 28 04:35:18 np0005634017 podman[99220]: 2026-02-28 09:35:18.280956439 +0000 UTC m=+0.744591653 container remove e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6 (image=quay.io/ceph/ceph:v20, name=great_taussig, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:35:18 np0005634017 systemd[1]: libpod-conmon-e9ff773867299b7af28530681eb3fd118b4762abe3dd06f0f926375e0a651ee6.scope: Deactivated successfully.
Feb 28 04:35:18 np0005634017 lvm[99420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:35:18 np0005634017 lvm[99421]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:35:18 np0005634017 lvm[99420]: VG ceph_vg0 finished
Feb 28 04:35:18 np0005634017 lvm[99421]: VG ceph_vg1 finished
Feb 28 04:35:18 np0005634017 lvm[99423]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:35:18 np0005634017 lvm[99423]: VG ceph_vg2 finished
Feb 28 04:35:18 np0005634017 magical_tesla[99330]: {}
Feb 28 04:35:18 np0005634017 systemd[1]: libpod-2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08.scope: Deactivated successfully.
Feb 28 04:35:18 np0005634017 podman[99313]: 2026-02-28 09:35:18.857634915 +0000 UTC m=+0.917037057 container died 2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tesla, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 04:35:18 np0005634017 systemd[1]: libpod-2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08.scope: Consumed 1.199s CPU time.
Feb 28 04:35:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ee66bf5cbc8a9f9638ab368d477ff420759c041867163380b69dfefe8cc7bf8f-merged.mount: Deactivated successfully.
Feb 28 04:35:18 np0005634017 podman[99313]: 2026-02-28 09:35:18.909354904 +0000 UTC m=+0.968757006 container remove 2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tesla, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 04:35:18 np0005634017 systemd[1]: libpod-conmon-2993d366ad3de6c8f1a72c9291e02c19a8a94137de72a7c4de11851b5a20ce08.scope: Deactivated successfully.
Feb 28 04:35:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.0 KiB/s wr, 177 op/s
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 6.8 KiB/s wr, 151 op/s
Feb 28 04:35:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.2 KiB/s wr, 137 op/s
Feb 28 04:35:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 28 04:35:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:35:28
Feb 28 04:35:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:35:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:35:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Feb 28 04:35:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:35:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.239963546149812e-07 of space, bias 4.0, pg target 0.0008687956255379774 quantized to 16 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb 28 04:35:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:35:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb 28 04:35:31 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev b15792d7-64ee-42bd-a6d9-bc828a1611ea (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb 28 04:35:32 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 649fdc54-b715-4fe3-bc45-f71633efd505 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb 28 04:35:33 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev ba95d5b1-d410-489e-ab54-204de0f8d664 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:33 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=41 pruub=14.702878952s) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active pruub 78.962623596s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:33 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=41 pruub=14.702878952s) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown pruub 78.962623596s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb 28 04:35:34 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev fd4bdc2a-186e-4f80-b6e5-afc90aec7fa0 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=17/18 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=41/42 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=17/17 les/c/f=18/18/0 sis=41) [2] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=41 pruub=13.229707718s) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active pruub 85.018295288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=41 pruub=13.229707718s) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown pruub 85.018295288s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=17/18 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v96: 73 pgs: 62 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb 28 04:35:35 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev aa298230-fccd-4128-9393-8a01bd2da988 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:35 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=13.688986778s) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active pruub 92.203994751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:35 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=13.688986778s) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown pruub 92.203994751s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=41/43 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=17/17 les/c/f=18/18/0 sis=41) [1] r=0 lpr=41 pi=[17,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:35 np0005634017 ceph-mgr[76610]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb 28 04:35:35 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb 28 04:35:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=13.802740097s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 80.979591370s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=13.802740097s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 80.979591370s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb 28 04:35:36 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev cc34b46a-5b82-43cf-8d38-c92171b63f50 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [0] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v99: 135 pgs: 93 unknown, 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb 28 04:35:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb 28 04:35:37 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev c327eb03-0d1d-4cb4-9995-d7acec49ac74 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 28 04:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:37 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb 28 04:35:37 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 45 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=20/21 n=22 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=13.039505959s) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 31'38 mlcod 31'38 active pruub 94.502609253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 45 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=13.039505959s) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 31'38 mlcod 0'0 unknown pruub 94.502609253s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=13.926069260s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 89.226921082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=13.926069260s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 89.226921082s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb 28 04:35:38 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 4951cdae-99b7-4259-8ec4-a75f89db089e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=20/21 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v102: 181 pgs: 108 unknown, 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 28 04:35:38 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb 28 04:35:39 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 47d697bf-d722-47c7-af09-68b802d343c1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 47 pg[8.0( v 31'6 (0'0,31'6] local-lis/les=30/31 n=6 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=14.120130539s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 31'5 mlcod 31'5 active pruub 90.483428955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 47 pg[9.0( v 38'483 (0'0,38'483] local-lis/les=32/33 n=210 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=8.149884224s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 38'482 mlcod 38'482 active pruub 84.513313293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 47 pg[9.0( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=8.149884224s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 38'482 mlcod 0'0 unknown pruub 84.513313293s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003a4480 space 0x563000980840 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048cd00 space 0x562fff660840 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003e5f80 space 0x562fff652840 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cc100 space 0x563000895a40 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300041dd80 space 0x562fffcc0e40 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x562fffd11080 space 0x562fff710240 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x562ffee1d880 space 0x56300052b440 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300047ba80 space 0x562fffb62840 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442780 space 0x56300052da40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442600 space 0x5630004fdd40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048c980 space 0x562fffb64540 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000289300 space 0x563000813440 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003e5a80 space 0x562fffcda840 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048c200 space 0x562fff6f4240 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cc380 space 0x562fff6fda40 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048cb80 space 0x562fffb64e40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300047ba00 space 0x562fff65ee40 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300039dd00 space 0x562fffc22240 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300047b980 space 0x562fff724840 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000443900 space 0x56300052bd40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300039df00 space 0x562fff725140 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 47 pg[8.0( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=14.120130539s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 31'5 mlcod 0'0 unknown pruub 90.483428955s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048cd80 space 0x562fffb65740 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000422d80 space 0x56300052ab40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048c780 space 0x562fffb63a40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300039db00 space 0x562fff653440 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000443000 space 0x56300052c840 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048d300 space 0x562fffcb9d40 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cc080 space 0x562fffa6fd40 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300027ba80 space 0x562fffcdb440 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048c400 space 0x562fff6f4b40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300039d080 space 0x562fffa6f140 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300027b880 space 0x562fffa6e240 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003e4f80 space 0x562fffd18840 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003ccf00 space 0x562fff70b740 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442e00 space 0x56300052d140 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442200 space 0x562fffc92b40 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300047b680 space 0x5630007c2e40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442180 space 0x562fffc19a40 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cd700 space 0x563000980240 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cd900 space 0x562fffb3c540 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442900 space 0x5630004fc840 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048c580 space 0x562fffb63140 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cce80 space 0x562fff65fd40 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000423100 space 0x56300052a240 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048d000 space 0x562fff65e540 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003e4f00 space 0x562fff65f440 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cd800 space 0x562fff6fc840 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000288580 space 0x563000812b40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000443d00 space 0x562fff653a40 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300039d980 space 0x562fffc19140 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003e5f00 space 0x562fffc23740 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048cf80 space 0x562fff725d40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003cc200 space 0x562fffcb8540 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000443700 space 0x5630007c2540 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048c800 space 0x563000894840 0x0~9a clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003a4580 space 0x562fffa6eb40 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x563000442b00 space 0x562fffb5c240 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x5630003e5600 space 0x562fff6fd140 0x0~98 clean)
Feb 28 04:35:39 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563000478480) split_cache   moving buffer(0x56300048d180 space 0x562fffc18840 0x0~6e clean)
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.14( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] update: starting ev 9ed0e174-d172-4b28-b818-b5f9e480e4d5 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.15( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.15( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.14( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.16( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.16( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.17( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.10( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.11( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.17( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.10( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.11( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev b15792d7-64ee-42bd-a6d9-bc828a1611ea (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.12( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event b15792d7-64ee-42bd-a6d9-bc828a1611ea (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 649fdc54-b715-4fe3-bc45-f71633efd505 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 649fdc54-b715-4fe3-bc45-f71633efd505 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev ba95d5b1-d410-489e-ab54-204de0f8d664 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event ba95d5b1-d410-489e-ab54-204de0f8d664 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev fd4bdc2a-186e-4f80-b6e5-afc90aec7fa0 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event fd4bdc2a-186e-4f80-b6e5-afc90aec7fa0 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev aa298230-fccd-4128-9393-8a01bd2da988 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event aa298230-fccd-4128-9393-8a01bd2da988 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev cc34b46a-5b82-43cf-8d38-c92171b63f50 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event cc34b46a-5b82-43cf-8d38-c92171b63f50 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev c327eb03-0d1d-4cb4-9995-d7acec49ac74 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event c327eb03-0d1d-4cb4-9995-d7acec49ac74 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 4951cdae-99b7-4259-8ec4-a75f89db089e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 4951cdae-99b7-4259-8ec4-a75f89db089e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 47d697bf-d722-47c7-af09-68b802d343c1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 47d697bf-d722-47c7-af09-68b802d343c1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] complete: finished ev 9ed0e174-d172-4b28-b818-b5f9e480e4d5 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event 9ed0e174-d172-4b28-b818-b5f9e480e4d5 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.13( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.13( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.12( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.c( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.d( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.d( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.c( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.8( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.9( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.a( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.b( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.3( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1( v 31'6 (0'0,31'6] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.e( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.b( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.2( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.a( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.9( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.8( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.2( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.3( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.7( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.6( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.7( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.5( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.4( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.4( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1b( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.5( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1a( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.6( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1a( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1b( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.19( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.18( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.18( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.19( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1e( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1d( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1c( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1d( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1c( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.14( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.16( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.17( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.10( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.13( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.12( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.a( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.8( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.0( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 38'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.3( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.0( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 31'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.e( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.2( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.7( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.a( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.5( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.4( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1a( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.5( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.18( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.19( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1e( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1c( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 48 pg[8.1e( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: [progress INFO root] Writing back 15 completed events
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v105: 243 pgs: 2 peering, 77 unknown, 164 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb 28 04:35:41 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.299162865s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active pruub 88.789665222s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:41 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.299162865s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown pruub 88.789665222s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 28 04:35:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb 28 04:35:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb 28 04:35:42 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 2 peering, 124 unknown, 179 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 49 pg[10.0( v 38'18 (0'0,38'18] local-lis/les=34/35 n=9 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.276808739s) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 38'17 mlcod 38'17 active pruub 88.454421997s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.0( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.276808739s) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 38'17 mlcod 0'0 unknown pruub 88.454421997s@ mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.2( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.3( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.4( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.5( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.6( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.7( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.8( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.9( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.a( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.b( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.c( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.e( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.d( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.f( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.10( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.11( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.12( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.13( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.15( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.14( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.16( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.17( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.18( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.19( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1a( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1b( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1c( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1d( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1e( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 50 pg[10.1f( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb 28 04:35:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb 28 04:35:43 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1d( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.12( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1b( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1c( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1f( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.18( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.5( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.3( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.0( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 38'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.a( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.9( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.c( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.d( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.e( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.14( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 51 pg[10.15( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:43 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb 28 04:35:44 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb 28 04:35:44 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb 28 04:35:44 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb 28 04:35:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:44 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb 28 04:35:44 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Feb 28 04:35:45 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb 28 04:35:45 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb 28 04:35:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb 28 04:35:47 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.783399582s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.534118652s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.769900322s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.520675659s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774107933s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.524902344s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774498940s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525329590s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.783322334s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534118652s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.769843102s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.520675659s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774061203s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.524902344s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774459839s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525329590s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.783163071s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.534370422s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774022102s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525299072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.783065796s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534370422s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773981094s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525299072s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773892403s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525283813s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773851395s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525283813s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773756027s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525344849s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782875061s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.534500122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773723602s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525344849s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782836914s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534500122s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773635864s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525367737s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773601532s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525367737s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773710251s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525558472s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773664474s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525558472s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782584190s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.534523010s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782546043s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534523010s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782623291s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.534736633s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773530006s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525680542s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782664299s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.534843445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782563210s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534736633s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773492813s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525680542s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773061752s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525375366s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.773018837s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525375366s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782437325s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534843445s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782122612s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.535125732s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.782080650s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.535125732s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.772220612s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525566101s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.772188187s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525566101s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771763802s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525299072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771711349s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525299072s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.772055626s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525680542s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771840096s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525482178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.772020340s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525680542s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771800995s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525482178s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.781386375s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 active pruub 105.535118103s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771926880s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525672913s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771866798s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525672913s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.781327248s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.535118103s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771845818s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.525779724s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774709702s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.528671265s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771720886s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.525779724s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.774655342s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.528671265s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771856308s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.526031494s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771770477s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.526107788s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771708488s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.526031494s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.771664619s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.526107788s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.770936012s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 103.526115417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.770854950s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 103.526115417s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.12( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.958415031s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 active pruub 90.488342285s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.12( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.958386421s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 90.488342285s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.763708115s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293792725s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.763698578s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293792725s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.759579659s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.289695740s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954502106s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.484703064s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.759536743s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.289695740s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954490662s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.484703064s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.958120346s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.488342285s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.748187065s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278442383s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.958068848s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.488342285s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.748145103s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278442383s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.748028755s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278388977s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.748017311s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278388977s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.747792244s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278251648s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.747780800s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278251648s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.747685432s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278297424s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.958632469s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489257812s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762701988s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293334961s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762659073s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293334961s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.747654915s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278297424s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.958592415s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489257812s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746982574s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278213501s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746965408s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278213501s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762361526s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293670654s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762269974s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293609619s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762331963s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293670654s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762222290s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293609619s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762303352s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293739319s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746914864s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278366089s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762285233s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293739319s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746878624s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278366089s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762223244s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293853760s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762203217s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293853760s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956740379s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.488464355s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746468544s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278228760s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762029648s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293807983s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.762016296s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293807983s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956697464s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.488464355s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746434212s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278228760s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956554413s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.488471985s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956543922s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.488471985s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956566811s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.488616943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746118546s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278182983s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956549644s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.488616943s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956304550s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.488426208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.746074677s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278182983s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761674881s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293861389s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745930672s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278129578s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956269264s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.488426208s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745916367s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278129578s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761642456s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293861389s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745737076s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278121948s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745723724s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278121948s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956568718s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.488967896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761522293s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293945312s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761499405s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293945312s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956795692s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489250183s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956525803s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.488967896s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956767082s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489250183s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761397362s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293914795s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761383057s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293914795s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745318413s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278106689s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761119843s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.293952942s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745173454s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.278007507s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.761103630s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.293952942s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745286942s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278106689s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.745141983s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.278007507s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.9( v 51'19 (0'0,51'19] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956398010s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 active pruub 90.489341736s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.9( v 51'19 (0'0,51'19] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956380844s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 90.489341736s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.760897636s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.294021606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.760882378s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.294021606s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.744716644s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277885437s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.744689941s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277885437s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956125259s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489372253s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956110954s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489372253s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.760639191s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.294013977s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.760623932s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.294013977s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.744486809s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277893066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.744445801s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277893066s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.744247437s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277786255s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.744231224s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277786255s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.d( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955664635s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 active pruub 90.489433289s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.766160965s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.299957275s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.d( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955636024s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 90.489433289s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.766128540s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.299957275s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.766000748s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.299949646s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.765985489s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.299949646s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743951797s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277938843s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.e( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955466270s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 active pruub 90.489471436s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743912697s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277938843s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.e( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955430031s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 90.489471436s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743814468s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277885437s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743800163s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277885437s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.759788513s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.294029236s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955436707s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489692688s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.759752274s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.294029236s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955416679s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489692688s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743319511s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277809143s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954956055s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489479065s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743291855s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277809143s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954935074s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489479065s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743240356s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277900696s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955047607s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489776611s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.743205070s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277900696s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.955025673s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489776611s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.742658615s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277496338s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.742623329s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277496338s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.14( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954785347s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 active pruub 90.489715576s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.14( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954757690s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 90.489715576s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.742762566s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277748108s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.742731094s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277748108s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.764997482s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.300079346s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.764979362s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.300079346s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.15( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954909325s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 active pruub 90.490028381s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954502106s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489761353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954483986s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489761353s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.742251396s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.277526855s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.15( v 51'19 (0'0,51'19] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954855919s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 90.490028381s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.742219925s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.277526855s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.764618874s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.300033569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.764597893s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.300033569s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.764446259s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 91.300025940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.737990379s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 89.273529053s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=12.764433861s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 91.300025940s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954150200s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489761353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52 pruub=10.737918854s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 89.273529053s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.954108238s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489761353s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.956632614s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 active pruub 90.489288330s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=11.950804710s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 90.489288330s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.912378311s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514495850s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.912034988s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514495850s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.769664764s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.372261047s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.769610405s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.372261047s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.766106606s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.368896484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.766061783s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.368896484s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.771759033s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.374839783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.771686554s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.374839783s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.763907433s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.368728638s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.746768951s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351684570s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.769911766s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.374832153s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.744575500s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.349571228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.746652603s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351684570s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.769825935s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.374832153s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.744517326s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.349571228s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.746049881s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351486206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.746003151s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351486206s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.763056755s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.368728638s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.769462585s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.375198364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.769395828s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.375198364s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.762376785s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.368675232s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.762321472s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.368675232s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.908194542s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514663696s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.908158302s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514663696s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.745053291s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351402283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.768411636s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.375068665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.768370628s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.375068665s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.907721519s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514694214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.907694817s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514694214s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.767881393s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.375091553s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.767672539s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.375091553s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.906492233s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514709473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.906213760s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514526367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.905986786s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514526367s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.766353607s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.375236511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.766320229s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.375236511s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.766086578s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.375221252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.766137123s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.375289917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.766112328s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.375289917s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.766035080s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.375221252s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.905316353s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514717102s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.905271530s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514717102s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.758274078s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367996216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.758255959s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367996216s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.742008209s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351486206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.744120598s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351402283s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.741502762s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351486206s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.758152008s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.368278503s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.758102417s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.368278503s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.904510498s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514770508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.757165909s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367515564s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.757095337s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367515564s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.906457901s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514709473s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.764634132s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.375297546s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.904471397s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514770508s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.740815163s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351524353s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.764629364s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.375366211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.740771294s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351524353s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.764580727s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.375366211s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.903098106s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514755249s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.903058052s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514755249s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.756084442s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367797852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.739786148s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351478577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.756012917s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367797852s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.739630699s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351478577s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.763350487s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.375389099s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.763319969s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.375389099s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.902549744s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514755249s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.902519226s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514755249s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.755256653s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367515564s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.739156723s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351463318s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.755211830s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367515564s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.762957573s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.375427246s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.764589310s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.375297546s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.764748573s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.377265930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.762928009s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.375427246s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.764717102s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.377265930s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.902060509s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514778137s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.739057541s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351463318s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.902001381s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514778137s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.738123894s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351509094s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.738077164s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351509094s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.761004448s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.375442505s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.760962486s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.375442505s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.752699852s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367408752s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.752668381s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367408752s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.736981392s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352020264s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.736957550s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352020264s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.899717331s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514846802s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.762227058s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.377372742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.899680138s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514846802s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.762182236s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.377372742s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.736246109s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351516724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.752197266s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367507935s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.752168655s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367507935s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.899430275s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.514862061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.736219406s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351516724s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.899400711s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.514862061s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.751401901s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367401123s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.735456467s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351516724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.901679039s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.517784119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.735416412s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351516724s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.901577950s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.517784119s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.751370430s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367401123s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.750930786s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367431641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.750896454s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367431641s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.760308266s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.377349854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.761063576s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.378143311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.760276794s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.377349854s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.761041641s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.378143311s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.899920464s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.517913818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.899867058s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.517913818s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.759981155s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.378112793s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.749182701s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367362976s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.759882927s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.378227234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.748452187s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.367370605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.748193741s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367370605s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.759000778s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.378112793s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.898180962s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.517959595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.747342110s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.367362976s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.731972694s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352027893s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.758238792s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.378227234s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.731938362s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352027893s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.757833481s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.378196716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.897572517s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.517959595s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.757832527s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.378265381s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.757802010s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.378196716s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.757794380s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.378265381s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.897500038s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518165588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.897479057s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518165588s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.730032921s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351921082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.744861603s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.366783142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.729967117s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351921082s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.744802475s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.366783142s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.744421005s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.366806030s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.744394302s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.366806030s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.895401001s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518089294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.729223251s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.351921082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.743693352s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.366409302s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.755604744s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.378311157s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.895359039s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518089294s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.729199409s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.351921082s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.743647575s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.366409302s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.755538940s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.378311157s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.755273819s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.378356934s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.755241394s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.378356934s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.727254868s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352005005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.754353523s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.379119873s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.5( v 48'484 (0'0,48'484] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.754420280s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 38'483 active pruub 93.379203796s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.727208138s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352005005s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.893372536s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518188477s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.754315376s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.379119873s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.5( v 48'484 (0'0,48'484] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.754368782s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 38'483 unknown NOTIFY pruub 93.379203796s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.893334389s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518188477s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.754487038s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.379592896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.754467964s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.379592896s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.892973900s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518157959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.726854324s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352149963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.726831436s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352149963s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.892954826s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518157959s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.740895271s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.366279602s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.740626335s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.366279602s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.892253876s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518211365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.892230034s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518211365s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.753290176s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.379348755s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.753265381s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.379348755s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.753515244s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.379844666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.753480911s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.379844666s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.724345207s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352020264s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.890315056s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518226624s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.890250206s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518226624s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.751293182s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.379470825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.724315643s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352020264s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.751084328s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.379493713s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.751049042s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.379493713s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.889702797s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518264771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.750947952s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.379554749s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.889670372s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518264771s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.750925064s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.379554749s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.750828743s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.379570007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.732895851s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.361663818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.732878685s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.361663818s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.723264694s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352172852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.723250389s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352172852s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.888924599s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518287659s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.888902664s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518287659s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.751271248s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.379470825s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.722599030s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352142334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.722582817s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352142334s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.749941826s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.379608154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.749918938s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.379608154s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.750800133s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.379570007s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.749501228s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 active pruub 93.379684448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.749468803s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 93.379684448s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.731350899s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 active pruub 99.361640930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=14.731308937s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 unknown NOTIFY pruub 99.361640930s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.887643814s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 active pruub 95.518310547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=10.887047768s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 unknown NOTIFY pruub 95.518310547s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.720478058s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 active pruub 96.352371216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=41/43 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52 pruub=11.720452309s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 unknown NOTIFY pruub 96.352371216s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.748473167s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 active pruub 93.380477905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=8.747925758s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 93.380477905s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=0/0 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 52 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 52 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 28 04:35:47 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb 28 04:35:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.5( v 48'484 (0'0,48'484] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.5( v 48'484 (0'0,48'484] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.9( v 51'19 lc 35'8 (0'0,51'19] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.15( v 51'19 lc 35'3 (0'0,51'19] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.6( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.9( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.d( v 51'19 lc 35'5 (0'0,51'19] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.e( v 51'19 lc 35'4 (0'0,51'19] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 lc 31'6 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.7( v 33'39 lc 31'11 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.f( v 33'39 lc 31'1 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 lc 31'7 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.14( v 51'19 lc 35'7 (0'0,51'19] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.12( v 51'19 lc 38'17 (0'0,51'19] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=52/53 n=0 ec=41/17 lis/c=41/41 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 53 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=52/53 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=52/53 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 53 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 28 04:35:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 28 04:35:49 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.619878769s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 active pruub 105.528945923s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.619830132s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.528945923s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.624801636s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 active pruub 105.534500122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.624782562s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.534500122s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.629286766s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 active pruub 105.539054871s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.629229546s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 active pruub 105.539039612s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.629251480s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.539054871s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 54 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.629174232s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 105.539039612s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.5( v 48'484 (0'0,48'484] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=48'484 lcod 38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 54 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:49 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb 28 04:35:49 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb 28 04:35:50 np0005634017 ceph-mgr[76610]: [progress INFO root] Completed event d75b786b-9bc7-4c59-a9b2-4394be37c723 (Global Recovery Event) in 15 seconds
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.115384102s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.744071960s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.108209610s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.736907959s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.115320206s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744071960s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.108099937s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.736907959s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.114882469s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.744163513s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.114943504s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.744239807s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.114891052s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744239807s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.114805222s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744163513s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.113562584s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.743621826s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.114029884s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.744285583s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.113984108s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744285583s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.113448143s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.743621826s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.106306076s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.736961365s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.106267929s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.736961365s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.113107681s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 active pruub 102.744003296s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.113042831s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744003296s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 55 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=54/55 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=54/55 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[6.e( v 33'39 lc 31'10 (0'0,33'39] local-lis/les=54/55 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 55 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 16 activating+remapped, 289 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 101/249 objects misplaced (40.562%); 201 B/s, 2 keys/s, 3 objects/s recovering
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb 28 04:35:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb 28 04:35:51 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.489897728s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 active pruub 102.744415283s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.489760399s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744415283s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.489588737s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 active pruub 102.744354248s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.489485741s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744354248s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.489019394s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 active pruub 102.744499207s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.488857269s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744499207s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.488323212s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 active pruub 102.744575500s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.488271713s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.744575500s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.487254143s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 active pruub 102.743995667s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 56 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.487132072s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.743995667s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:51 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb 28 04:35:51 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb 28 04:35:51 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb 28 04:35:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb 28 04:35:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb 28 04:35:52 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.5( v 54'485 (0'0,54'485] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 pct=0'0 crt=48'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.5( v 54'485 (0'0,54'485] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=48'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 57 pg[9.5( v 54'485 (0'0,54'485] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57 pruub=13.468852043s) [0] async=[0] r=-1 lpr=57 pi=[47,57)/1 crt=48'484 lcod 48'484 active pruub 102.744338989s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 57 pg[9.5( v 54'485 (0'0,54'485] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57 pruub=13.468603134s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=48'484 lcod 48'484 unknown NOTIFY pruub 102.744338989s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 57 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57 pruub=13.467612267s) [0] async=[0] r=-1 lpr=57 pi=[47,57)/1 crt=38'483 lcod 0'0 active pruub 102.743873596s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 57 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57 pruub=13.467513084s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.743873596s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:35:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 57 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57 pruub=13.466193199s) [0] async=[0] r=-1 lpr=57 pi=[47,57)/1 crt=38'483 lcod 0'0 active pruub 102.743667603s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:35:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 57 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57 pruub=13.466046333s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 102.743667603s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 57 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb 28 04:35:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb 28 04:35:52 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb 28 04:35:52 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb 28 04:35:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Feb 28 04:35:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb 28 04:35:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb 28 04:35:53 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb 28 04:35:53 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 58 pg[9.5( v 54'485 (0'0,54'485] local-lis/les=57/58 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=54'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:53 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 58 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=57/58 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:53 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 58 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=57/58 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:35:53 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb 28 04:35:53 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb 28 04:35:53 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb 28 04:35:53 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb 28 04:35:54 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb 28 04:35:54 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb 28 04:35:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s, 24 objects/s recovering
Feb 28 04:35:55 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb 28 04:35:55 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb 28 04:35:55 np0005634017 ceph-mgr[76610]: [progress INFO root] Writing back 16 completed events
Feb 28 04:35:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 28 04:35:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:35:55 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb 28 04:35:55 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb 28 04:35:55 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb 28 04:35:55 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb 28 04:35:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:35:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 905 B/s, 18 objects/s recovering
Feb 28 04:35:57 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb 28 04:35:58 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb 28 04:35:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb 28 04:35:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb 28 04:35:58 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb 28 04:35:58 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb 28 04:35:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 691 B/s, 14 objects/s recovering
Feb 28 04:35:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 28 04:35:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 28 04:35:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 28 04:35:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 28 04:35:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 28 04:35:59 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb 28 04:35:59 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb 28 04:36:00 np0005634017 python3[99513]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.294278837 +0000 UTC m=+0.069246737 container create 6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb 28 04:36:00 np0005634017 systemd[77703]: Starting Mark boot as successful...
Feb 28 04:36:00 np0005634017 systemd[77703]: Finished Mark boot as successful.
Feb 28 04:36:00 np0005634017 systemd[1]: Started libpod-conmon-6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7.scope.
Feb 28 04:36:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880bcaae5cbdbd7b7f743873de74fc1e2a800e5386971ebf5c25cf9f7dedf052/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880bcaae5cbdbd7b7f743873de74fc1e2a800e5386971ebf5c25cf9f7dedf052/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.264299559 +0000 UTC m=+0.039267569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.371617908 +0000 UTC m=+0.146585838 container init 6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.382938676 +0000 UTC m=+0.157906616 container start 6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.389447815 +0000 UTC m=+0.164415745 container attach 6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.988523483s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 active pruub 109.602798462s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.988333702s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 active pruub 109.602684021s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.988243103s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 unknown NOTIFY pruub 109.602684021s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.988040924s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 active pruub 109.602607727s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.987517357s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 active pruub 109.602325439s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.987418175s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 unknown NOTIFY pruub 109.602325439s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.987983704s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 unknown NOTIFY pruub 109.602607727s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:00 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 59 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=11.987764359s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=33'39 unknown NOTIFY pruub 109.602798462s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 60 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 60 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=59/60 n=2 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 60 pg[6.f( v 33'39 lc 31'1 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 60 pg[6.7( v 33'39 lc 31'11 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:00 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb 28 04:36:00 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb 28 04:36:00 np0005634017 fervent_chaplygin[99530]: could not fetch user info: no user info saved
Feb 28 04:36:00 np0005634017 systemd[1]: libpod-6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7.scope: Deactivated successfully.
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.696567484 +0000 UTC m=+0.471535404 container died 6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:36:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-880bcaae5cbdbd7b7f743873de74fc1e2a800e5386971ebf5c25cf9f7dedf052-merged.mount: Deactivated successfully.
Feb 28 04:36:00 np0005634017 podman[99514]: 2026-02-28 09:36:00.763303037 +0000 UTC m=+0.538270937 container remove 6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 04:36:00 np0005634017 systemd[1]: libpod-conmon-6e1222633643a21c8ce16ff1f91de71f1b2aeee9938dc0f3dbf232c4b9c1a3b7.scope: Deactivated successfully.
Feb 28 04:36:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 28 04:36:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 28 04:36:01 np0005634017 python3[99655]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.118354774 +0000 UTC m=+0.049873666 container create 840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406 (image=quay.io/ceph/ceph:v20, name=heuristic_diffie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:36:01 np0005634017 systemd[1]: Started libpod-conmon-840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406.scope.
Feb 28 04:36:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71ef3b4042c1668a5ed2a36cff67ef1c90d05d2ba5e8eb8f9101c9d381156b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71ef3b4042c1668a5ed2a36cff67ef1c90d05d2ba5e8eb8f9101c9d381156b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.092263548 +0000 UTC m=+0.023782490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.1962012 +0000 UTC m=+0.127720112 container init 840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406 (image=quay.io/ceph/ceph:v20, name=heuristic_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.200475384 +0000 UTC m=+0.131994276 container start 840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406 (image=quay.io/ceph/ceph:v20, name=heuristic_diffie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.207147997 +0000 UTC m=+0.138666909 container attach 840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406 (image=quay.io/ceph/ceph:v20, name=heuristic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]: {
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "user_id": "openstack",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "display_name": "openstack",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "email": "",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "suspended": 0,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "max_buckets": 1000,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "subusers": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "keys": [
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        {
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:            "user": "openstack",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:            "access_key": "RSBFE32NHLIPDS6568V9",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:            "secret_key": "HT3ausCEqTqWXgHtDJ5CAOqtH73kMBxbZexkMBpo",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:            "active": true,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:            "create_date": "2026-02-28T09:36:01.385196Z"
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        }
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    ],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "swift_keys": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "caps": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "op_mask": "read, write, delete",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "default_placement": "",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "default_storage_class": "",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "placement_tags": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "bucket_quota": {
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "enabled": false,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "check_on_raw": false,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "max_size": -1,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "max_size_kb": 0,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "max_objects": -1
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    },
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "user_quota": {
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "enabled": false,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "check_on_raw": false,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "max_size": -1,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "max_size_kb": 0,
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:        "max_objects": -1
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    },
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "temp_url_keys": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "type": "rgw",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "mfa_ids": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "account_id": "",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "path": "/",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "create_date": "2026-02-28T09:36:01.384795Z",
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "tags": [],
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]:    "group_ids": []
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]: }
Feb 28 04:36:01 np0005634017 heuristic_diffie[99671]: 
Feb 28 04:36:01 np0005634017 systemd[1]: libpod-840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406.scope: Deactivated successfully.
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.423005702 +0000 UTC m=+0.354524604 container died 840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406 (image=quay.io/ceph/ceph:v20, name=heuristic_diffie, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb 28 04:36:01 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b71ef3b4042c1668a5ed2a36cff67ef1c90d05d2ba5e8eb8f9101c9d381156b1-merged.mount: Deactivated successfully.
Feb 28 04:36:01 np0005634017 podman[99656]: 2026-02-28 09:36:01.458232302 +0000 UTC m=+0.389751224 container remove 840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406 (image=quay.io/ceph/ceph:v20, name=heuristic_diffie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:36:01 np0005634017 systemd[1]: libpod-conmon-840e623d958bcdb2975a0ce27f4ceafae2fd810abd63a03d62a4be9941d21406.scope: Deactivated successfully.
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 28 04:36:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 28 04:36:01 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 61 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.266051292s) [1] r=-1 lpr=61 pi=[45,61)/1 crt=33'39 lcod 0'0 active pruub 113.534484863s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:01 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 61 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.265581131s) [1] r=-1 lpr=61 pi=[45,61)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 113.534484863s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:01 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 61 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61) [1] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:01 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 61 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.265041351s) [1] r=-1 lpr=61 pi=[45,61)/1 crt=33'39 lcod 0'0 active pruub 113.535461426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:01 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 61 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.264995575s) [1] r=-1 lpr=61 pi=[45,61)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 113.535461426s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:01 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 61 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61) [1] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:02 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb 28 04:36:02 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb 28 04:36:02 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 62 pg[6.4( v 33'39 lc 31'8 (0'0,33'39] local-lis/les=61/62 n=2 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61) [1] r=0 lpr=61 pi=[45,61)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 28 04:36:02 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 62 pg[6.c( v 33'39 lc 31'9 (0'0,33'39] local-lis/les=61/62 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=61) [1] r=0 lpr=61 pi=[45,61)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 29 op/s; 161 B/s, 2 keys/s, 1 objects/s recovering
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 28 04:36:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb 28 04:36:03 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 63 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63 pruub=8.789766312s) [0] r=-1 lpr=63 pi=[52,63)/1 crt=33'39 active pruub 109.602752686s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:03 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 63 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63 pruub=8.789725304s) [0] r=-1 lpr=63 pi=[52,63)/1 crt=33'39 unknown NOTIFY pruub 109.602752686s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:03 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 63 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63 pruub=8.789265633s) [0] r=-1 lpr=63 pi=[52,63)/1 crt=33'39 active pruub 109.602630615s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:03 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 63 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63 pruub=8.789205551s) [0] r=-1 lpr=63 pi=[52,63)/1 crt=33'39 unknown NOTIFY pruub 109.602630615s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 28 04:36:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 28 04:36:03 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 63 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63) [0] r=0 lpr=63 pi=[52,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:03 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 63 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63) [0] r=0 lpr=63 pi=[52,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:03 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb 28 04:36:03 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb 28 04:36:04 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb 28 04:36:04 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 28 04:36:04 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 64 pg[6.5( v 33'39 lc 31'6 (0'0,33'39] local-lis/les=63/64 n=2 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63) [0] r=0 lpr=63 pi=[52,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:04 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 lc 31'7 (0'0,33'39] local-lis/les=63/64 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=63) [0] r=0 lpr=63 pi=[52,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:04 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb 28 04:36:04 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb 28 04:36:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 70 op/s; 567 B/s, 2 keys/s, 2 objects/s recovering
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 28 04:36:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 28 04:36:05 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb 28 04:36:05 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 28 04:36:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 28 04:36:05 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb 28 04:36:05 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:06 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb 28 04:36:06 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 28 04:36:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 473 B/s wr, 38 op/s; 374 B/s, 1 objects/s recovering
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 28 04:36:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.084120750s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=38'483 lcod 0'0 active pruub 117.375694275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.084049225s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 117.375694275s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2] r=0 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.086674690s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=60'488 lcod 60'488 active pruub 117.378585815s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.086093903s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=60'488 lcod 60'488 unknown NOTIFY pruub 117.378585815s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.086710930s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=38'483 lcod 0'0 active pruub 117.379631042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2] r=0 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2] r=0 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.086627007s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 117.379631042s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.086853027s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=59'484 lcod 59'484 active pruub 117.379981995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 65 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65 pruub=13.086829185s) [2] r=-1 lpr=65 pi=[47,65)/1 crt=59'484 lcod 59'484 unknown NOTIFY pruub 117.379981995s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2] r=0 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.305923462s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=60'484 lcod 60'484 active pruub 126.447395325s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.305915833s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=60'486 lcod 60'486 active pruub 126.447402954s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.305885315s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 126.447395325s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.305852890s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 126.447402954s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.305207253s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=59'484 lcod 59'484 active pruub 126.447395325s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.305184364s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=59'484 lcod 59'484 unknown NOTIFY pruub 126.447395325s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.304389954s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=38'483 active pruub 126.447250366s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.304336548s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=38'483 unknown NOTIFY pruub 126.447250366s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 28 04:36:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=60'488 lcod 60'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=59'484 lcod 59'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=60'488 lcod 60'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=59'484 lcod 59'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:07 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 66 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=59'484 lcod 59'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=55/56 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=59'484 lcod 59'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 67 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=0 lpr=67 pi=[55,67)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[55,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 28 04:36:08 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 67 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=66/67 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:08 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 67 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=66/67 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=60'489 lcod 60'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:08 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 67 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:08 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 67 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=66/67 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=60'485 lcod 59'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 477 B/s wr, 0 op/s; 37 B/s, 0 objects/s recovering
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 28 04:36:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=60'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=60'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 68 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=68 pruub=8.381341934s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=33'39 lcod 0'0 active pruub 121.534622192s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 68 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=68 pruub=8.381299973s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.534622192s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.005438805s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=60'489 lcod 60'488 active pruub 121.985260010s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.005459785s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=38'483 lcod 0'0 active pruub 121.985290527s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.005408287s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 121.985290527s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.005353928s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=60'489 lcod 60'488 unknown NOTIFY pruub 121.985260010s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.005230904s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=60'485 lcod 59'484 active pruub 121.985252380s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.005192757s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=60'485 lcod 59'484 unknown NOTIFY pruub 121.985252380s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=15.000106812s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=38'483 lcod 0'0 active pruub 121.980392456s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.999918938s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 121.980392456s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=10.396856308s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'483 lcod 0'0 active pruub 117.378578186s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=10.397644997s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=60'486 lcod 60'486 active pruub 117.379592896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=10.397607803s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 117.379592896s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 68 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=10.396667480s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 117.378578186s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 68 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=67/68 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[55,67)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 68 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=67/68 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[55,67)/1 crt=60'485 lcod 59'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 68 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=67/68 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[55,67)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:09 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 68 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=67/68 n=7 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[55,67)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 28 04:36:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb 28 04:36:09 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb 28 04:36:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb 28 04:36:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb 28 04:36:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb 28 04:36:10 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 69 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.991271019s) [2] async=[2] r=-1 lpr=69 pi=[55,69)/1 crt=60'485 lcod 60'484 active pruub 129.158584595s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.991213799s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=60'485 lcod 60'484 unknown NOTIFY pruub 129.158584595s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=67/68 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.991197586s) [2] async=[2] r=-1 lpr=69 pi=[55,69)/1 crt=60'487 lcod 60'486 active pruub 129.158599854s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=67/68 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.991067886s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 129.158599854s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=0/0 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=0/0 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=67/68 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.990417480s) [2] async=[2] r=-1 lpr=69 pi=[55,69)/1 crt=60'485 lcod 59'484 active pruub 129.158538818s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=67/68 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.990354538s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=60'485 lcod 59'484 unknown NOTIFY pruub 129.158538818s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:10 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 69 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=0/0 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 69 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 69 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.989809990s) [2] async=[2] r=-1 lpr=69 pi=[55,69)/1 crt=38'483 active pruub 129.158523560s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 69 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69 pruub=14.989743233s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=38'483 unknown NOTIFY pruub 129.158523560s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=0/0 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=68/69 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.e( v 60'489 (0'0,60'489] local-lis/les=68/69 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=60'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=68/69 n=1 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=68/69 n=6 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 69 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=68/69 n=7 ec=47/32 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 2 unknown, 5 peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 6 objects/s recovering
Feb 28 04:36:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb 28 04:36:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb 28 04:36:11 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb 28 04:36:11 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 70 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=69/70 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[47,69)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:11 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 70 pg[9.f( v 60'485 (0'0,60'485] local-lis/les=69/70 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:11 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 70 pg[9.7( v 60'487 (0'0,60'487] local-lis/les=69/70 n=7 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:11 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 70 pg[9.17( v 60'485 (0'0,60'485] local-lis/les=69/70 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:11 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 70 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=69/70 n=6 ec=47/32 lis/c=67/55 les/c/f=68/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:11 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 70 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[47,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:12 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb 28 04:36:12 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb 28 04:36:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb 28 04:36:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb 28 04:36:12 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb 28 04:36:12 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=14.947519302s) [2] async=[2] r=-1 lpr=71 pi=[47,71)/1 crt=38'483 lcod 0'0 active pruub 125.019783020s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:12 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=14.947432518s) [2] r=-1 lpr=71 pi=[47,71)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 125.019783020s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:12 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 71 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=14.940679550s) [2] async=[2] r=-1 lpr=71 pi=[47,71)/1 crt=60'487 lcod 60'486 active pruub 125.013313293s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:12 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 71 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=14.940419197s) [2] r=-1 lpr=71 pi=[47,71)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 125.013313293s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:12 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:12 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:12 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 71 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:12 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 71 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 2 unknown, 5 peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Feb 28 04:36:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb 28 04:36:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb 28 04:36:14 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb 28 04:36:14 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 72 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=71/72 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:14 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 72 pg[9.18( v 60'487 (0'0,60'487] local-lis/les=71/72 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:14 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb 28 04:36:14 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb 28 04:36:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 2 unknown, 5 peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:36:14 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb 28 04:36:14 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb 28 04:36:15 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb 28 04:36:15 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb 28 04:36:15 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb 28 04:36:15 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb 28 04:36:16 np0005634017 systemd-logind[815]: New session 33 of user zuul.
Feb 28 04:36:16 np0005634017 systemd[1]: Started Session 33 of User zuul.
Feb 28 04:36:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:16 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb 28 04:36:16 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb 28 04:36:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 281 B/s, 6 objects/s recovering
Feb 28 04:36:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 28 04:36:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 28 04:36:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 28 04:36:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 28 04:36:17 np0005634017 python3.9[99923]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 28 04:36:17 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 28 04:36:17 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 73 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=10.864187241s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=33'39 lcod 0'0 active pruub 125.603050232s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:17 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 73 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=10.864140511s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 125.603050232s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:17 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb 28 04:36:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb 28 04:36:18 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 74 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=73/74 n=1 ec=45/20 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 28 04:36:18 np0005634017 python3.9[100142]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:36:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 281 B/s, 6 objects/s recovering
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 28 04:36:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 28 04:36:18 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb 28 04:36:18 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:36:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:36:19 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb 28 04:36:19 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.061863814 +0000 UTC m=+0.052402989 container create 287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 04:36:20 np0005634017 systemd[1]: Started libpod-conmon-287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923.scope.
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.035959643 +0000 UTC m=+0.026498798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:36:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.17045459 +0000 UTC m=+0.160993785 container init 287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_matsumoto, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.179263906 +0000 UTC m=+0.169803061 container start 287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:36:20 np0005634017 laughing_matsumoto[100314]: 167 167
Feb 28 04:36:20 np0005634017 systemd[1]: libpod-287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923.scope: Deactivated successfully.
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.188797992 +0000 UTC m=+0.179337137 container attach 287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_matsumoto, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.190503601 +0000 UTC m=+0.181042746 container died 287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:36:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-33ef9c1c03600422964a6f19e841ad46752b25739e0dd299752fcc66773f53f8-merged.mount: Deactivated successfully.
Feb 28 04:36:20 np0005634017 podman[100298]: 2026-02-28 09:36:20.260278223 +0000 UTC m=+0.250817378 container remove 287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 28 04:36:20 np0005634017 systemd[1]: libpod-conmon-287885fd67c77841502d2406c39856265f1186376e6418a7f533fd77ba25f923.scope: Deactivated successfully.
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:36:20 np0005634017 podman[100341]: 2026-02-28 09:36:20.441542935 +0000 UTC m=+0.058838646 container create 9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 28 04:36:20 np0005634017 systemd[1]: Started libpod-conmon-9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3.scope.
Feb 28 04:36:20 np0005634017 podman[100341]: 2026-02-28 09:36:20.414669677 +0000 UTC m=+0.031965398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:36:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38244ad08fd589f32a88cb9542d8f741da88b8882268c52f2ba0dc367e653b2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38244ad08fd589f32a88cb9542d8f741da88b8882268c52f2ba0dc367e653b2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38244ad08fd589f32a88cb9542d8f741da88b8882268c52f2ba0dc367e653b2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38244ad08fd589f32a88cb9542d8f741da88b8882268c52f2ba0dc367e653b2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38244ad08fd589f32a88cb9542d8f741da88b8882268c52f2ba0dc367e653b2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:20 np0005634017 podman[100341]: 2026-02-28 09:36:20.56424055 +0000 UTC m=+0.181536291 container init 9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaum, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 04:36:20 np0005634017 podman[100341]: 2026-02-28 09:36:20.573779387 +0000 UTC m=+0.191075098 container start 9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 04:36:20 np0005634017 podman[100341]: 2026-02-28 09:36:20.580025008 +0000 UTC m=+0.197320769 container attach 9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:36:20 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 75 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=45/20 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=9.832005501s) [0] r=-1 lpr=75 pi=[54,75)/1 crt=33'39 lcod 0'0 active pruub 127.638763428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:20 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 75 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=45/20 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=9.831966400s) [0] r=-1 lpr=75 pi=[54,75)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 127.638763428s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:20 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 75 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=54/54 les/c/f=55/55/0 sis=75) [0] r=0 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:20 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb 28 04:36:20 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb 28 04:36:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 511 B/s wr, 16 op/s; 281 B/s, 6 objects/s recovering
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 28 04:36:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 28 04:36:21 np0005634017 trusting_chaum[100358]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:36:21 np0005634017 trusting_chaum[100358]: --> All data devices are unavailable
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:21 np0005634017 systemd[1]: libpod-9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3.scope: Deactivated successfully.
Feb 28 04:36:21 np0005634017 podman[100341]: 2026-02-28 09:36:21.085605417 +0000 UTC m=+0.702901098 container died 9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:36:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-38244ad08fd589f32a88cb9542d8f741da88b8882268c52f2ba0dc367e653b2d-merged.mount: Deactivated successfully.
Feb 28 04:36:21 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb 28 04:36:21 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb 28 04:36:21 np0005634017 podman[100341]: 2026-02-28 09:36:21.458268385 +0000 UTC m=+1.075564086 container remove 9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaum, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 28 04:36:21 np0005634017 systemd[1]: libpod-conmon-9386afbcca3e69e9c45df4ce0716b068034c0d0b5d7b5dd0bb5f750beeb6c0e3.scope: Deactivated successfully.
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb 28 04:36:21 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb 28 04:36:21 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb 28 04:36:21 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 76 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=76 pruub=10.918788910s) [1] r=-1 lpr=76 pi=[59,76)/1 crt=33'39 active pruub 135.944458008s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:21 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 76 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=76 pruub=10.918711662s) [1] r=-1 lpr=76 pi=[59,76)/1 crt=33'39 unknown NOTIFY pruub 135.944458008s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:21 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb 28 04:36:21 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 76 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=75/76 n=1 ec=45/20 lis/c=54/54 les/c/f=55/55/0 sis=75) [0] r=0 lpr=75 pi=[54,75)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:21 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 76 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=76) [1] r=0 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:21 np0005634017 podman[100454]: 2026-02-28 09:36:21.90365855 +0000 UTC m=+0.038328031 container create 60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jackson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:36:21 np0005634017 systemd[1]: Started libpod-conmon-60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3.scope.
Feb 28 04:36:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:21 np0005634017 podman[100454]: 2026-02-28 09:36:21.885860455 +0000 UTC m=+0.020529966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:36:21 np0005634017 podman[100454]: 2026-02-28 09:36:21.986102689 +0000 UTC m=+0.120772190 container init 60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jackson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:36:21 np0005634017 podman[100454]: 2026-02-28 09:36:21.994558314 +0000 UTC m=+0.129227795 container start 60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 04:36:21 np0005634017 podman[100454]: 2026-02-28 09:36:21.997813309 +0000 UTC m=+0.132482800 container attach 60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:36:21 np0005634017 clever_jackson[100474]: 167 167
Feb 28 04:36:21 np0005634017 systemd[1]: libpod-60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3.scope: Deactivated successfully.
Feb 28 04:36:21 np0005634017 podman[100454]: 2026-02-28 09:36:21.999120297 +0000 UTC m=+0.133789778 container died 60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jackson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:36:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3f1567afb6d4773c2338ffcf006037d8166c12c7c2c413ef824bd88ec0e7aec0-merged.mount: Deactivated successfully.
Feb 28 04:36:22 np0005634017 podman[100454]: 2026-02-28 09:36:22.040555837 +0000 UTC m=+0.175225318 container remove 60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jackson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:36:22 np0005634017 systemd[1]: libpod-conmon-60deb815b8cdac16fc861caf5dfaf70b15c796e8237b7a0faec904d12af1a9f3.scope: Deactivated successfully.
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.179141653 +0000 UTC m=+0.045241382 container create 152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 04:36:22 np0005634017 systemd[1]: Started libpod-conmon-152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e.scope.
Feb 28 04:36:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3e269c9ffb376ab2d45d476d53270c8f7f86452d2de96595eea4156eb8772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3e269c9ffb376ab2d45d476d53270c8f7f86452d2de96595eea4156eb8772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3e269c9ffb376ab2d45d476d53270c8f7f86452d2de96595eea4156eb8772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3e269c9ffb376ab2d45d476d53270c8f7f86452d2de96595eea4156eb8772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.156508967 +0000 UTC m=+0.022608736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.261113568 +0000 UTC m=+0.127213297 container init 152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.266581546 +0000 UTC m=+0.132681305 container start 152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.270392397 +0000 UTC m=+0.136492156 container attach 152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hoover, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 04:36:22 np0005634017 brave_hoover[100514]: {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:    "0": [
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:        {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "devices": [
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "/dev/loop3"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            ],
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_name": "ceph_lv0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_size": "21470642176",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "name": "ceph_lv0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "tags": {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cluster_name": "ceph",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.crush_device_class": "",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.encrypted": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.objectstore": "bluestore",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osd_id": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.type": "block",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.vdo": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.with_tpm": "0"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            },
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "type": "block",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "vg_name": "ceph_vg0"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:        }
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:    ],
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:    "1": [
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:        {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "devices": [
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "/dev/loop4"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            ],
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_name": "ceph_lv1",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_size": "21470642176",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "name": "ceph_lv1",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "tags": {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cluster_name": "ceph",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.crush_device_class": "",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.encrypted": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.objectstore": "bluestore",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osd_id": "1",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.type": "block",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.vdo": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.with_tpm": "0"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            },
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "type": "block",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "vg_name": "ceph_vg1"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:        }
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:    ],
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:    "2": [
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:        {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "devices": [
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "/dev/loop5"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            ],
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_name": "ceph_lv2",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_size": "21470642176",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "name": "ceph_lv2",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "tags": {
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.cluster_name": "ceph",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.crush_device_class": "",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.encrypted": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.objectstore": "bluestore",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osd_id": "2",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.type": "block",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.vdo": "0",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:                "ceph.with_tpm": "0"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            },
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "type": "block",
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:            "vg_name": "ceph_vg2"
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:        }
Feb 28 04:36:22 np0005634017 brave_hoover[100514]:    ]
Feb 28 04:36:22 np0005634017 brave_hoover[100514]: }
Feb 28 04:36:22 np0005634017 systemd[1]: libpod-152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e.scope: Deactivated successfully.
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.577496934 +0000 UTC m=+0.443596663 container died 152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 28 04:36:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-40e3e269c9ffb376ab2d45d476d53270c8f7f86452d2de96595eea4156eb8772-merged.mount: Deactivated successfully.
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb 28 04:36:22 np0005634017 podman[100497]: 2026-02-28 09:36:22.632197559 +0000 UTC m=+0.498297318 container remove 152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hoover, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:36:22 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 77 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=76/77 n=1 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=76) [1] r=0 lpr=76 pi=[59,76)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:22 np0005634017 systemd[1]: libpod-conmon-152c801370f5e90b3a2332e52c9b3e68cf0359c9dc960e3790a02ea239e6987e.scope: Deactivated successfully.
Feb 28 04:36:22 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb 28 04:36:22 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb 28 04:36:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 28 04:36:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.045799863 +0000 UTC m=+0.034133600 container create 2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:36:23 np0005634017 systemd[1]: Started libpod-conmon-2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab.scope.
Feb 28 04:36:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.033356333 +0000 UTC m=+0.021690090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.131460155 +0000 UTC m=+0.119793952 container init 2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.137343596 +0000 UTC m=+0.125677333 container start 2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_chaplygin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.140163128 +0000 UTC m=+0.128496915 container attach 2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:36:23 np0005634017 pensive_chaplygin[100624]: 167 167
Feb 28 04:36:23 np0005634017 systemd[1]: libpod-2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab.scope: Deactivated successfully.
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.141901268 +0000 UTC m=+0.130235015 container died 2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:36:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e378164f6ef61f221a1d3dd3f7a4003fc1b6352c07a49f22fffcaa5ea33c4fb0-merged.mount: Deactivated successfully.
Feb 28 04:36:23 np0005634017 podman[100607]: 2026-02-28 09:36:23.181441094 +0000 UTC m=+0.169774831 container remove 2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 04:36:23 np0005634017 systemd[1]: libpod-conmon-2b5f59f699997a03c37730adbb8cfd739cf32d9e99c27f319036eac8c4f926ab.scope: Deactivated successfully.
Feb 28 04:36:23 np0005634017 podman[100648]: 2026-02-28 09:36:23.306974431 +0000 UTC m=+0.040366801 container create 10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:36:23 np0005634017 systemd[1]: Started libpod-conmon-10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1.scope.
Feb 28 04:36:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:36:23 np0005634017 podman[100648]: 2026-02-28 09:36:23.28863474 +0000 UTC m=+0.022027110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:36:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba4eb49fa6a4ec2c380837250a9b7948818f38bc944dcb7ad432c712552fbaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba4eb49fa6a4ec2c380837250a9b7948818f38bc944dcb7ad432c712552fbaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba4eb49fa6a4ec2c380837250a9b7948818f38bc944dcb7ad432c712552fbaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba4eb49fa6a4ec2c380837250a9b7948818f38bc944dcb7ad432c712552fbaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:36:23 np0005634017 podman[100648]: 2026-02-28 09:36:23.403041625 +0000 UTC m=+0.136433985 container init 10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:36:23 np0005634017 podman[100648]: 2026-02-28 09:36:23.40840042 +0000 UTC m=+0.141792780 container start 10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:36:23 np0005634017 podman[100648]: 2026-02-28 09:36:23.412483408 +0000 UTC m=+0.145875788 container attach 10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 28 04:36:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 28 04:36:24 np0005634017 lvm[100743]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:36:24 np0005634017 lvm[100743]: VG ceph_vg1 finished
Feb 28 04:36:24 np0005634017 lvm[100742]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:36:24 np0005634017 lvm[100742]: VG ceph_vg0 finished
Feb 28 04:36:24 np0005634017 lvm[100745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:36:24 np0005634017 lvm[100745]: VG ceph_vg2 finished
Feb 28 04:36:24 np0005634017 youthful_ganguly[100664]: {}
Feb 28 04:36:24 np0005634017 systemd[1]: libpod-10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1.scope: Deactivated successfully.
Feb 28 04:36:24 np0005634017 systemd[1]: libpod-10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1.scope: Consumed 1.079s CPU time.
Feb 28 04:36:24 np0005634017 podman[100648]: 2026-02-28 09:36:24.172612473 +0000 UTC m=+0.906004923 container died 10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 04:36:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2ba4eb49fa6a4ec2c380837250a9b7948818f38bc944dcb7ad432c712552fbaa-merged.mount: Deactivated successfully.
Feb 28 04:36:24 np0005634017 podman[100648]: 2026-02-28 09:36:24.227979117 +0000 UTC m=+0.961371507 container remove 10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:36:24 np0005634017 systemd[1]: libpod-conmon-10a6bec73faf0553528c881eab40307b1d33de26d5a1b296da566e3aa1e71af1.scope: Deactivated successfully.
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:36:24 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:36:24 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb 28 04:36:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 28 04:36:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 78 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78 pruub=11.063853264s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=38'483 lcod 0'0 active pruub 133.377532959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 78 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78 pruub=11.063816071s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 133.377532959s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 78 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78 pruub=11.066313744s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=60'486 lcod 60'486 active pruub 133.380905151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 78 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78 pruub=11.066244125s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 133.380905151s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:25 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:25 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb 28 04:36:25 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[47,79)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[47,79)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=0 lpr=79 pi=[47,79)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=0 lpr=79 pi=[47,79)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:25 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[47,79)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[47,79)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 79 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=0 lpr=79 pi=[47,79)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 79 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] r=0 lpr=79 pi=[47,79)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 28 04:36:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 28 04:36:25 np0005634017 systemd[1]: session-33.scope: Deactivated successfully.
Feb 28 04:36:25 np0005634017 systemd-logind[815]: Session 33 logged out. Waiting for processes to exit.
Feb 28 04:36:25 np0005634017 systemd[1]: session-33.scope: Consumed 8.030s CPU time.
Feb 28 04:36:25 np0005634017 systemd-logind[815]: Removed session 33.
Feb 28 04:36:25 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 79 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=45/20 lis/c=63/63 les/c/f=64/64/0 sis=79 pruub=10.898741722s) [1] r=-1 lpr=79 pi=[63,79)/1 crt=33'39 active pruub 140.019424438s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:25 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 79 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=45/20 lis/c=63/63 les/c/f=64/64/0 sis=79 pruub=10.898618698s) [1] r=-1 lpr=79 pi=[63,79)/1 crt=33'39 unknown NOTIFY pruub 140.019424438s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 79 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=63/63 les/c/f=64/64/0 sis=79) [1] r=0 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb 28 04:36:25 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb 28 04:36:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:26 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb 28 04:36:26 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb 28 04:36:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb 28 04:36:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb 28 04:36:26 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb 28 04:36:26 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 80 pg[6.d( v 33'39 lc 31'7 (0'0,33'39] local-lis/les=79/80 n=1 ec=45/20 lis/c=63/63 les/c/f=64/64/0 sis=79) [1] r=0 lpr=79 pi=[63,79)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:26 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 80 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=79/80 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[47,79)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:26 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 28 04:36:26 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 28 04:36:26 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 80 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=79/80 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[47,79)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:26 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb 28 04:36:26 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb 28 04:36:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 28 04:36:26 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb 28 04:36:26 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb 28 04:36:27 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb 28 04:36:27 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb 28 04:36:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb 28 04:36:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb 28 04:36:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb 28 04:36:27 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 81 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=79/80 n=7 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81 pruub=14.957736969s) [2] async=[2] r=-1 lpr=81 pi=[47,81)/1 crt=38'483 lcod 0'0 active pruub 139.908370972s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:27 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 81 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=79/80 n=7 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81 pruub=14.957297325s) [2] r=-1 lpr=81 pi=[47,81)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 139.908370972s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:27 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 81 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=79/80 n=6 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81 pruub=14.963418961s) [2] async=[2] r=-1 lpr=81 pi=[47,81)/1 crt=60'487 lcod 60'486 active pruub 139.914749146s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:27 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 81 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=79/80 n=6 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81 pruub=14.963335991s) [2] r=-1 lpr=81 pi=[47,81)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 139.914749146s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:27 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 81 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81) [2] r=0 lpr=81 pi=[47,81)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:27 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 81 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81) [2] r=0 lpr=81 pi=[47,81)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:27 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 81 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81) [2] r=0 lpr=81 pi=[47,81)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:27 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 81 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81) [2] r=0 lpr=81 pi=[47,81)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:28 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb 28 04:36:28 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb 28 04:36:28 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb 28 04:36:28 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb 28 04:36:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb 28 04:36:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb 28 04:36:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb 28 04:36:28 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 82 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=81/82 n=7 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81) [2] r=0 lpr=81 pi=[47,81)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:28 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 82 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=81/82 n=6 ec=47/32 lis/c=79/47 les/c/f=80/48/0 sis=81) [2] r=0 lpr=81 pi=[47,81)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:36:28
Feb 28 04:36:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:36:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Feb 28 04:36:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:36:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb 28 04:36:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb 28 04:36:29 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb 28 04:36:29 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:36:30 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb 28 04:36:30 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb 28 04:36:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 781 B/s wr, 20 op/s; 93 B/s, 3 objects/s recovering
Feb 28 04:36:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 28 04:36:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 28 04:36:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 28 04:36:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 28 04:36:31 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 28 04:36:31 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 28 04:36:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb 28 04:36:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 28 04:36:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 28 04:36:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb 28 04:36:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb 28 04:36:32 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb 28 04:36:32 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb 28 04:36:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 28 04:36:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 28 04:36:32 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb 28 04:36:32 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb 28 04:36:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Feb 28 04:36:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 28 04:36:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 28 04:36:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 28 04:36:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 28 04:36:33 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb 28 04:36:33 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb 28 04:36:33 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 84 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=84 pruub=15.474144936s) [2] r=-1 lpr=84 pi=[59,84)/1 crt=33'39 active pruub 151.949249268s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:33 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 84 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=59/60 n=1 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=84 pruub=15.474072456s) [2] r=-1 lpr=84 pi=[59,84)/1 crt=33'39 unknown NOTIFY pruub 151.949249268s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 28 04:36:33 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 28 04:36:33 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 84 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb 28 04:36:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb 28 04:36:34 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 85 pg[6.f( v 33'39 lc 31'1 (0'0,33'39] local-lis/les=84/85 n=1 ec=45/20 lis/c=59/59 les/c/f=60/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 28 04:36:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb 28 04:36:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 28 04:36:35 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb 28 04:36:35 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb 28 04:36:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb 28 04:36:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 28 04:36:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb 28 04:36:35 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb 28 04:36:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 28 04:36:35 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb 28 04:36:35 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb 28 04:36:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:36 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb 28 04:36:36 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb 28 04:36:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 28 04:36:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 120 B/s, 0 objects/s recovering
Feb 28 04:36:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb 28 04:36:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 28 04:36:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb 28 04:36:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 28 04:36:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 28 04:36:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb 28 04:36:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb 28 04:36:38 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb 28 04:36:38 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb 28 04:36:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 28 04:36:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 105 B/s, 0 objects/s recovering
Feb 28 04:36:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb 28 04:36:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 28 04:36:39 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb 28 04:36:39 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb 28 04:36:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb 28 04:36:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 28 04:36:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb 28 04:36:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 28 04:36:39 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb 28 04:36:39 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb 28 04:36:39 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375009330942768e-06 of space, bias 4.0, pg target 0.0016500111971313215 quantized to 16 (current 16)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:36:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 28 04:36:40 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb 28 04:36:40 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb 28 04:36:40 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb 28 04:36:40 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb 28 04:36:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Feb 28 04:36:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb 28 04:36:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 28 04:36:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:41 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb 28 04:36:41 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb 28 04:36:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb 28 04:36:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 28 04:36:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb 28 04:36:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 28 04:36:41 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 89 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=14.872846603s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=59'484 lcod 59'484 active pruub 159.453567505s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:41 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 89 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=14.872174263s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=59'484 lcod 59'484 unknown NOTIFY pruub 159.453567505s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:41 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:41 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb 28 04:36:41 np0005634017 systemd-logind[815]: New session 34 of user zuul.
Feb 28 04:36:41 np0005634017 systemd[1]: Started Session 34 of User zuul.
Feb 28 04:36:42 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb 28 04:36:42 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb 28 04:36:42 np0005634017 python3.9[100970]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 28 04:36:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb 28 04:36:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb 28 04:36:42 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb 28 04:36:42 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:42 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[56,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 28 04:36:42 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 90 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[0] r=0 lpr=90 pi=[56,90)/1 crt=59'484 lcod 59'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:42 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 90 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[0] r=0 lpr=90 pi=[56,90)/1 crt=59'484 lcod 59'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:36:43 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb 28 04:36:43 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb 28 04:36:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb 28 04:36:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb 28 04:36:43 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb 28 04:36:43 np0005634017 python3.9[101144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:36:43 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb 28 04:36:43 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 91 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=90/91 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[56,90)/1 crt=60'485 lcod 59'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:43 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb 28 04:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb 28 04:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb 28 04:36:44 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb 28 04:36:44 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 92 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=90/91 n=6 ec=47/32 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.435496330s) [2] async=[2] r=-1 lpr=92 pi=[56,92)/1 crt=60'485 lcod 59'484 active pruub 163.064163208s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:44 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 92 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=90/91 n=6 ec=47/32 lis/c=90/56 les/c/f=91/57/0 sis=92 pruub=15.435029030s) [2] r=-1 lpr=92 pi=[56,92)/1 crt=60'485 lcod 59'484 unknown NOTIFY pruub 163.064163208s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:44 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 92 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:44 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 92 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:44 np0005634017 python3.9[101301]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:36:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:36:45 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb 28 04:36:45 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb 28 04:36:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb 28 04:36:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb 28 04:36:45 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb 28 04:36:45 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 93 pg[9.13( v 60'485 (0'0,60'485] local-lis/les=92/93 n=6 ec=47/32 lis/c=90/56 les/c/f=91/57/0 sis=92) [2] r=0 lpr=92 pi=[56,92)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:45 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb 28 04:36:45 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb 28 04:36:45 np0005634017 python3.9[101455]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:36:45 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb 28 04:36:45 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb 28 04:36:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:46 np0005634017 python3.9[101610]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:36:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Feb 28 04:36:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb 28 04:36:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 28 04:36:47 np0005634017 python3.9[101763]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:36:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb 28 04:36:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb 28 04:36:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb 28 04:36:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 28 04:36:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 28 04:36:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb 28 04:36:47 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb 28 04:36:47 np0005634017 python3.9[101913]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:36:48 np0005634017 network[101930]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:36:48 np0005634017 network[101931]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:36:48 np0005634017 network[101932]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:36:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 28 04:36:48 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb 28 04:36:48 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb 28 04:36:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 1 objects/s recovering
Feb 28 04:36:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb 28 04:36:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 28 04:36:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb 28 04:36:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 28 04:36:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 28 04:36:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb 28 04:36:49 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb 28 04:36:49 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb 28 04:36:49 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb 28 04:36:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 95 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95 pruub=13.470949173s) [1] r=-1 lpr=95 pi=[55,95)/1 crt=38'483 active pruub 166.448959351s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:49 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 95 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95 pruub=13.470901489s) [1] r=-1 lpr=95 pi=[55,95)/1 crt=38'483 unknown NOTIFY pruub 166.448959351s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:49 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95) [1] r=0 lpr=95 pi=[55,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:49 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb 28 04:36:49 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb 28 04:36:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb 28 04:36:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb 28 04:36:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 96 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=0 lpr=96 pi=[55,96)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:50 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 96 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=0 lpr=96 pi=[55,96)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:50 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb 28 04:36:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[55,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:50 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[55,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 28 04:36:50 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 28 04:36:50 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 28 04:36:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 1 objects/s recovering
Feb 28 04:36:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb 28 04:36:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 28 04:36:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 28 04:36:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 97 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=96/97 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[55,96)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:51 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 97 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=97 pruub=15.131853104s) [0] r=-1 lpr=97 pi=[68,97)/1 crt=38'483 active pruub 157.938385010s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:51 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 97 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=97 pruub=15.131638527s) [0] r=-1 lpr=97 pi=[68,97)/1 crt=38'483 unknown NOTIFY pruub 157.938385010s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:51 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=97) [0] r=0 lpr=97 pi=[68,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb 28 04:36:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb 28 04:36:52 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 98 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=0 lpr=98 pi=[68,98)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:52 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 98 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=0 lpr=98 pi=[68,98)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:52 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb 28 04:36:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[68,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[68,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 98 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=96/97 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98 pruub=14.959552765s) [1] async=[1] r=-1 lpr=98 pi=[55,98)/1 crt=38'483 active pruub 170.721176147s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:52 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 98 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=96/97 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98 pruub=14.959445953s) [1] r=-1 lpr=98 pi=[55,98)/1 crt=38'483 unknown NOTIFY pruub 170.721176147s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 98 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:52 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 98 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb 28 04:36:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb 28 04:36:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 28 04:36:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb 28 04:36:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 28 04:36:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb 28 04:36:53 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb 28 04:36:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 28 04:36:53 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 99 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=98/99 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:53 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 99 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=98/99 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] async=[0] r=0 lpr=98 pi=[68,98)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:53 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb 28 04:36:53 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb 28 04:36:54 np0005634017 python3.9[102195]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:36:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb 28 04:36:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb 28 04:36:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb 28 04:36:54 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 100 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=98/99 n=6 ec=47/32 lis/c=98/68 les/c/f=99/69/0 sis=100 pruub=15.100744247s) [0] async=[0] r=-1 lpr=100 pi=[68,100)/1 crt=38'483 active pruub 160.654937744s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:54 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 100 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=98/99 n=6 ec=47/32 lis/c=98/68 les/c/f=99/69/0 sis=100 pruub=15.100631714s) [0] r=-1 lpr=100 pi=[68,100)/1 crt=38'483 unknown NOTIFY pruub 160.654937744s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 28 04:36:54 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 100 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=98/68 les/c/f=99/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:54 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 100 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=98/68 les/c/f=99/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:54 np0005634017 python3.9[102345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:36:54 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb 28 04:36:54 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb 28 04:36:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb 28 04:36:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb 28 04:36:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 28 04:36:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb 28 04:36:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 28 04:36:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb 28 04:36:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 28 04:36:55 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 101 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=100/101 n=6 ec=47/32 lis/c=98/68 les/c/f=99/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:55 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb 28 04:36:55 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb 28 04:36:55 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb 28 04:36:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:36:56 np0005634017 python3.9[102499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:36:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 28 04:36:56 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 28 04:36:56 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 28 04:36:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Feb 28 04:36:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb 28 04:36:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 28 04:36:57 np0005634017 python3.9[102658]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:36:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb 28 04:36:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 28 04:36:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 28 04:36:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb 28 04:36:57 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 102 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=102 pruub=14.613441467s) [2] r=-1 lpr=102 pi=[56,102)/1 crt=60'486 lcod 60'486 active pruub 175.455276489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:57 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 102 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=102 pruub=14.612936974s) [2] r=-1 lpr=102 pi=[56,102)/1 crt=60'486 lcod 60'486 unknown NOTIFY pruub 175.455276489s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:57 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb 28 04:36:57 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=102) [2] r=0 lpr=102 pi=[56,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:57 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb 28 04:36:57 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb 28 04:36:57 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 28 04:36:57 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 28 04:36:58 np0005634017 python3.9[102743]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:36:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb 28 04:36:58 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 28 04:36:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb 28 04:36:58 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb 28 04:36:58 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[56,103)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:58 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 103 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=103) [2]/[0] r=0 lpr=103 pi=[56,103)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:36:58 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 103 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=103) [2]/[0] r=0 lpr=103 pi=[56,103)/1 crt=60'486 lcod 60'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:36:58 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[56,103)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:36:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb 28 04:36:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb 28 04:36:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Feb 28 04:36:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb 28 04:36:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 28 04:36:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb 28 04:36:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 28 04:36:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb 28 04:36:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 28 04:36:59 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb 28 04:36:59 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 28 04:36:59 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 104 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=103/104 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=103) [2]/[0] async=[2] r=0 lpr=103 pi=[56,103)/1 crt=60'487 lcod 60'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:36:59 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:37:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb 28 04:37:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb 28 04:37:00 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb 28 04:37:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 28 04:37:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 105 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=103/104 n=6 ec=47/32 lis/c=103/56 les/c/f=104/57/0 sis=105 pruub=15.390148163s) [2] async=[2] r=-1 lpr=105 pi=[56,105)/1 crt=60'487 lcod 60'486 active pruub 179.276672363s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:00 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 105 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=103/104 n=6 ec=47/32 lis/c=103/56 les/c/f=104/57/0 sis=105 pruub=15.389969826s) [2] r=-1 lpr=105 pi=[56,105)/1 crt=60'487 lcod 60'486 unknown NOTIFY pruub 179.276672363s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:00 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 105 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=103/56 les/c/f=104/57/0 sis=105) [2] r=0 lpr=105 pi=[56,105)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:00 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 105 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=103/56 les/c/f=104/57/0 sis=105) [2] r=0 lpr=105 pi=[56,105)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:00 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb 28 04:37:00 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb 28 04:37:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb 28 04:37:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb 28 04:37:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 28 04:37:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb 28 04:37:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 28 04:37:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb 28 04:37:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 28 04:37:01 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb 28 04:37:01 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 106 pg[9.19( v 60'487 (0'0,60'487] local-lis/les=105/106 n=6 ec=47/32 lis/c=103/56 les/c/f=104/57/0 sis=105) [2] r=0 lpr=105 pi=[56,105)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 28 04:37:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb 28 04:37:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 28 04:37:03 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb 28 04:37:03 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb 28 04:37:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb 28 04:37:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 28 04:37:03 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb 28 04:37:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 28 04:37:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb 28 04:37:03 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb 28 04:37:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb 28 04:37:03 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 28 04:37:03 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 28 04:37:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 28 04:37:04 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 28 04:37:04 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 28 04:37:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb 28 04:37:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 28 04:37:05 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 107 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=81/82 n=6 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=107 pruub=11.412605286s) [0] r=-1 lpr=107 pi=[81,107)/1 crt=60'487 active pruub 167.900878906s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:05 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 107 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=81/82 n=6 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=107 pruub=11.412554741s) [0] r=-1 lpr=107 pi=[81,107)/1 crt=60'487 unknown NOTIFY pruub 167.900878906s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:05 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=107) [0] r=0 lpr=107 pi=[81,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb 28 04:37:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 28 04:37:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb 28 04:37:05 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb 28 04:37:05 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[81,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:05 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[81,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:05 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 28 04:37:05 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 108 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=81/82 n=6 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=108) [0]/[2] r=0 lpr=108 pi=[81,108)/1 crt=60'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:05 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 108 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=81/82 n=6 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=108) [0]/[2] r=0 lpr=108 pi=[81,108)/1 crt=60'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb 28 04:37:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb 28 04:37:06 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 109 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=108/109 n=6 ec=47/32 lis/c=81/81 les/c/f=82/82/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[81,108)/1 crt=60'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:06 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb 28 04:37:06 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb 28 04:37:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 1 objects/s recovering
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb 28 04:37:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 28 04:37:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb 28 04:37:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 28 04:37:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb 28 04:37:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 28 04:37:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 110 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=110 pruub=15.188876152s) [0] r=-1 lpr=110 pi=[68,110)/1 crt=60'485 active pruub 173.946197510s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 110 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=108/109 n=6 ec=47/32 lis/c=108/81 les/c/f=109/82/0 sis=110 pruub=14.990802765s) [0] async=[0] r=-1 lpr=110 pi=[81,110)/1 crt=60'487 active pruub 173.748168945s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:07 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb 28 04:37:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 110 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=110 pruub=15.188785553s) [0] r=-1 lpr=110 pi=[68,110)/1 crt=60'485 unknown NOTIFY pruub 173.946197510s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:07 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 110 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=108/109 n=6 ec=47/32 lis/c=108/81 les/c/f=109/82/0 sis=110 pruub=14.990717888s) [0] r=-1 lpr=110 pi=[81,110)/1 crt=60'487 unknown NOTIFY pruub 173.748168945s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=110) [0] r=0 lpr=110 pi=[68,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 110 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=108/81 les/c/f=109/82/0 sis=110) [0] r=0 lpr=110 pi=[81,110)/1 pct=0'0 crt=60'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:07 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 110 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=0/0 n=6 ec=47/32 lis/c=108/81 les/c/f=109/82/0 sis=110) [0] r=0 lpr=110 pi=[81,110)/1 crt=60'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb 28 04:37:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 28 04:37:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb 28 04:37:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb 28 04:37:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[68,111)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[68,111)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:08 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 111 pg[9.1c( v 60'487 (0'0,60'487] local-lis/les=110/111 n=6 ec=47/32 lis/c=108/81 les/c/f=109/82/0 sis=110) [0] r=0 lpr=110 pi=[81,110)/1 crt=60'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 111 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=0 lpr=111 pi=[68,111)/1 crt=60'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:08 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 111 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=0 lpr=111 pi=[68,111)/1 crt=60'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Feb 28 04:37:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 28 04:37:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:37:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb 28 04:37:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb 28 04:37:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb 28 04:37:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:37:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb 28 04:37:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb 28 04:37:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 28 04:37:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 112 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=112 pruub=14.176166534s) [1] r=-1 lpr=112 pi=[69,112)/1 crt=38'483 active pruub 174.962371826s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 112 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=112 pruub=14.176117897s) [1] r=-1 lpr=112 pi=[69,112)/1 crt=38'483 unknown NOTIFY pruub 174.962371826s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:09 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=112) [1] r=0 lpr=112 pi=[69,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:09 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 112 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=111/112 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[68,111)/1 crt=60'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:09 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb 28 04:37:09 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb 28 04:37:10 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 28 04:37:10 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 28 04:37:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb 28 04:37:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb 28 04:37:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb 28 04:37:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 113 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=113) [1]/[2] r=0 lpr=113 pi=[69,113)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 113 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=113) [1]/[2] r=0 lpr=113 pi=[69,113)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 113 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=111/112 n=6 ec=47/32 lis/c=111/68 les/c/f=112/69/0 sis=113 pruub=15.018297195s) [0] async=[0] r=-1 lpr=113 pi=[68,113)/1 crt=60'485 active pruub 176.815933228s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:10 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 113 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=111/112 n=6 ec=47/32 lis/c=111/68 les/c/f=112/69/0 sis=113 pruub=15.018041611s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=60'485 unknown NOTIFY pruub 176.815933228s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:10 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[69,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:10 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[69,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 113 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 pct=0'0 crt=60'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:10 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 113 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=0/0 n=6 ec=47/32 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=60'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:10 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 28 04:37:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Feb 28 04:37:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb 28 04:37:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb 28 04:37:11 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb 28 04:37:11 np0005634017 ceph-osd[87202]: osd.0 pg_epoch: 114 pg[9.1e( v 60'485 (0'0,60'485] local-lis/les=113/114 n=6 ec=47/32 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=60'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:11 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 114 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=113/114 n=6 ec=47/32 lis/c=69/69 les/c/f=70/70/0 sis=113) [1]/[2] async=[1] r=0 lpr=113 pi=[69,113)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb 28 04:37:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb 28 04:37:12 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb 28 04:37:12 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 115 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=113/114 n=6 ec=47/32 lis/c=113/69 les/c/f=114/70/0 sis=115 pruub=14.963991165s) [1] async=[1] r=-1 lpr=115 pi=[69,115)/1 crt=38'483 active pruub 178.817840576s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:12 np0005634017 ceph-osd[89322]: osd.2 pg_epoch: 115 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=113/114 n=6 ec=47/32 lis/c=113/69 les/c/f=114/70/0 sis=115 pruub=14.963921547s) [1] r=-1 lpr=115 pi=[69,115)/1 crt=38'483 unknown NOTIFY pruub 178.817840576s@ mbc={}] state<Start>: transitioning to Stray
Feb 28 04:37:12 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 115 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=113/69 les/c/f=114/70/0 sis=115) [1] r=0 lpr=115 pi=[69,115)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 28 04:37:12 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 115 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=47/32 lis/c=113/69 les/c/f=114/70/0 sis=115) [1] r=0 lpr=115 pi=[69,115)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 28 04:37:12 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb 28 04:37:12 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb 28 04:37:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 142 B/s, 3 objects/s recovering
Feb 28 04:37:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb 28 04:37:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb 28 04:37:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb 28 04:37:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb 28 04:37:13 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb 28 04:37:13 np0005634017 ceph-osd[88267]: osd.1 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=115/116 n=6 ec=47/32 lis/c=113/69 les/c/f=114/70/0 sis=115) [1] r=0 lpr=115 pi=[69,115)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 28 04:37:14 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb 28 04:37:14 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb 28 04:37:14 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb 28 04:37:14 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb 28 04:37:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 122 B/s, 3 objects/s recovering
Feb 28 04:37:15 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb 28 04:37:15 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb 28 04:37:15 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb 28 04:37:15 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb 28 04:37:15 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb 28 04:37:15 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb 28 04:37:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Feb 28 04:37:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb 28 04:37:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb 28 04:37:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 72 B/s, 1 objects/s recovering
Feb 28 04:37:20 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb 28 04:37:20 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb 28 04:37:20 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb 28 04:37:20 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb 28 04:37:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Feb 28 04:37:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:21 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb 28 04:37:21 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb 28 04:37:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Feb 28 04:37:23 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb 28 04:37:23 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:37:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:37:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.370324138 +0000 UTC m=+0.063080585 container create a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:37:25 np0005634017 systemd[1]: Started libpod-conmon-a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc.scope.
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.345559475 +0000 UTC m=+0.038315902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:37:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.461651793 +0000 UTC m=+0.154408210 container init a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.469384241 +0000 UTC m=+0.162140678 container start a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.473104996 +0000 UTC m=+0.165861403 container attach a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_torvalds, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:37:25 np0005634017 cool_torvalds[103047]: 167 167
Feb 28 04:37:25 np0005634017 systemd[1]: libpod-a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc.scope: Deactivated successfully.
Feb 28 04:37:25 np0005634017 conmon[103047]: conmon a429a7ad413b2b4b712a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc.scope/container/memory.events
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.476356146 +0000 UTC m=+0.169112593 container died a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_torvalds, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:37:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-79288e81f439f2cd92eeb10a0115b9883d193ecadc0a59a40b20e8c62f7e29f8-merged.mount: Deactivated successfully.
Feb 28 04:37:25 np0005634017 podman[103031]: 2026-02-28 09:37:25.516319648 +0000 UTC m=+0.209076065 container remove a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_torvalds, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:37:25 np0005634017 systemd[1]: libpod-conmon-a429a7ad413b2b4b712a551356c3a53dab41578a0f949199b16c373faa1dccbc.scope: Deactivated successfully.
Feb 28 04:37:25 np0005634017 podman[103071]: 2026-02-28 09:37:25.692173877 +0000 UTC m=+0.051071725 container create 556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_pascal, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:37:25 np0005634017 systemd[1]: Started libpod-conmon-556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57.scope.
Feb 28 04:37:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:37:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cb1a47b2ca30dd93137da88304f3d9d8f938b4518cd12063210cc40fb62f74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cb1a47b2ca30dd93137da88304f3d9d8f938b4518cd12063210cc40fb62f74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cb1a47b2ca30dd93137da88304f3d9d8f938b4518cd12063210cc40fb62f74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cb1a47b2ca30dd93137da88304f3d9d8f938b4518cd12063210cc40fb62f74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cb1a47b2ca30dd93137da88304f3d9d8f938b4518cd12063210cc40fb62f74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:25 np0005634017 podman[103071]: 2026-02-28 09:37:25.669829118 +0000 UTC m=+0.028727016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:37:25 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb 28 04:37:25 np0005634017 podman[103071]: 2026-02-28 09:37:25.781025875 +0000 UTC m=+0.139923783 container init 556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:37:25 np0005634017 podman[103071]: 2026-02-28 09:37:25.795321106 +0000 UTC m=+0.154218964 container start 556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_pascal, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:37:25 np0005634017 podman[103071]: 2026-02-28 09:37:25.7993695 +0000 UTC m=+0.158267348 container attach 556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 04:37:25 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb 28 04:37:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:26 np0005634017 reverent_pascal[103088]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:37:26 np0005634017 reverent_pascal[103088]: --> All data devices are unavailable
Feb 28 04:37:26 np0005634017 systemd[1]: libpod-556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57.scope: Deactivated successfully.
Feb 28 04:37:26 np0005634017 podman[103071]: 2026-02-28 09:37:26.271194781 +0000 UTC m=+0.630092659 container died 556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_pascal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:37:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-37cb1a47b2ca30dd93137da88304f3d9d8f938b4518cd12063210cc40fb62f74-merged.mount: Deactivated successfully.
Feb 28 04:37:26 np0005634017 podman[103071]: 2026-02-28 09:37:26.317273151 +0000 UTC m=+0.676170979 container remove 556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 04:37:26 np0005634017 systemd[1]: libpod-conmon-556b72a37dc69f6dffdab6a0677af892aad72fab126e4460fac4416cd163ec57.scope: Deactivated successfully.
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.780018551 +0000 UTC m=+0.041962524 container create f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_albattani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:37:26 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb 28 04:37:26 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb 28 04:37:26 np0005634017 systemd[1]: Started libpod-conmon-f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0.scope.
Feb 28 04:37:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.762527902 +0000 UTC m=+0.024471875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.874786791 +0000 UTC m=+0.136730814 container init f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_albattani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.879725403 +0000 UTC m=+0.141669376 container start f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.883155379 +0000 UTC m=+0.145099392 container attach f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:37:26 np0005634017 thirsty_albattani[103205]: 167 167
Feb 28 04:37:26 np0005634017 systemd[1]: libpod-f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0.scope: Deactivated successfully.
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.884597933 +0000 UTC m=+0.146541886 container died f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 04:37:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f84ecb890573f87655f0b8db6a1ef4805a100c9b7c4e07535a88eee37731dae0-merged.mount: Deactivated successfully.
Feb 28 04:37:26 np0005634017 podman[103189]: 2026-02-28 09:37:26.921793729 +0000 UTC m=+0.183737692 container remove f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:37:26 np0005634017 systemd[1]: libpod-conmon-f70ef99b916a61e68aa30893df52dee9e23730da34a56bc9a97fff24b126e1a0.scope: Deactivated successfully.
Feb 28 04:37:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.058885934 +0000 UTC m=+0.052546660 container create 7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:37:27 np0005634017 systemd[1]: Started libpod-conmon-7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08.scope.
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.038570488 +0000 UTC m=+0.032231274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:37:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:37:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd404a958b5f0a08a9e9ec4e736bff422fe21bc24a27670a3431489c47fd256/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd404a958b5f0a08a9e9ec4e736bff422fe21bc24a27670a3431489c47fd256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd404a958b5f0a08a9e9ec4e736bff422fe21bc24a27670a3431489c47fd256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd404a958b5f0a08a9e9ec4e736bff422fe21bc24a27670a3431489c47fd256/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.155230593 +0000 UTC m=+0.148891339 container init 7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.16323538 +0000 UTC m=+0.156896096 container start 7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.165805159 +0000 UTC m=+0.159465875 container attach 7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]: {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:    "0": [
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:        {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "devices": [
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "/dev/loop3"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            ],
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_name": "ceph_lv0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_size": "21470642176",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "name": "ceph_lv0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "tags": {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cluster_name": "ceph",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.crush_device_class": "",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.encrypted": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.objectstore": "bluestore",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osd_id": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.type": "block",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.vdo": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.with_tpm": "0"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            },
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "type": "block",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "vg_name": "ceph_vg0"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:        }
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:    ],
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:    "1": [
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:        {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "devices": [
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "/dev/loop4"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            ],
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_name": "ceph_lv1",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_size": "21470642176",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "name": "ceph_lv1",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "tags": {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cluster_name": "ceph",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.crush_device_class": "",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.encrypted": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.objectstore": "bluestore",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osd_id": "1",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.type": "block",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.vdo": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.with_tpm": "0"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            },
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "type": "block",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "vg_name": "ceph_vg1"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:        }
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:    ],
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:    "2": [
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:        {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "devices": [
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "/dev/loop5"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            ],
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_name": "ceph_lv2",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_size": "21470642176",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "name": "ceph_lv2",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "tags": {
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.cluster_name": "ceph",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.crush_device_class": "",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.encrypted": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.objectstore": "bluestore",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osd_id": "2",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.type": "block",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.vdo": "0",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:                "ceph.with_tpm": "0"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            },
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "type": "block",
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:            "vg_name": "ceph_vg2"
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:        }
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]:    ]
Feb 28 04:37:27 np0005634017 vigilant_poincare[103248]: }
Feb 28 04:37:27 np0005634017 systemd[1]: libpod-7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08.scope: Deactivated successfully.
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.471528431 +0000 UTC m=+0.465189177 container died 7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:37:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bfd404a958b5f0a08a9e9ec4e736bff422fe21bc24a27670a3431489c47fd256-merged.mount: Deactivated successfully.
Feb 28 04:37:27 np0005634017 podman[103231]: 2026-02-28 09:37:27.519394966 +0000 UTC m=+0.513055672 container remove 7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:37:27 np0005634017 systemd[1]: libpod-conmon-7adce8e440464c9d71019f90d7b56e12a56c1a6f27b4004d6b7347dc652f4b08.scope: Deactivated successfully.
Feb 28 04:37:27 np0005634017 podman[103331]: 2026-02-28 09:37:27.933470336 +0000 UTC m=+0.046237175 container create caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:37:27 np0005634017 systemd[1]: Started libpod-conmon-caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd.scope.
Feb 28 04:37:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:37:28 np0005634017 podman[103331]: 2026-02-28 09:37:27.916143932 +0000 UTC m=+0.028910781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:37:28 np0005634017 podman[103331]: 2026-02-28 09:37:28.023394888 +0000 UTC m=+0.136161717 container init caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:37:28 np0005634017 podman[103331]: 2026-02-28 09:37:28.027740822 +0000 UTC m=+0.140507661 container start caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 04:37:28 np0005634017 epic_hopper[103348]: 167 167
Feb 28 04:37:28 np0005634017 podman[103331]: 2026-02-28 09:37:28.030995392 +0000 UTC m=+0.143762241 container attach caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:37:28 np0005634017 systemd[1]: libpod-caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd.scope: Deactivated successfully.
Feb 28 04:37:28 np0005634017 podman[103331]: 2026-02-28 09:37:28.032962623 +0000 UTC m=+0.145729452 container died caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:37:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-46e7a35a5b5318cd532a7fc7a2da5a9a6c3c1e6c138979edf6de3afaef0899d0-merged.mount: Deactivated successfully.
Feb 28 04:37:28 np0005634017 podman[103331]: 2026-02-28 09:37:28.074543054 +0000 UTC m=+0.187309923 container remove caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:37:28 np0005634017 systemd[1]: libpod-conmon-caff6021a3c50ad3bf0bf5c01651e075467ab31aacbdccc99ae9c624d32d00bd.scope: Deactivated successfully.
Feb 28 04:37:28 np0005634017 podman[103372]: 2026-02-28 09:37:28.206686156 +0000 UTC m=+0.045443621 container create 70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:37:28 np0005634017 systemd[1]: Started libpod-conmon-70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53.scope.
Feb 28 04:37:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:37:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb791db63c24249de116f9094fb2f4b8fb7c88c54b15d6719267f42d4c9854e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb791db63c24249de116f9094fb2f4b8fb7c88c54b15d6719267f42d4c9854e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb791db63c24249de116f9094fb2f4b8fb7c88c54b15d6719267f42d4c9854e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb791db63c24249de116f9094fb2f4b8fb7c88c54b15d6719267f42d4c9854e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:37:28 np0005634017 podman[103372]: 2026-02-28 09:37:28.18800488 +0000 UTC m=+0.026762335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:37:28 np0005634017 podman[103372]: 2026-02-28 09:37:28.297825555 +0000 UTC m=+0.136583040 container init 70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:37:28 np0005634017 podman[103372]: 2026-02-28 09:37:28.30644647 +0000 UTC m=+0.145203955 container start 70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:37:28 np0005634017 podman[103372]: 2026-02-28 09:37:28.310345181 +0000 UTC m=+0.149102666 container attach 70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 04:37:28 np0005634017 lvm[103468]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:37:28 np0005634017 lvm[103468]: VG ceph_vg1 finished
Feb 28 04:37:28 np0005634017 lvm[103467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:37:28 np0005634017 lvm[103467]: VG ceph_vg0 finished
Feb 28 04:37:28 np0005634017 lvm[103470]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:37:28 np0005634017 lvm[103470]: VG ceph_vg2 finished
Feb 28 04:37:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:37:28
Feb 28 04:37:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:37:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:37:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data']
Feb 28 04:37:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:37:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:29 np0005634017 gifted_grothendieck[103389]: {}
Feb 28 04:37:29 np0005634017 podman[103372]: 2026-02-28 09:37:29.046861988 +0000 UTC m=+0.885619473 container died 70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:37:29 np0005634017 systemd[1]: libpod-70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53.scope: Deactivated successfully.
Feb 28 04:37:29 np0005634017 systemd[1]: libpod-70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53.scope: Consumed 1.071s CPU time.
Feb 28 04:37:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-cdb791db63c24249de116f9094fb2f4b8fb7c88c54b15d6719267f42d4c9854e-merged.mount: Deactivated successfully.
Feb 28 04:37:29 np0005634017 podman[103372]: 2026-02-28 09:37:29.105737342 +0000 UTC m=+0.944494787 container remove 70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_grothendieck, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:37:29 np0005634017 systemd[1]: libpod-conmon-70b6a245fc286dc6ce9a8895f350f3087c0142b6dae1ab35c9fe0a1afda73c53.scope: Deactivated successfully.
Feb 28 04:37:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:37:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:37:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:37:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:37:29 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb 28 04:37:29 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb 28 04:37:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:37:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:37:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:31 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb 28 04:37:31 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb 28 04:37:32 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb 28 04:37:32 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb 28 04:37:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:33 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb 28 04:37:33 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb 28 04:37:34 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb 28 04:37:34 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb 28 04:37:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:34 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb 28 04:37:35 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb 28 04:37:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb 28 04:37:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb 28 04:37:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:36 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb 28 04:37:36 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb 28 04:37:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:37 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb 28 04:37:37 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb 28 04:37:38 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb 28 04:37:38 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb 28 04:37:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:37:40 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb 28 04:37:40 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb 28 04:37:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:41 np0005634017 python3.9[103661]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:37:42 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb 28 04:37:42 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb 28 04:37:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:43 np0005634017 python3.9[103949]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 28 04:37:44 np0005634017 python3.9[104102]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 28 04:37:44 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb 28 04:37:44 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb 28 04:37:44 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 28 04:37:44 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 28 04:37:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:45 np0005634017 python3.9[104255]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:37:45 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb 28 04:37:45 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb 28 04:37:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:46 np0005634017 python3.9[104408]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 28 04:37:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:47 np0005634017 python3.9[104561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:37:48 np0005634017 python3.9[104714]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:37:48 np0005634017 python3.9[104793]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:37:48 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb 28 04:37:48 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb 28 04:37:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:49 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb 28 04:37:49 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb 28 04:37:49 np0005634017 python3.9[104946]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:37:50 np0005634017 python3.9[105101]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 28 04:37:50 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb 28 04:37:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:51 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb 28 04:37:51 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb 28 04:37:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:51 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb 28 04:37:51 np0005634017 python3.9[105255]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 28 04:37:51 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb 28 04:37:51 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb 28 04:37:52 np0005634017 python3.9[105409]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 28 04:37:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb 28 04:37:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb 28 04:37:52 np0005634017 python3.9[105562]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 28 04:37:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:53 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb 28 04:37:53 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb 28 04:37:53 np0005634017 python3.9[105715]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:37:54 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 28 04:37:54 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 28 04:37:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:55 np0005634017 python3.9[105869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:37:55 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb 28 04:37:55 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb 28 04:37:55 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb 28 04:37:55 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb 28 04:37:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:37:56 np0005634017 python3.9[106022]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:37:56 np0005634017 python3.9[106101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:37:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:37:57 np0005634017 python3.9[106254]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:37:58 np0005634017 python3.9[106333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:37:58 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb 28 04:37:58 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb 28 04:37:58 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb 28 04:37:58 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb 28 04:37:58 np0005634017 python3.9[106486]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:37:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:38:00 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 28 04:38:00 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 28 04:38:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:01 np0005634017 python3.9[106637]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:38:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:01 np0005634017 python3.9[106789]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 28 04:38:01 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb 28 04:38:01 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb 28 04:38:02 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb 28 04:38:02 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb 28 04:38:02 np0005634017 python3.9[106939]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:38:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:03 np0005634017 python3.9[107092]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:38:03 np0005634017 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 28 04:38:03 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb 28 04:38:03 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb 28 04:38:03 np0005634017 systemd[1]: tuned.service: Deactivated successfully.
Feb 28 04:38:03 np0005634017 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 28 04:38:03 np0005634017 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 28 04:38:04 np0005634017 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 28 04:38:04 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb 28 04:38:04 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb 28 04:38:04 np0005634017 python3.9[107254]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 28 04:38:04 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb 28 04:38:04 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb 28 04:38:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:05 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb 28 04:38:05 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb 28 04:38:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb 28 04:38:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb 28 04:38:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:07 np0005634017 python3.9[107407]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:38:07 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb 28 04:38:07 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb 28 04:38:07 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb 28 04:38:07 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb 28 04:38:07 np0005634017 python3.9[107562]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:38:08 np0005634017 systemd[1]: session-34.scope: Deactivated successfully.
Feb 28 04:38:08 np0005634017 systemd[1]: session-34.scope: Consumed 1min 3.308s CPU time.
Feb 28 04:38:08 np0005634017 systemd-logind[815]: Session 34 logged out. Waiting for processes to exit.
Feb 28 04:38:08 np0005634017 systemd-logind[815]: Removed session 34.
Feb 28 04:38:08 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb 28 04:38:08 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb 28 04:38:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb 28 04:38:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb 28 04:38:10 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb 28 04:38:10 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb 28 04:38:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:11 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb 28 04:38:11 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb 28 04:38:12 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb 28 04:38:12 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb 28 04:38:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb 28 04:38:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb 28 04:38:14 np0005634017 systemd-logind[815]: New session 35 of user zuul.
Feb 28 04:38:14 np0005634017 systemd[1]: Started Session 35 of User zuul.
Feb 28 04:38:14 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb 28 04:38:14 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb 28 04:38:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:15 np0005634017 python3.9[107742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:38:15 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb 28 04:38:15 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb 28 04:38:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:16 np0005634017 python3.9[107899]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 28 04:38:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:17 np0005634017 python3.9[108053]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:38:17 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb 28 04:38:17 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb 28 04:38:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb 28 04:38:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb 28 04:38:18 np0005634017 python3.9[108138]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 28 04:38:18 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb 28 04:38:18 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb 28 04:38:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:20 np0005634017 python3.9[108292]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:38:20 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb 28 04:38:20 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb 28 04:38:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:21 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb 28 04:38:21 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb 28 04:38:22 np0005634017 python3.9[108446]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:38:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:23 np0005634017 python3.9[108599]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:38:23 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb 28 04:38:23 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb 28 04:38:24 np0005634017 python3.9[108752]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 28 04:38:24 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb 28 04:38:24 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb 28 04:38:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:25 np0005634017 python3.9[108902]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:38:25 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb 28 04:38:25 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb 28 04:38:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:26 np0005634017 python3.9[109061]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:38:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:28 np0005634017 python3.9[109215]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:38:28 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb 28 04:38:28 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb 28 04:38:28 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb 28 04:38:28 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb 28 04:38:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:38:28
Feb 28 04:38:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:38:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:38:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 28 04:38:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:38:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:29 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb 28 04:38:29 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:38:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:38:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb 28 04:38:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb 28 04:38:29 np0005634017 python3.9[109584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.159826976 +0000 UTC m=+0.032640265 container create e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:38:30 np0005634017 systemd[1]: Started libpod-conmon-e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f.scope.
Feb 28 04:38:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.227584361 +0000 UTC m=+0.100397640 container init e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.235849328 +0000 UTC m=+0.108662607 container start e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swanson, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.23912489 +0000 UTC m=+0.111938169 container attach e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.143807159 +0000 UTC m=+0.016620458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:38:30 np0005634017 zealous_swanson[109688]: 167 167
Feb 28 04:38:30 np0005634017 systemd[1]: libpod-e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f.scope: Deactivated successfully.
Feb 28 04:38:30 np0005634017 conmon[109688]: conmon e3221c4160ea62e94ea8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f.scope/container/memory.events
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.245551839 +0000 UTC m=+0.118365118 container died e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 28 04:38:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-16b0c617025759161fd67147388e4986a37200d591ddee4fc96e65e02a2969ce-merged.mount: Deactivated successfully.
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:38:30 np0005634017 podman[109653]: 2026-02-28 09:38:30.285384747 +0000 UTC m=+0.158198036 container remove e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swanson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:38:30 np0005634017 systemd[1]: libpod-conmon-e3221c4160ea62e94ea8562ee65de0c5d7fd2ada5dd4bd5cd5ffae2bc5e3ee9f.scope: Deactivated successfully.
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:38:30 np0005634017 podman[109763]: 2026-02-28 09:38:30.465856062 +0000 UTC m=+0.054636858 container create 58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nobel, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:38:30 np0005634017 systemd[1]: Started libpod-conmon-58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3.scope.
Feb 28 04:38:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:38:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa24d6eef17742e4be9ae538160d885064f0ead1dd6c16dac89e4ef928d071e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa24d6eef17742e4be9ae538160d885064f0ead1dd6c16dac89e4ef928d071e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa24d6eef17742e4be9ae538160d885064f0ead1dd6c16dac89e4ef928d071e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa24d6eef17742e4be9ae538160d885064f0ead1dd6c16dac89e4ef928d071e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa24d6eef17742e4be9ae538160d885064f0ead1dd6c16dac89e4ef928d071e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:30 np0005634017 podman[109763]: 2026-02-28 09:38:30.449708501 +0000 UTC m=+0.038489267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:38:30 np0005634017 podman[109763]: 2026-02-28 09:38:30.551711669 +0000 UTC m=+0.140492435 container init 58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nobel, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:38:30 np0005634017 podman[109763]: 2026-02-28 09:38:30.558551682 +0000 UTC m=+0.147332448 container start 58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:38:30 np0005634017 podman[109763]: 2026-02-28 09:38:30.56203223 +0000 UTC m=+0.150812986 container attach 58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:38:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:38:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:38:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:38:30 np0005634017 python3.9[109858]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:38:30 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb 28 04:38:30 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb 28 04:38:30 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb 28 04:38:30 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb 28 04:38:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:31 np0005634017 peaceful_nobel[109809]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:38:31 np0005634017 peaceful_nobel[109809]: --> All data devices are unavailable
Feb 28 04:38:31 np0005634017 systemd[1]: libpod-58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3.scope: Deactivated successfully.
Feb 28 04:38:31 np0005634017 podman[109763]: 2026-02-28 09:38:31.09803883 +0000 UTC m=+0.686819586 container died 58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:38:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7aa24d6eef17742e4be9ae538160d885064f0ead1dd6c16dac89e4ef928d071e-merged.mount: Deactivated successfully.
Feb 28 04:38:31 np0005634017 podman[109763]: 2026-02-28 09:38:31.136515225 +0000 UTC m=+0.725295981 container remove 58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nobel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:38:31 np0005634017 systemd[1]: libpod-conmon-58b58fe7b7df47be41c4e207697d158116f361e4d368f9b3966086b4ccd725d3.scope: Deactivated successfully.
Feb 28 04:38:31 np0005634017 python3.9[110038]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.513188864 +0000 UTC m=+0.050631133 container create 8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:38:31 np0005634017 systemd[1]: Started libpod-conmon-8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5.scope.
Feb 28 04:38:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.494001568 +0000 UTC m=+0.031443817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.592652783 +0000 UTC m=+0.130095082 container init 8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hawking, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.600435125 +0000 UTC m=+0.137877404 container start 8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hawking, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:38:31 np0005634017 distracted_hawking[110119]: 167 167
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.604490301 +0000 UTC m=+0.141932640 container attach 8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:38:31 np0005634017 systemd[1]: libpod-8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5.scope: Deactivated successfully.
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.606378159 +0000 UTC m=+0.143820438 container died 8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 04:38:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f738b1193416e4ef09481728ae29bc0d168fcb443bd410b6db60b14e24838c4b-merged.mount: Deactivated successfully.
Feb 28 04:38:31 np0005634017 podman[110102]: 2026-02-28 09:38:31.64730016 +0000 UTC m=+0.184742439 container remove 8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:38:31 np0005634017 systemd[1]: libpod-conmon-8c65d5717bb4496679f1f2c2b62b0e52f3b57290234c13d2c711ca815f7397e5.scope: Deactivated successfully.
Feb 28 04:38:31 np0005634017 podman[110143]: 2026-02-28 09:38:31.791333644 +0000 UTC m=+0.056848026 container create d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 04:38:31 np0005634017 systemd[1]: Started libpod-conmon-d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125.scope.
Feb 28 04:38:31 np0005634017 podman[110143]: 2026-02-28 09:38:31.765632746 +0000 UTC m=+0.031147178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:38:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:38:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3968da9f74d7dccb9759b59680c92a968756b36d80aa62d492bce02e9d387797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3968da9f74d7dccb9759b59680c92a968756b36d80aa62d492bce02e9d387797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3968da9f74d7dccb9759b59680c92a968756b36d80aa62d492bce02e9d387797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3968da9f74d7dccb9759b59680c92a968756b36d80aa62d492bce02e9d387797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:31 np0005634017 podman[110143]: 2026-02-28 09:38:31.911340552 +0000 UTC m=+0.176854954 container init d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:38:31 np0005634017 podman[110143]: 2026-02-28 09:38:31.919389832 +0000 UTC m=+0.184904214 container start d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:38:31 np0005634017 podman[110143]: 2026-02-28 09:38:31.922613782 +0000 UTC m=+0.188128224 container attach d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]: {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:    "0": [
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:        {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "devices": [
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "/dev/loop3"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            ],
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_name": "ceph_lv0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_size": "21470642176",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "name": "ceph_lv0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "tags": {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cluster_name": "ceph",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.crush_device_class": "",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.encrypted": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.objectstore": "bluestore",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osd_id": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.type": "block",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.vdo": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.with_tpm": "0"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            },
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "type": "block",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "vg_name": "ceph_vg0"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:        }
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:    ],
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:    "1": [
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:        {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "devices": [
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "/dev/loop4"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            ],
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_name": "ceph_lv1",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_size": "21470642176",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "name": "ceph_lv1",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "tags": {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cluster_name": "ceph",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.crush_device_class": "",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.encrypted": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.objectstore": "bluestore",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osd_id": "1",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.type": "block",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.vdo": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.with_tpm": "0"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            },
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "type": "block",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "vg_name": "ceph_vg1"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:        }
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:    ],
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:    "2": [
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:        {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "devices": [
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "/dev/loop5"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            ],
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_name": "ceph_lv2",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_size": "21470642176",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "name": "ceph_lv2",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "tags": {
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.cluster_name": "ceph",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.crush_device_class": "",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.encrypted": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.objectstore": "bluestore",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osd_id": "2",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.type": "block",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.vdo": "0",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:                "ceph.with_tpm": "0"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            },
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "type": "block",
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:            "vg_name": "ceph_vg2"
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:        }
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]:    ]
Feb 28 04:38:32 np0005634017 boring_ptolemy[110159]: }
Feb 28 04:38:32 np0005634017 systemd[1]: libpod-d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125.scope: Deactivated successfully.
Feb 28 04:38:32 np0005634017 podman[110143]: 2026-02-28 09:38:32.206990486 +0000 UTC m=+0.472504828 container died d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:38:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3968da9f74d7dccb9759b59680c92a968756b36d80aa62d492bce02e9d387797-merged.mount: Deactivated successfully.
Feb 28 04:38:32 np0005634017 podman[110143]: 2026-02-28 09:38:32.252125438 +0000 UTC m=+0.517639820 container remove d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:38:32 np0005634017 systemd[1]: libpod-conmon-d4a95301d56b63a252e6dddc427265b5affa9cfb780bc2a4fdf53e240b5d6125.scope: Deactivated successfully.
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.732039195 +0000 UTC m=+0.042771589 container create b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swirles, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 04:38:32 np0005634017 systemd[1]: Started libpod-conmon-b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17.scope.
Feb 28 04:38:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.812244317 +0000 UTC m=+0.122976651 container init b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.716804692 +0000 UTC m=+0.027537006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.820812823 +0000 UTC m=+0.131545137 container start b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.824360703 +0000 UTC m=+0.135093027 container attach b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Feb 28 04:38:32 np0005634017 vibrant_swirles[110284]: 167 167
Feb 28 04:38:32 np0005634017 systemd[1]: libpod-b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17.scope: Deactivated successfully.
Feb 28 04:38:32 np0005634017 conmon[110284]: conmon b810607b3bed28695236 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17.scope/container/memory.events
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.826944343 +0000 UTC m=+0.137676667 container died b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:38:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-abb4c88a820f0dfc9d87152c1c95343d951ba2e01e0af64d7374a3f34b9cd9ca-merged.mount: Deactivated successfully.
Feb 28 04:38:32 np0005634017 podman[110267]: 2026-02-28 09:38:32.869296879 +0000 UTC m=+0.180029173 container remove b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swirles, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:38:32 np0005634017 systemd[1]: libpod-conmon-b810607b3bed286952366882763268a62db7a48320240d7b3a725e47b06f6b17.scope: Deactivated successfully.
Feb 28 04:38:32 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb 28 04:38:32 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb 28 04:38:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.03255973 +0000 UTC m=+0.047681342 container create c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:38:33 np0005634017 systemd[1]: Started libpod-conmon-c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a.scope.
Feb 28 04:38:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:38:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e327386ba93493916ca2a9835afb67d1c5fc721930c829fe80084dded3e4549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e327386ba93493916ca2a9835afb67d1c5fc721930c829fe80084dded3e4549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e327386ba93493916ca2a9835afb67d1c5fc721930c829fe80084dded3e4549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e327386ba93493916ca2a9835afb67d1c5fc721930c829fe80084dded3e4549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.018585126 +0000 UTC m=+0.033706738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.114508746 +0000 UTC m=+0.129630418 container init c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.119859552 +0000 UTC m=+0.134981164 container start c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.12494993 +0000 UTC m=+0.140071592 container attach c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:38:33 np0005634017 python3.9[110457]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:38:33 np0005634017 lvm[110532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:38:33 np0005634017 lvm[110532]: VG ceph_vg0 finished
Feb 28 04:38:33 np0005634017 lvm[110535]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:38:33 np0005634017 lvm[110535]: VG ceph_vg1 finished
Feb 28 04:38:33 np0005634017 lvm[110537]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:38:33 np0005634017 lvm[110537]: VG ceph_vg2 finished
Feb 28 04:38:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb 28 04:38:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb 28 04:38:33 np0005634017 charming_wilson[110423]: {}
Feb 28 04:38:33 np0005634017 systemd[1]: libpod-c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a.scope: Deactivated successfully.
Feb 28 04:38:33 np0005634017 systemd[1]: libpod-c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a.scope: Consumed 1.074s CPU time.
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.882797571 +0000 UTC m=+0.897919213 container died c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wilson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:38:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9e327386ba93493916ca2a9835afb67d1c5fc721930c829fe80084dded3e4549-merged.mount: Deactivated successfully.
Feb 28 04:38:33 np0005634017 podman[110383]: 2026-02-28 09:38:33.934531288 +0000 UTC m=+0.949652940 container remove c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wilson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:38:33 np0005634017 systemd[1]: libpod-conmon-c42f63915678b583169abc343e266bdc1dde891a2d7095f56655bf66419b9a1a.scope: Deactivated successfully.
Feb 28 04:38:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:38:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:38:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:38:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:38:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:38:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:38:34 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb 28 04:38:34 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb 28 04:38:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:35 np0005634017 python3.9[110729]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:38:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb 28 04:38:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb 28 04:38:35 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb 28 04:38:35 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb 28 04:38:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:36 np0005634017 python3.9[110884]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb 28 04:38:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:37 np0005634017 systemd-logind[815]: Session 35 logged out. Waiting for processes to exit.
Feb 28 04:38:37 np0005634017 systemd[1]: session-35.scope: Deactivated successfully.
Feb 28 04:38:37 np0005634017 systemd[1]: session-35.scope: Consumed 16.677s CPU time.
Feb 28 04:38:37 np0005634017 systemd-logind[815]: Removed session 35.
Feb 28 04:38:38 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb 28 04:38:38 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb 28 04:38:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:39 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb 28 04:38:39 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb 28 04:38:39 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb 28 04:38:39 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:38:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:38:40 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb 28 04:38:40 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb 28 04:38:40 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb 28 04:38:41 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb 28 04:38:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:41 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb 28 04:38:41 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb 28 04:38:41 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb 28 04:38:41 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb 28 04:38:42 np0005634017 systemd-logind[815]: New session 36 of user zuul.
Feb 28 04:38:42 np0005634017 systemd[1]: Started Session 36 of User zuul.
Feb 28 04:38:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:43 np0005634017 python3.9[111062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:38:43 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb 28 04:38:43 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb 28 04:38:43 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb 28 04:38:44 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb 28 04:38:44 np0005634017 python3.9[111216]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:38:44 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 28 04:38:44 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 28 04:38:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:45 np0005634017 python3.9[111409]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:38:46 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb 28 04:38:46 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb 28 04:38:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:46 np0005634017 systemd[1]: session-36.scope: Deactivated successfully.
Feb 28 04:38:46 np0005634017 systemd[1]: session-36.scope: Consumed 2.305s CPU time.
Feb 28 04:38:46 np0005634017 systemd-logind[815]: Session 36 logged out. Waiting for processes to exit.
Feb 28 04:38:46 np0005634017 systemd-logind[815]: Removed session 36.
Feb 28 04:38:46 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb 28 04:38:46 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb 28 04:38:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:47 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb 28 04:38:47 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb 28 04:38:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb 28 04:38:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb 28 04:38:47 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb 28 04:38:47 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb 28 04:38:48 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 28 04:38:48 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 28 04:38:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:51 np0005634017 systemd-logind[815]: New session 37 of user zuul.
Feb 28 04:38:51 np0005634017 systemd[1]: Started Session 37 of User zuul.
Feb 28 04:38:51 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb 28 04:38:51 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb 28 04:38:52 np0005634017 python3.9[111589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:38:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:53 np0005634017 python3.9[111743]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:38:54 np0005634017 python3.9[111900]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:38:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:55 np0005634017 python3.9[111985]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:38:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:57 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb 28 04:38:57 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb 28 04:38:57 np0005634017 python3.9[112139]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:38:58 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb 28 04:38:58 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb 28 04:38:58 np0005634017 python3.9[112335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:38:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:38:59 np0005634017 python3.9[112488]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:39:00 np0005634017 python3.9[112654]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:00 np0005634017 python3.9[112733]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:01 np0005634017 python3.9[112886]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:01 np0005634017 python3.9[112965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:02 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb 28 04:39:02 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb 28 04:39:02 np0005634017 python3.9[113118]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:02 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb 28 04:39:03 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb 28 04:39:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:03 np0005634017 python3.9[113271]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:04 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb 28 04:39:04 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb 28 04:39:04 np0005634017 python3.9[113424]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:04 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb 28 04:39:04 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb 28 04:39:04 np0005634017 python3.9[113577]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:05 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb 28 04:39:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:05 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb 28 04:39:05 np0005634017 python3.9[113730]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:39:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:07 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb 28 04:39:07 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb 28 04:39:07 np0005634017 python3.9[113885]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:39:07 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb 28 04:39:07 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb 28 04:39:08 np0005634017 python3.9[114040]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:39:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb 28 04:39:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb 28 04:39:09 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb 28 04:39:09 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb 28 04:39:09 np0005634017 python3.9[114193]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:39:09 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 28 04:39:09 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 28 04:39:10 np0005634017 python3.9[114346]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:39:10 np0005634017 python3.9[114500]: ansible-service_facts Invoked
Feb 28 04:39:11 np0005634017 network[114517]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:39:11 np0005634017 network[114518]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:39:11 np0005634017 network[114519]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:39:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:12 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb 28 04:39:12 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb 28 04:39:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:13 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb 28 04:39:13 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb 28 04:39:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb 28 04:39:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb 28 04:39:13 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb 28 04:39:13 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb 28 04:39:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:15 np0005634017 python3.9[114974]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:39:15 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 28 04:39:15 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 28 04:39:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:16 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb 28 04:39:16 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb 28 04:39:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb 28 04:39:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb 28 04:39:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb 28 04:39:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb 28 04:39:17 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb 28 04:39:17 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb 28 04:39:18 np0005634017 python3.9[115128]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 28 04:39:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:19 np0005634017 python3.9[115281]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:19 np0005634017 python3.9[115360]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:19 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb 28 04:39:19 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb 28 04:39:20 np0005634017 python3.9[115513]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:21 np0005634017 python3.9[115592]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:21 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb 28 04:39:21 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb 28 04:39:21 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb 28 04:39:21 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb 28 04:39:22 np0005634017 python3.9[115745]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:22 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb 28 04:39:22 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb 28 04:39:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:23 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb 28 04:39:23 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb 28 04:39:23 np0005634017 python3.9[115898]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:39:23 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb 28 04:39:23 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb 28 04:39:24 np0005634017 python3.9[115983]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:39:24 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb 28 04:39:24 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb 28 04:39:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:25 np0005634017 systemd[1]: session-37.scope: Deactivated successfully.
Feb 28 04:39:25 np0005634017 systemd[1]: session-37.scope: Consumed 22.917s CPU time.
Feb 28 04:39:25 np0005634017 systemd-logind[815]: Session 37 logged out. Waiting for processes to exit.
Feb 28 04:39:25 np0005634017 systemd-logind[815]: Removed session 37.
Feb 28 04:39:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:26 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb 28 04:39:26 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb 28 04:39:26 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb 28 04:39:26 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb 28 04:39:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:27 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb 28 04:39:27 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb 28 04:39:27 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb 28 04:39:27 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb 28 04:39:28 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb 28 04:39:28 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb 28 04:39:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:39:28
Feb 28 04:39:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:39:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:39:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log', 'vms']
Feb 28 04:39:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:39:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb 28 04:39:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb 28 04:39:29 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb 28 04:39:29 np0005634017 ceph-osd[89322]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb 28 04:39:30 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb 28 04:39:30 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:39:30 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb 28 04:39:30 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb 28 04:39:30 np0005634017 systemd-logind[815]: New session 38 of user zuul.
Feb 28 04:39:30 np0005634017 systemd[1]: Started Session 38 of User zuul.
Feb 28 04:39:31 np0005634017 systemd[77703]: Created slice User Background Tasks Slice.
Feb 28 04:39:31 np0005634017 systemd[77703]: Starting Cleanup of User's Temporary Files and Directories...
Feb 28 04:39:31 np0005634017 systemd[77703]: Finished Cleanup of User's Temporary Files and Directories.
Feb 28 04:39:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:31 np0005634017 python3.9[116167]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:32 np0005634017 python3.9[116320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:33 np0005634017 python3.9[116399]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb 28 04:39:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb 28 04:39:33 np0005634017 systemd[1]: session-38.scope: Deactivated successfully.
Feb 28 04:39:33 np0005634017 systemd[1]: session-38.scope: Consumed 1.394s CPU time.
Feb 28 04:39:33 np0005634017 systemd-logind[815]: Session 38 logged out. Waiting for processes to exit.
Feb 28 04:39:33 np0005634017 systemd-logind[815]: Removed session 38.
Feb 28 04:39:34 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb 28 04:39:34 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:39:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:39:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb 28 04:39:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.224109558 +0000 UTC m=+0.057546423 container create 1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_buck, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:39:35 np0005634017 systemd[1]: Started libpod-conmon-1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b.scope.
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.199738788 +0000 UTC m=+0.033175693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:39:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.316032089 +0000 UTC m=+0.149468934 container init 1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.324597598 +0000 UTC m=+0.158034453 container start 1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.328792197 +0000 UTC m=+0.162229042 container attach 1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:39:35 np0005634017 vigilant_buck[116589]: 167 167
Feb 28 04:39:35 np0005634017 systemd[1]: libpod-1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b.scope: Deactivated successfully.
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.33118594 +0000 UTC m=+0.164622765 container died 1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_buck, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:39:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-22ce2fa97d0804a17ecd940037bb10ed1b2713e53835287f871bfc7c309d5e83-merged.mount: Deactivated successfully.
Feb 28 04:39:35 np0005634017 podman[116572]: 2026-02-28 09:39:35.370716711 +0000 UTC m=+0.204153566 container remove 1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:39:35 np0005634017 systemd[1]: libpod-conmon-1724db62a043d6b6973ee2cd1d9a153532d788cf7a3d854a8ffbba6430b0925b.scope: Deactivated successfully.
Feb 28 04:39:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:39:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:39:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:39:35 np0005634017 podman[116612]: 2026-02-28 09:39:35.536925008 +0000 UTC m=+0.058533467 container create cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 04:39:35 np0005634017 systemd[1]: Started libpod-conmon-cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39.scope.
Feb 28 04:39:35 np0005634017 podman[116612]: 2026-02-28 09:39:35.513305189 +0000 UTC m=+0.034913698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:39:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:39:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7360e0f5e66883edeed4c45d8df91074948c796e416d3213fb2e8645c95e3f88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7360e0f5e66883edeed4c45d8df91074948c796e416d3213fb2e8645c95e3f88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7360e0f5e66883edeed4c45d8df91074948c796e416d3213fb2e8645c95e3f88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7360e0f5e66883edeed4c45d8df91074948c796e416d3213fb2e8645c95e3f88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7360e0f5e66883edeed4c45d8df91074948c796e416d3213fb2e8645c95e3f88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:35 np0005634017 podman[116612]: 2026-02-28 09:39:35.634623489 +0000 UTC m=+0.156231928 container init cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:39:35 np0005634017 podman[116612]: 2026-02-28 09:39:35.641640007 +0000 UTC m=+0.163248466 container start cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 04:39:35 np0005634017 podman[116612]: 2026-02-28 09:39:35.645003914 +0000 UTC m=+0.166612353 container attach cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:39:36 np0005634017 laughing_snyder[116629]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:39:36 np0005634017 laughing_snyder[116629]: --> All data devices are unavailable
Feb 28 04:39:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:36 np0005634017 systemd[1]: libpod-cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39.scope: Deactivated successfully.
Feb 28 04:39:36 np0005634017 podman[116612]: 2026-02-28 09:39:36.150400748 +0000 UTC m=+0.672009197 container died cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 04:39:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7360e0f5e66883edeed4c45d8df91074948c796e416d3213fb2e8645c95e3f88-merged.mount: Deactivated successfully.
Feb 28 04:39:36 np0005634017 podman[116612]: 2026-02-28 09:39:36.193598161 +0000 UTC m=+0.715206590 container remove cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:39:36 np0005634017 systemd[1]: libpod-conmon-cb14c87675ccc36bdd791d9a3a53af7e02277e35c17e700a5bce47fe83040f39.scope: Deactivated successfully.
Feb 28 04:39:36 np0005634017 podman[116724]: 2026-02-28 09:39:36.661640495 +0000 UTC m=+0.046626101 container create b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:39:36 np0005634017 systemd[1]: Started libpod-conmon-b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1.scope.
Feb 28 04:39:36 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb 28 04:39:36 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb 28 04:39:36 np0005634017 podman[116724]: 2026-02-28 09:39:36.637309236 +0000 UTC m=+0.022294892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:39:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:39:36 np0005634017 podman[116724]: 2026-02-28 09:39:36.748484405 +0000 UTC m=+0.133470011 container init b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wing, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:39:36 np0005634017 podman[116724]: 2026-02-28 09:39:36.756199473 +0000 UTC m=+0.141185039 container start b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:39:36 np0005634017 podman[116724]: 2026-02-28 09:39:36.759861734 +0000 UTC m=+0.144847380 container attach b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wing, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:39:36 np0005634017 goofy_wing[116740]: 167 167
Feb 28 04:39:36 np0005634017 systemd[1]: libpod-b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1.scope: Deactivated successfully.
Feb 28 04:39:36 np0005634017 podman[116745]: 2026-02-28 09:39:36.79975253 +0000 UTC m=+0.027306562 container died b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wing, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:39:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f9179d8cb37537397fcba934a7ea822b9ed910e33389f5de691d94b7663dc0c1-merged.mount: Deactivated successfully.
Feb 28 04:39:36 np0005634017 podman[116745]: 2026-02-28 09:39:36.833573411 +0000 UTC m=+0.061127433 container remove b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 04:39:36 np0005634017 systemd[1]: libpod-conmon-b479d2ef703257fcfcfffa55b74863bea26f7753cf93ec946ec8a1b8bbbcc4d1.scope: Deactivated successfully.
Feb 28 04:39:36 np0005634017 podman[116767]: 2026-02-28 09:39:36.986024436 +0000 UTC m=+0.044252357 container create 8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 04:39:37 np0005634017 systemd[1]: Started libpod-conmon-8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5.scope.
Feb 28 04:39:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:39:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432bba08f9f19949592d7849c6a2c20859bfe8bcf19e8715a44cd1a5e8c98216/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432bba08f9f19949592d7849c6a2c20859bfe8bcf19e8715a44cd1a5e8c98216/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432bba08f9f19949592d7849c6a2c20859bfe8bcf19e8715a44cd1a5e8c98216/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432bba08f9f19949592d7849c6a2c20859bfe8bcf19e8715a44cd1a5e8c98216/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:37 np0005634017 podman[116767]: 2026-02-28 09:39:36.969711199 +0000 UTC m=+0.027939120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:39:37 np0005634017 podman[116767]: 2026-02-28 09:39:37.083986582 +0000 UTC m=+0.142214553 container init 8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:39:37 np0005634017 podman[116767]: 2026-02-28 09:39:37.098625556 +0000 UTC m=+0.156853467 container start 8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 04:39:37 np0005634017 podman[116767]: 2026-02-28 09:39:37.102553681 +0000 UTC m=+0.160781672 container attach 8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]: {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:    "0": [
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:        {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "devices": [
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "/dev/loop3"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            ],
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_name": "ceph_lv0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_size": "21470642176",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "name": "ceph_lv0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "tags": {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cluster_name": "ceph",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.crush_device_class": "",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.encrypted": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.objectstore": "bluestore",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osd_id": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.type": "block",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.vdo": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.with_tpm": "0"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            },
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "type": "block",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "vg_name": "ceph_vg0"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:        }
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:    ],
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:    "1": [
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:        {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "devices": [
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "/dev/loop4"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            ],
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_name": "ceph_lv1",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_size": "21470642176",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "name": "ceph_lv1",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "tags": {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cluster_name": "ceph",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.crush_device_class": "",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.encrypted": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.objectstore": "bluestore",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osd_id": "1",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.type": "block",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.vdo": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.with_tpm": "0"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            },
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "type": "block",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "vg_name": "ceph_vg1"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:        }
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:    ],
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:    "2": [
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:        {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "devices": [
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "/dev/loop5"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            ],
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_name": "ceph_lv2",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_size": "21470642176",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "name": "ceph_lv2",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "tags": {
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.cluster_name": "ceph",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.crush_device_class": "",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.encrypted": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.objectstore": "bluestore",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osd_id": "2",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.type": "block",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.vdo": "0",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:                "ceph.with_tpm": "0"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            },
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "type": "block",
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:            "vg_name": "ceph_vg2"
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:        }
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]:    ]
Feb 28 04:39:37 np0005634017 stupefied_nobel[116784]: }
Feb 28 04:39:37 np0005634017 systemd[1]: libpod-8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5.scope: Deactivated successfully.
Feb 28 04:39:37 np0005634017 podman[116767]: 2026-02-28 09:39:37.394533521 +0000 UTC m=+0.452761412 container died 8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:39:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-432bba08f9f19949592d7849c6a2c20859bfe8bcf19e8715a44cd1a5e8c98216-merged.mount: Deactivated successfully.
Feb 28 04:39:37 np0005634017 podman[116767]: 2026-02-28 09:39:37.44179358 +0000 UTC m=+0.500021511 container remove 8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:39:37 np0005634017 systemd[1]: libpod-conmon-8da42a701ada4b5fa3011bb32571167ec7acb35f36463a9f03a20d4b85aaefd5.scope: Deactivated successfully.
Feb 28 04:39:37 np0005634017 podman[116868]: 2026-02-28 09:39:37.917840905 +0000 UTC m=+0.051946965 container create 45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_euler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:39:37 np0005634017 systemd[1]: Started libpod-conmon-45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da.scope.
Feb 28 04:39:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:39:37 np0005634017 podman[116868]: 2026-02-28 09:39:37.898387794 +0000 UTC m=+0.032493844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:39:37 np0005634017 podman[116868]: 2026-02-28 09:39:37.998281497 +0000 UTC m=+0.132387607 container init 45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:39:38 np0005634017 podman[116868]: 2026-02-28 09:39:38.005535328 +0000 UTC m=+0.139641368 container start 45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_euler, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:39:38 np0005634017 podman[116868]: 2026-02-28 09:39:38.009031026 +0000 UTC m=+0.143137146 container attach 45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:39:38 np0005634017 elegant_euler[116885]: 167 167
Feb 28 04:39:38 np0005634017 systemd[1]: libpod-45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da.scope: Deactivated successfully.
Feb 28 04:39:38 np0005634017 podman[116868]: 2026-02-28 09:39:38.010302244 +0000 UTC m=+0.144408314 container died 45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_euler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:39:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c65887ce78bdd527351592c2d74d9a4a3cb41c6a36b8553a736d5a90a0ca2ca4-merged.mount: Deactivated successfully.
Feb 28 04:39:38 np0005634017 podman[116868]: 2026-02-28 09:39:38.05447589 +0000 UTC m=+0.188581960 container remove 45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:39:38 np0005634017 systemd[1]: libpod-conmon-45ae91e84e6aa01ff48f9869a83fbbcebdd8cb6fd5e25e4d9453f195995475da.scope: Deactivated successfully.
Feb 28 04:39:38 np0005634017 podman[116909]: 2026-02-28 09:39:38.231622339 +0000 UTC m=+0.049983378 container create 9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dijkstra, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:39:38 np0005634017 systemd[1]: Started libpod-conmon-9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0.scope.
Feb 28 04:39:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:39:38 np0005634017 podman[116909]: 2026-02-28 09:39:38.206801123 +0000 UTC m=+0.025162192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:39:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eacf2ec3839816585e182894f9d46dfd28015a6bc6ecbaf4a02d25ebf1c6e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eacf2ec3839816585e182894f9d46dfd28015a6bc6ecbaf4a02d25ebf1c6e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eacf2ec3839816585e182894f9d46dfd28015a6bc6ecbaf4a02d25ebf1c6e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eacf2ec3839816585e182894f9d46dfd28015a6bc6ecbaf4a02d25ebf1c6e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:39:38 np0005634017 podman[116909]: 2026-02-28 09:39:38.337662467 +0000 UTC m=+0.156023516 container init 9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:39:38 np0005634017 podman[116909]: 2026-02-28 09:39:38.34504404 +0000 UTC m=+0.163405059 container start 9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:39:38 np0005634017 podman[116909]: 2026-02-28 09:39:38.348722931 +0000 UTC m=+0.167083950 container attach 9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:39:38 np0005634017 lvm[117004]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:39:38 np0005634017 lvm[117004]: VG ceph_vg1 finished
Feb 28 04:39:38 np0005634017 lvm[117003]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:39:38 np0005634017 lvm[117003]: VG ceph_vg0 finished
Feb 28 04:39:38 np0005634017 lvm[117006]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:39:38 np0005634017 lvm[117006]: VG ceph_vg2 finished
Feb 28 04:39:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:39 np0005634017 epic_dijkstra[116925]: {}
Feb 28 04:39:39 np0005634017 systemd[1]: libpod-9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0.scope: Deactivated successfully.
Feb 28 04:39:39 np0005634017 systemd[1]: libpod-9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0.scope: Consumed 1.100s CPU time.
Feb 28 04:39:39 np0005634017 podman[117009]: 2026-02-28 09:39:39.17075267 +0000 UTC m=+0.036196116 container died 9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:39:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b1eacf2ec3839816585e182894f9d46dfd28015a6bc6ecbaf4a02d25ebf1c6e9-merged.mount: Deactivated successfully.
Feb 28 04:39:39 np0005634017 podman[117009]: 2026-02-28 09:39:39.209176065 +0000 UTC m=+0.074619491 container remove 9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dijkstra, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 04:39:39 np0005634017 systemd[1]: libpod-conmon-9b0b8926d50e6b358bfb51e0262c473e8adf2eaf6b12c5d660da0479c1f020e0.scope: Deactivated successfully.
Feb 28 04:39:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:39:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:39:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:39:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:39:39 np0005634017 systemd-logind[815]: New session 39 of user zuul.
Feb 28 04:39:39 np0005634017 systemd[1]: Started Session 39 of User zuul.
Feb 28 04:39:40 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb 28 04:39:40 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:39:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:39:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:39:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:39:40 np0005634017 python3.9[117202]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:39:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:41 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb 28 04:39:41 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb 28 04:39:41 np0005634017 python3.9[117359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:42 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb 28 04:39:42 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb 28 04:39:42 np0005634017 python3.9[117535]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:42 np0005634017 python3.9[117614]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.mm2_4dhm recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.328256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583328524, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7170, "num_deletes": 252, "total_data_size": 9961728, "memory_usage": 10136968, "flush_reason": "Manual Compaction"}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583370058, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7807087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7313, "table_properties": {"data_size": 7780687, "index_size": 17261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74621, "raw_average_key_size": 23, "raw_value_size": 7718634, "raw_average_value_size": 2394, "num_data_blocks": 759, "num_entries": 3223, "num_filter_entries": 3223, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271190, "oldest_key_time": 1772271190, "file_creation_time": 1772271583, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 41874 microseconds, and 18687 cpu microseconds.
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.370131) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7807087 bytes OK
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.370156) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.371596) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.371612) EVENT_LOG_v1 {"time_micros": 1772271583371607, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.371646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9930594, prev total WAL file size 9930594, number of live WAL files 2.
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.373670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7624KB) 13(58KB) 8(1944B)]
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583373750, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7868987, "oldest_snapshot_seqno": -1}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3048 keys, 7821886 bytes, temperature: kUnknown
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583407193, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7821886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7795821, "index_size": 17315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73024, "raw_average_key_size": 23, "raw_value_size": 7735152, "raw_average_value_size": 2537, "num_data_blocks": 762, "num_entries": 3048, "num_filter_entries": 3048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772271583, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.407470) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7821886 bytes
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.409171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.7 rd, 233.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3338, records dropped: 290 output_compression: NoCompression
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.409225) EVENT_LOG_v1 {"time_micros": 1772271583409201, "job": 4, "event": "compaction_finished", "compaction_time_micros": 33529, "compaction_time_cpu_micros": 14905, "output_level": 6, "num_output_files": 1, "total_output_size": 7821886, "num_input_records": 3338, "num_output_records": 3048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583411152, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583411248, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271583411325, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 28 04:39:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:39:43.373547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:39:43 np0005634017 python3.9[117768]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:44 np0005634017 python3.9[117847]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.cc6kc9jg recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:44 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb 28 04:39:44 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb 28 04:39:45 np0005634017 python3.9[118000]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:45 np0005634017 python3.9[118153]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:45 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb 28 04:39:45 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb 28 04:39:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:46 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb 28 04:39:46 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb 28 04:39:46 np0005634017 python3.9[118232]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:46 np0005634017 python3.9[118385]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:47 np0005634017 python3.9[118464]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:39:47 np0005634017 python3.9[118617]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:48 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb 28 04:39:48 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb 28 04:39:48 np0005634017 python3.9[118770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:48 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 28 04:39:48 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 28 04:39:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:49 np0005634017 python3.9[118849]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:49 np0005634017 python3.9[119002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:50 np0005634017 python3.9[119081]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:51 np0005634017 python3.9[119234]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:39:51 np0005634017 systemd[1]: Reloading.
Feb 28 04:39:51 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:39:51 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:39:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb 28 04:39:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb 28 04:39:52 np0005634017 python3.9[119432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:53 np0005634017 python3.9[119511]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:53 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb 28 04:39:53 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb 28 04:39:53 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb 28 04:39:53 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb 28 04:39:53 np0005634017 python3.9[119664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:39:54 np0005634017 python3.9[119743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:39:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:55 np0005634017 python3.9[119896]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:39:55 np0005634017 systemd[1]: Reloading.
Feb 28 04:39:55 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:39:55 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:39:55 np0005634017 systemd[1]: Starting Create netns directory...
Feb 28 04:39:55 np0005634017 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 28 04:39:55 np0005634017 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 28 04:39:55 np0005634017 systemd[1]: Finished Create netns directory.
Feb 28 04:39:55 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb 28 04:39:55 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb 28 04:39:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:39:56 np0005634017 python3.9[120096]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:39:56 np0005634017 network[120113]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:39:56 np0005634017 network[120114]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:39:56 np0005634017 network[120115]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:39:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:39:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb 28 04:39:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb 28 04:39:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:40:00 np0005634017 python3.9[120379]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:00 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb 28 04:40:00 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb 28 04:40:01 np0005634017 python3.9[120458]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:01 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb 28 04:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:01 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb 28 04:40:01 np0005634017 python3.9[120611]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:02 np0005634017 python3.9[120764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:02 np0005634017 python3.9[120843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:03 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb 28 04:40:03 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb 28 04:40:03 np0005634017 python3.9[120996]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 28 04:40:03 np0005634017 systemd[1]: Starting Time & Date Service...
Feb 28 04:40:03 np0005634017 systemd[1]: Started Time & Date Service.
Feb 28 04:40:04 np0005634017 python3.9[121153]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:04 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 28 04:40:04 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 28 04:40:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:05 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb 28 04:40:05 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb 28 04:40:05 np0005634017 python3.9[121306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:05 np0005634017 python3.9[121385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:06 np0005634017 python3.9[121538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:06 np0005634017 python3.9[121617]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0j0oud16 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb 28 04:40:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb 28 04:40:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:07 np0005634017 python3.9[121770]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:08 np0005634017 python3.9[121849]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:08 np0005634017 python3.9[122002]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:40:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb 28 04:40:09 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb 28 04:40:09 np0005634017 python3[122156]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 28 04:40:10 np0005634017 python3.9[122309]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:11 np0005634017 python3.9[122388]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:11 np0005634017 python3.9[122541]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:12 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 28 04:40:12 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 28 04:40:12 np0005634017 python3.9[122667]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271611.2457116-308-130240253822570/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 28 04:40:13 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 28 04:40:13 np0005634017 python3.9[122820]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:13 np0005634017 python3.9[122899]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:14 np0005634017 python3.9[123052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:14 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb 28 04:40:14 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb 28 04:40:14 np0005634017 python3.9[123131]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:15 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb 28 04:40:15 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb 28 04:40:15 np0005634017 python3.9[123284]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:16 np0005634017 python3.9[123363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:16 np0005634017 python3.9[123516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:40:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 28 04:40:17 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 28 04:40:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 28 04:40:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 28 04:40:17 np0005634017 python3.9[123672]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:18 np0005634017 python3.9[123825]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:18 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb 28 04:40:18 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb 28 04:40:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:19 np0005634017 python3.9[123978]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:19 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 28 04:40:19 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 28 04:40:20 np0005634017 python3.9[124131]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 28 04:40:20 np0005634017 python3.9[124284]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 28 04:40:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:21 np0005634017 systemd-logind[815]: Session 39 logged out. Waiting for processes to exit.
Feb 28 04:40:21 np0005634017 systemd[1]: session-39.scope: Deactivated successfully.
Feb 28 04:40:21 np0005634017 systemd[1]: session-39.scope: Consumed 28.726s CPU time.
Feb 28 04:40:21 np0005634017 systemd-logind[815]: Removed session 39.
Feb 28 04:40:21 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb 28 04:40:21 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb 28 04:40:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:23 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 28 04:40:23 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 28 04:40:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:26 np0005634017 systemd-logind[815]: New session 40 of user zuul.
Feb 28 04:40:26 np0005634017 systemd[1]: Started Session 40 of User zuul.
Feb 28 04:40:27 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb 28 04:40:27 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb 28 04:40:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:27 np0005634017 python3.9[124465]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 28 04:40:28 np0005634017 python3.9[124618]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:40:28 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Feb 28 04:40:28 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Feb 28 04:40:28 np0005634017 python3.9[124773]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 28 04:40:28 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb 28 04:40:28 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb 28 04:40:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:40:28
Feb 28 04:40:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:40:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:40:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'default.rgw.log']
Feb 28 04:40:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:40:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:29 np0005634017 python3.9[124926]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2evl9efe follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:40:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb 28 04:40:29 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb 28 04:40:30 np0005634017 python3.9[125052]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2evl9efe mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271628.9816976-44-141640026080619/.source.2evl9efe _original_basename=.39caaskt follow=False checksum=d17b144a76f8846522f30892e552cfb8b320da4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:40:30 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb 28 04:40:30 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb 28 04:40:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:31 np0005634017 python3.9[125205]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:40:32 np0005634017 python3.9[125358]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5URvrNFl9MhZBO2s62M/ln92UfkblaawhnJ9KezBXOb7Hx/bDKfNRPA3W5DSqIrJ4/o9TyYOzuD/9/jVe2uFHTrv+299xa2/InQ9VToKnL3ZnV0l2Dar8UyCAmumucJnnDzmSuxKtmh1r1Gi86xNXC7iEl5FXqXBCpJ+hh4d5tO4k9PEkVFDKX1tJ7l3NA2C42PXXgmKVkUUEN99DDu3ZM2BowCA16/UoFKdN3dZOU06w7syOjcgiq07QMFVtfdy3daBaaCMBy4kh4DOAILTp0GVVJ10vyxpxmHlSCErE5OofOjmIPt2pcWXYXHwFS0F0vK8L3LHyxQrrWm/RLuXkLkrKNpWvE4Xx0Cdz1vqJ1+8CYT6PunGFtNxfZxMOTzHNLNtHXT/M2b+yX3q/WPNfXkiELpx4pj0Nz/ktCqpQ9fjBPX+9s8TvqsT3OYyq48gg7oxQ7EOweyuNBpSVRNAs+j18kt+39ECqzrr9rKyz2wNQw7sHnEZirjSwolL1qKk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPhhP9YEEfXzk0Ffz95IfaT7BJRnxVJ55LFCpYqclnB5#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOPc9R9av/qZ1K7mt9e0JZiyAzatp1ty2WUYoo+vXgranh+Y+IO49K8NUIIDuz3QS5vA8TOp/Roo0BoP/4nbGSw=#012 create=True mode=0644 path=/tmp/ansible.2evl9efe state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:33 np0005634017 python3.9[125511]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2evl9efe' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:40:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Feb 28 04:40:33 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Feb 28 04:40:33 np0005634017 python3.9[125666]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2evl9efe state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:33 np0005634017 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 28 04:40:33 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb 28 04:40:34 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb 28 04:40:34 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb 28 04:40:34 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb 28 04:40:34 np0005634017 systemd[1]: session-40.scope: Deactivated successfully.
Feb 28 04:40:34 np0005634017 systemd[1]: session-40.scope: Consumed 4.683s CPU time.
Feb 28 04:40:34 np0005634017 systemd-logind[815]: Session 40 logged out. Waiting for processes to exit.
Feb 28 04:40:34 np0005634017 systemd-logind[815]: Removed session 40.
Feb 28 04:40:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.b scrub starts
Feb 28 04:40:35 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.b scrub ok
Feb 28 04:40:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:36 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb 28 04:40:36 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb 28 04:40:37 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Feb 28 04:40:37 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Feb 28 04:40:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:38 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 28 04:40:38 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 28 04:40:39 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.d scrub starts
Feb 28 04:40:39 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.d scrub ok
Feb 28 04:40:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:39 np0005634017 systemd-logind[815]: New session 41 of user zuul.
Feb 28 04:40:39 np0005634017 systemd[1]: Started Session 41 of User zuul.
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:40:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:40:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:40:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:40:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:40:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.349353358 +0000 UTC m=+0.043541772 container create 039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_swanson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:40:40 np0005634017 systemd[1]: Started libpod-conmon-039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff.scope.
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.330183948 +0000 UTC m=+0.024372152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:40:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.449716891 +0000 UTC m=+0.143905125 container init 039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.45890119 +0000 UTC m=+0.153089404 container start 039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_swanson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.463248138 +0000 UTC m=+0.157436342 container attach 039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_swanson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:40:40 np0005634017 unruffled_swanson[126006]: 167 167
Feb 28 04:40:40 np0005634017 systemd[1]: libpod-039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff.scope: Deactivated successfully.
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.4673709 +0000 UTC m=+0.161559114 container died 039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:40:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9c9170ff1b59d4b368b37747d98883c0ce4f7c11d87c944297890aaf57670c31-merged.mount: Deactivated successfully.
Feb 28 04:40:40 np0005634017 podman[125963]: 2026-02-28 09:40:40.521811297 +0000 UTC m=+0.215999501 container remove 039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:40:40 np0005634017 systemd[1]: libpod-conmon-039207536ba80e8a405ee2a3c1d209a4f80268aaddab0e48c9a894e7c18eaaff.scope: Deactivated successfully.
Feb 28 04:40:40 np0005634017 python3.9[126003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:40:40 np0005634017 podman[126032]: 2026-02-28 09:40:40.681950012 +0000 UTC m=+0.050844120 container create ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:40:40 np0005634017 systemd[1]: Started libpod-conmon-ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9.scope.
Feb 28 04:40:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:40:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd58e76580425820979f145b0216b1272efb4829f40fd34a89d9f21bdf765ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd58e76580425820979f145b0216b1272efb4829f40fd34a89d9f21bdf765ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd58e76580425820979f145b0216b1272efb4829f40fd34a89d9f21bdf765ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd58e76580425820979f145b0216b1272efb4829f40fd34a89d9f21bdf765ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd58e76580425820979f145b0216b1272efb4829f40fd34a89d9f21bdf765ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:40 np0005634017 podman[126032]: 2026-02-28 09:40:40.663729958 +0000 UTC m=+0.032624156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:40:40 np0005634017 podman[126032]: 2026-02-28 09:40:40.771833961 +0000 UTC m=+0.140728119 container init ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:40:40 np0005634017 podman[126032]: 2026-02-28 09:40:40.787952908 +0000 UTC m=+0.156847016 container start ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:40:40 np0005634017 podman[126032]: 2026-02-28 09:40:40.791450133 +0000 UTC m=+0.160344301 container attach ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb 28 04:40:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:41 np0005634017 pensive_murdock[126053]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:40:41 np0005634017 pensive_murdock[126053]: --> All data devices are unavailable
Feb 28 04:40:41 np0005634017 systemd[1]: libpod-ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9.scope: Deactivated successfully.
Feb 28 04:40:41 np0005634017 conmon[126053]: conmon ebf1ce074fde9cf56174 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9.scope/container/memory.events
Feb 28 04:40:41 np0005634017 podman[126032]: 2026-02-28 09:40:41.363087933 +0000 UTC m=+0.731982091 container died ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 04:40:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7fd58e76580425820979f145b0216b1272efb4829f40fd34a89d9f21bdf765ef-merged.mount: Deactivated successfully.
Feb 28 04:40:41 np0005634017 podman[126032]: 2026-02-28 09:40:41.412782872 +0000 UTC m=+0.781676990 container remove ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:40:41 np0005634017 systemd[1]: libpod-conmon-ebf1ce074fde9cf56174df66afe96e9d25b5b9a014d9282eb21cbf79344a1be9.scope: Deactivated successfully.
Feb 28 04:40:41 np0005634017 podman[126300]: 2026-02-28 09:40:41.898216913 +0000 UTC m=+0.057571123 container create 43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 04:40:41 np0005634017 systemd[1]: Started libpod-conmon-43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b.scope.
Feb 28 04:40:41 np0005634017 podman[126300]: 2026-02-28 09:40:41.872556926 +0000 UTC m=+0.031911196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:40:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:40:41 np0005634017 podman[126300]: 2026-02-28 09:40:41.982860369 +0000 UTC m=+0.142214639 container init 43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:40:41 np0005634017 python3.9[126287]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 28 04:40:41 np0005634017 podman[126300]: 2026-02-28 09:40:41.991500284 +0000 UTC m=+0.150854514 container start 43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bassi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 04:40:41 np0005634017 podman[126300]: 2026-02-28 09:40:41.995519393 +0000 UTC m=+0.154873673 container attach 43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bassi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 04:40:41 np0005634017 crazy_bassi[126317]: 167 167
Feb 28 04:40:42 np0005634017 systemd[1]: libpod-43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b.scope: Deactivated successfully.
Feb 28 04:40:42 np0005634017 podman[126300]: 2026-02-28 09:40:42.001011622 +0000 UTC m=+0.160365852 container died 43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 04:40:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dbc16bf651225de53d01cba58bbcfb858bfea04d652c33e058c7b9db871cc9c0-merged.mount: Deactivated successfully.
Feb 28 04:40:42 np0005634017 podman[126300]: 2026-02-28 09:40:42.042933499 +0000 UTC m=+0.202287729 container remove 43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bassi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:40:42 np0005634017 systemd[1]: libpod-conmon-43e754f6f03a2ffb1853e4349cc393857250807bde5bfabdcc1d5eb18e9f589b.scope: Deactivated successfully.
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.178855487 +0000 UTC m=+0.048167658 container create d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:40:42 np0005634017 systemd[1]: Started libpod-conmon-d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19.scope.
Feb 28 04:40:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:40:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7aff31c71cfcf3151f4c962d65c7e9a55a978b92bfe6a663d4be854bb33399f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7aff31c71cfcf3151f4c962d65c7e9a55a978b92bfe6a663d4be854bb33399f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7aff31c71cfcf3151f4c962d65c7e9a55a978b92bfe6a663d4be854bb33399f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7aff31c71cfcf3151f4c962d65c7e9a55a978b92bfe6a663d4be854bb33399f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.157361754 +0000 UTC m=+0.026673905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.26447053 +0000 UTC m=+0.133782701 container init d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.276715132 +0000 UTC m=+0.146027303 container start d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.280874175 +0000 UTC m=+0.150186396 container attach d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:40:42 np0005634017 fervent_curran[126406]: {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:    "0": [
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:        {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "devices": [
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "/dev/loop3"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            ],
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_name": "ceph_lv0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_size": "21470642176",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "name": "ceph_lv0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "tags": {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cluster_name": "ceph",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.crush_device_class": "",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.encrypted": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.objectstore": "bluestore",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osd_id": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.type": "block",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.vdo": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.with_tpm": "0"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            },
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "type": "block",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "vg_name": "ceph_vg0"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:        }
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:    ],
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:    "1": [
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:        {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "devices": [
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "/dev/loop4"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            ],
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_name": "ceph_lv1",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_size": "21470642176",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "name": "ceph_lv1",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "tags": {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cluster_name": "ceph",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.crush_device_class": "",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.encrypted": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.objectstore": "bluestore",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osd_id": "1",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.type": "block",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.vdo": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.with_tpm": "0"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            },
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "type": "block",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "vg_name": "ceph_vg1"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:        }
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:    ],
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:    "2": [
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:        {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "devices": [
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "/dev/loop5"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            ],
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_name": "ceph_lv2",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_size": "21470642176",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "name": "ceph_lv2",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "tags": {
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.cluster_name": "ceph",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.crush_device_class": "",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.encrypted": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.objectstore": "bluestore",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osd_id": "2",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.type": "block",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.vdo": "0",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:                "ceph.with_tpm": "0"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            },
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "type": "block",
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:            "vg_name": "ceph_vg2"
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:        }
Feb 28 04:40:42 np0005634017 fervent_curran[126406]:    ]
Feb 28 04:40:42 np0005634017 fervent_curran[126406]: }
Feb 28 04:40:42 np0005634017 systemd[1]: libpod-d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19.scope: Deactivated successfully.
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.61054127 +0000 UTC m=+0.479853401 container died d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:40:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f7aff31c71cfcf3151f4c962d65c7e9a55a978b92bfe6a663d4be854bb33399f-merged.mount: Deactivated successfully.
Feb 28 04:40:42 np0005634017 podman[126366]: 2026-02-28 09:40:42.650054812 +0000 UTC m=+0.519366943 container remove d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 04:40:42 np0005634017 systemd[1]: libpod-conmon-d305b99c7c81d2a24299eb0c62a520e7b02ccca2a6b24a510d91db6ca87aba19.scope: Deactivated successfully.
Feb 28 04:40:42 np0005634017 python3.9[126519]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:40:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.121493023 +0000 UTC m=+0.051714434 container create 0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 04:40:43 np0005634017 systemd[1]: Started libpod-conmon-0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89.scope.
Feb 28 04:40:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.196500808 +0000 UTC m=+0.126722269 container init 0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_herschel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.102512668 +0000 UTC m=+0.032734079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.205008709 +0000 UTC m=+0.135230120 container start 0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.20797832 +0000 UTC m=+0.138199741 container attach 0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_herschel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:40:43 np0005634017 eloquent_herschel[126672]: 167 167
Feb 28 04:40:43 np0005634017 systemd[1]: libpod-0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89.scope: Deactivated successfully.
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.209679906 +0000 UTC m=+0.139901327 container died 0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_herschel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:40:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-cabb54c84e28359b68100ef506a9597cd3baa450519ed478d075ba60a055354a-merged.mount: Deactivated successfully.
Feb 28 04:40:43 np0005634017 podman[126620]: 2026-02-28 09:40:43.25074594 +0000 UTC m=+0.180967361 container remove 0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_herschel, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:40:43 np0005634017 systemd[1]: libpod-conmon-0a3217ad9761d1cdff285b623cf3d6608cec77e2456c69812b1b0e7603d15a89.scope: Deactivated successfully.
Feb 28 04:40:43 np0005634017 podman[126713]: 2026-02-28 09:40:43.417399152 +0000 UTC m=+0.058008275 container create 90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keldysh, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:40:43 np0005634017 systemd[1]: Started libpod-conmon-90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0.scope.
Feb 28 04:40:43 np0005634017 podman[126713]: 2026-02-28 09:40:43.394454899 +0000 UTC m=+0.035064062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:40:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:40:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000a247ea0d9caf0957be769dae6dd37a59a7d96200c11b0574c9604fd32cd25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000a247ea0d9caf0957be769dae6dd37a59a7d96200c11b0574c9604fd32cd25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000a247ea0d9caf0957be769dae6dd37a59a7d96200c11b0574c9604fd32cd25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000a247ea0d9caf0957be769dae6dd37a59a7d96200c11b0574c9604fd32cd25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:40:43 np0005634017 podman[126713]: 2026-02-28 09:40:43.523679946 +0000 UTC m=+0.164289049 container init 90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keldysh, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:40:43 np0005634017 podman[126713]: 2026-02-28 09:40:43.537956203 +0000 UTC m=+0.178565326 container start 90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keldysh, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:40:43 np0005634017 podman[126713]: 2026-02-28 09:40:43.542556878 +0000 UTC m=+0.183165981 container attach 90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:40:43 np0005634017 python3.9[126811]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:40:44 np0005634017 lvm[126965]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:40:44 np0005634017 lvm[126961]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:40:44 np0005634017 lvm[126961]: VG ceph_vg0 finished
Feb 28 04:40:44 np0005634017 lvm[126963]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:40:44 np0005634017 lvm[126963]: VG ceph_vg1 finished
Feb 28 04:40:44 np0005634017 lvm[126965]: VG ceph_vg2 finished
Feb 28 04:40:44 np0005634017 fervent_keldysh[126750]: {}
Feb 28 04:40:44 np0005634017 systemd[1]: libpod-90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0.scope: Deactivated successfully.
Feb 28 04:40:44 np0005634017 systemd[1]: libpod-90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0.scope: Consumed 1.010s CPU time.
Feb 28 04:40:44 np0005634017 podman[126713]: 2026-02-28 09:40:44.280609702 +0000 UTC m=+0.921218825 container died 90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 04:40:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-000a247ea0d9caf0957be769dae6dd37a59a7d96200c11b0574c9604fd32cd25-merged.mount: Deactivated successfully.
Feb 28 04:40:44 np0005634017 podman[126713]: 2026-02-28 09:40:44.33469974 +0000 UTC m=+0.975308863 container remove 90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 04:40:44 np0005634017 systemd[1]: libpod-conmon-90a88ff6e1b49f73b5977e0b861b7afc37a4f9814eaaf5f690adc92c221273a0.scope: Deactivated successfully.
Feb 28 04:40:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:40:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:40:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:40:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:40:44 np0005634017 python3.9[127082]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:40:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:40:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:40:45 np0005634017 python3.9[127235]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:40:45 np0005634017 systemd[1]: session-41.scope: Deactivated successfully.
Feb 28 04:40:45 np0005634017 systemd[1]: session-41.scope: Consumed 3.707s CPU time.
Feb 28 04:40:45 np0005634017 systemd-logind[815]: Session 41 logged out. Waiting for processes to exit.
Feb 28 04:40:45 np0005634017 systemd-logind[815]: Removed session 41.
Feb 28 04:40:45 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb 28 04:40:46 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb 28 04:40:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 28 04:40:47 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 28 04:40:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:49 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb 28 04:40:49 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb 28 04:40:50 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Feb 28 04:40:50 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Feb 28 04:40:50 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb 28 04:40:50 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb 28 04:40:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:51 np0005634017 systemd-logind[815]: New session 42 of user zuul.
Feb 28 04:40:51 np0005634017 systemd[1]: Started Session 42 of User zuul.
Feb 28 04:40:51 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb 28 04:40:52 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb 28 04:40:52 np0005634017 python3.9[127413]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:40:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:54 np0005634017 python3.9[127570]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:40:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:55 np0005634017 python3.9[127655]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 28 04:40:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:40:56 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb 28 04:40:56 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb 28 04:40:56 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb 28 04:40:56 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb 28 04:40:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:57 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb 28 04:40:57 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb 28 04:40:57 np0005634017 python3.9[127806]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:40:57 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb 28 04:40:57 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb 28 04:40:58 np0005634017 python3.9[127957]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 28 04:40:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb 28 04:40:58 np0005634017 ceph-osd[87202]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb 28 04:40:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:40:59 np0005634017 python3.9[128107]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:41:00 np0005634017 python3.9[128257]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/nova follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:41:00 np0005634017 systemd[1]: session-17.scope: Deactivated successfully.
Feb 28 04:41:00 np0005634017 systemd[1]: session-17.scope: Consumed 1min 34.218s CPU time.
Feb 28 04:41:00 np0005634017 systemd-logind[815]: Session 17 logged out. Waiting for processes to exit.
Feb 28 04:41:00 np0005634017 systemd-logind[815]: Removed session 17.
Feb 28 04:41:00 np0005634017 systemd-logind[815]: Session 42 logged out. Waiting for processes to exit.
Feb 28 04:41:00 np0005634017 systemd[1]: session-42.scope: Deactivated successfully.
Feb 28 04:41:00 np0005634017 systemd[1]: session-42.scope: Consumed 5.474s CPU time.
Feb 28 04:41:00 np0005634017 systemd-logind[815]: Removed session 42.
Feb 28 04:41:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:02 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb 28 04:41:02 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb 28 04:41:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:05 np0005634017 systemd-logind[815]: New session 43 of user zuul.
Feb 28 04:41:05 np0005634017 systemd[1]: Started Session 43 of User zuul.
Feb 28 04:41:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb 28 04:41:06 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb 28 04:41:06 np0005634017 python3.9[128437]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:41:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:07 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb 28 04:41:07 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb 28 04:41:08 np0005634017 python3.9[128594]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:09 np0005634017 python3.9[128747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:10 np0005634017 python3.9[128900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:10 np0005634017 python3.9[129024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271669.4123008-60-58460079245933/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=53f6326f1bd732e78ab97cbdca8b65589edf6c49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:11 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb 28 04:41:11 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb 28 04:41:11 np0005634017 python3.9[129177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:11 np0005634017 python3.9[129301]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271670.9192061-60-70095446997134/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f2779139e6bd983850883ab25d7bcafa3d089a73 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:12 np0005634017 python3.9[129454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:13 np0005634017 python3.9[129578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271672.1581566-60-265342340214886/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0c700f4b306d7a20cf984f6f6b16a4c390dc4a01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:13 np0005634017 python3.9[129731]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:14 np0005634017 python3.9[129884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:15 np0005634017 python3.9[130037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:15 np0005634017 python3.9[130161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271674.851559-119-202275045303359/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ec83d8644f852acb94c44d26c40e3041f3172717 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:16 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb 28 04:41:16 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb 28 04:41:16 np0005634017 python3.9[130314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb 28 04:41:17 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb 28 04:41:17 np0005634017 python3.9[130438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271676.1288416-119-234839756456084/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2e374956f2579985e04edfe2b834556be4a6bcec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:17 np0005634017 python3.9[130591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:18 np0005634017 python3.9[130715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271677.43511-119-153594085395687/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=53baa598d39880bf77e7750869f3f1b91e4f6061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:19 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb 28 04:41:19 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb 28 04:41:19 np0005634017 python3.9[130868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:19 np0005634017 python3.9[131021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:20 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb 28 04:41:20 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb 28 04:41:20 np0005634017 python3.9[131174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:21 np0005634017 python3.9[131298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271680.0531564-178-179271375884379/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c0023b09fa22d40e1725630e1ff21cb6d4c45e16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:21 np0005634017 python3.9[131451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:22 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb 28 04:41:22 np0005634017 ceph-osd[88267]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb 28 04:41:22 np0005634017 python3.9[131575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271681.2508092-178-129513715556297/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2e374956f2579985e04edfe2b834556be4a6bcec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:22 np0005634017 python3.9[131728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:23 np0005634017 python3.9[131852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271682.374518-178-221870395662272/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ef6e25976c65ef87d2cdaa0022aa026256b0d0b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:24 np0005634017 python3.9[132005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:25 np0005634017 python3.9[132158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:25 np0005634017 python3.9[132282]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271684.7366283-246-207269481054613/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:26 np0005634017 python3.9[132435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:27 np0005634017 python3.9[132588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:27 np0005634017 python3.9[132712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271686.6774502-270-7017107492820/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:28 np0005634017 python3.9[132865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:41:28
Feb 28 04:41:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:41:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:41:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr']
Feb 28 04:41:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:41:29 np0005634017 python3.9[133018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:29 np0005634017 python3.9[133142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271688.5957212-294-176684281283106/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:30 np0005634017 python3.9[133295]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:41:30 np0005634017 python3.9[133448]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:31 np0005634017 python3.9[133572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271690.3357701-318-150160005299996/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:32 np0005634017 python3.9[133725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:32 np0005634017 python3.9[133878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:33 np0005634017 python3.9[134002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271692.3819714-342-95178590323844/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:34 np0005634017 python3.9[134155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:34 np0005634017 python3.9[134308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:35 np0005634017 python3.9[134432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271694.3480659-366-71563516365756/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19cefd09979300b160194e5c5bf9a482dfa697e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:35 np0005634017 systemd[1]: session-43.scope: Deactivated successfully.
Feb 28 04:41:35 np0005634017 systemd[1]: session-43.scope: Consumed 21.675s CPU time.
Feb 28 04:41:35 np0005634017 systemd-logind[815]: Session 43 logged out. Waiting for processes to exit.
Feb 28 04:41:35 np0005634017 systemd-logind[815]: Removed session 43.
Feb 28 04:41:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:41:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:41:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:42 np0005634017 systemd-logind[815]: New session 44 of user zuul.
Feb 28 04:41:42 np0005634017 systemd[1]: Started Session 44 of User zuul.
Feb 28 04:41:42 np0005634017 python3.9[134613]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:43 np0005634017 python3.9[134766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:44 np0005634017 python3.9[134890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271703.1343017-29-13904774386037/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f947f110f3ed509cea5c7f15cc4d7baedfba3a80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:41:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:41:45 np0005634017 python3.9[135111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:41:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:41:45 np0005634017 podman[135294]: 2026-02-28 09:41:45.575390304 +0000 UTC m=+0.047486192 container create 1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 04:41:45 np0005634017 systemd[1]: Started libpod-conmon-1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b.scope.
Feb 28 04:41:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:41:45 np0005634017 podman[135294]: 2026-02-28 09:41:45.550798296 +0000 UTC m=+0.022894254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:41:45 np0005634017 podman[135294]: 2026-02-28 09:41:45.647990547 +0000 UTC m=+0.120086515 container init 1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_blackwell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:41:45 np0005634017 podman[135294]: 2026-02-28 09:41:45.653750881 +0000 UTC m=+0.125846759 container start 1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_blackwell, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:41:45 np0005634017 serene_blackwell[135330]: 167 167
Feb 28 04:41:45 np0005634017 systemd[1]: libpod-1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b.scope: Deactivated successfully.
Feb 28 04:41:45 np0005634017 podman[135294]: 2026-02-28 09:41:45.660982315 +0000 UTC m=+0.133078213 container attach 1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 04:41:45 np0005634017 podman[135336]: 2026-02-28 09:41:45.728641525 +0000 UTC m=+0.046579747 container died 1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_blackwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:45 np0005634017 python3.9[135326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271704.7096245-29-35846529434433/.source.conf _original_basename=ceph.conf follow=False checksum=6450c366802ab7624609ef75c966611ef2ad9ee5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:41:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e4e5d79c52e6f81fb928820b0116b683c2ed0f6ce65bc71801f15f855fcbf7a8-merged.mount: Deactivated successfully.
Feb 28 04:41:45 np0005634017 podman[135336]: 2026-02-28 09:41:45.770892626 +0000 UTC m=+0.088830798 container remove 1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 04:41:45 np0005634017 systemd[1]: libpod-conmon-1a35e46678acafb382560ef5ffdb70425ebefdd627482507a6463ffd22f56d5b.scope: Deactivated successfully.
Feb 28 04:41:45 np0005634017 podman[135382]: 2026-02-28 09:41:45.972961764 +0000 UTC m=+0.058910558 container create f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:41:46 np0005634017 systemd[1]: Started libpod-conmon-f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f.scope.
Feb 28 04:41:46 np0005634017 podman[135382]: 2026-02-28 09:41:45.948932871 +0000 UTC m=+0.034881715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:41:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:41:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe05f125db36196599e7b2cddeb8d5bfcb3094f3964d8da34bffda0ffbb3e9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe05f125db36196599e7b2cddeb8d5bfcb3094f3964d8da34bffda0ffbb3e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe05f125db36196599e7b2cddeb8d5bfcb3094f3964d8da34bffda0ffbb3e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe05f125db36196599e7b2cddeb8d5bfcb3094f3964d8da34bffda0ffbb3e9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe05f125db36196599e7b2cddeb8d5bfcb3094f3964d8da34bffda0ffbb3e9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:46 np0005634017 podman[135382]: 2026-02-28 09:41:46.080640066 +0000 UTC m=+0.166588860 container init f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 04:41:46 np0005634017 podman[135382]: 2026-02-28 09:41:46.095728989 +0000 UTC m=+0.181677783 container start f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:41:46 np0005634017 podman[135382]: 2026-02-28 09:41:46.10100578 +0000 UTC m=+0.186954574 container attach f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 04:41:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:46 np0005634017 systemd[1]: session-44.scope: Deactivated successfully.
Feb 28 04:41:46 np0005634017 systemd[1]: session-44.scope: Consumed 2.555s CPU time.
Feb 28 04:41:46 np0005634017 systemd-logind[815]: Session 44 logged out. Waiting for processes to exit.
Feb 28 04:41:46 np0005634017 systemd-logind[815]: Removed session 44.
Feb 28 04:41:46 np0005634017 silly_banach[135399]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:41:46 np0005634017 silly_banach[135399]: --> All data devices are unavailable
Feb 28 04:41:46 np0005634017 systemd[1]: libpod-f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f.scope: Deactivated successfully.
Feb 28 04:41:46 np0005634017 podman[135382]: 2026-02-28 09:41:46.585493376 +0000 UTC m=+0.671442170 container died f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_banach, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7fe05f125db36196599e7b2cddeb8d5bfcb3094f3964d8da34bffda0ffbb3e9e-merged.mount: Deactivated successfully.
Feb 28 04:41:46 np0005634017 podman[135382]: 2026-02-28 09:41:46.630422289 +0000 UTC m=+0.716371073 container remove f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 04:41:46 np0005634017 systemd[1]: libpod-conmon-f4a4fdb11f815d6ea925ea4f3aa949edbdc70903881aed009287abf8546ca99f.scope: Deactivated successfully.
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.064784143 +0000 UTC m=+0.043262649 container create 1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:47 np0005634017 systemd[1]: Started libpod-conmon-1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005.scope.
Feb 28 04:41:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:41:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.041714866 +0000 UTC m=+0.020193422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.136724718 +0000 UTC m=+0.115203224 container init 1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.148427301 +0000 UTC m=+0.126905777 container start 1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ellis, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:47 np0005634017 vibrant_ellis[135509]: 167 167
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.152055888 +0000 UTC m=+0.130534454 container attach 1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 04:41:47 np0005634017 systemd[1]: libpod-1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005.scope: Deactivated successfully.
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.153037385 +0000 UTC m=+0.131515861 container died 1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ellis, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:41:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-efe6db60f4bb890caf7df4c246a167270ee34eb766218e1f8b9ff73641b79940-merged.mount: Deactivated successfully.
Feb 28 04:41:47 np0005634017 podman[135493]: 2026-02-28 09:41:47.186987903 +0000 UTC m=+0.165466379 container remove 1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ellis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:41:47 np0005634017 systemd[1]: libpod-conmon-1168362e23fecb617ba553d9eca3b94b3932f61a5e6a2d4ac4db6bf5044cf005.scope: Deactivated successfully.
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.363676442 +0000 UTC m=+0.053897614 container create 05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:41:47 np0005634017 systemd[1]: Started libpod-conmon-05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70.scope.
Feb 28 04:41:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:41:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841382727cd668ddda4ad69093e3940907407cf96ac51b6beedcc0069627d1b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841382727cd668ddda4ad69093e3940907407cf96ac51b6beedcc0069627d1b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841382727cd668ddda4ad69093e3940907407cf96ac51b6beedcc0069627d1b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841382727cd668ddda4ad69093e3940907407cf96ac51b6beedcc0069627d1b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.338978811 +0000 UTC m=+0.029200023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.455042617 +0000 UTC m=+0.145263819 container init 05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_merkle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.463177475 +0000 UTC m=+0.153398607 container start 05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.466679868 +0000 UTC m=+0.156901090 container attach 05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]: {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:    "0": [
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:        {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "devices": [
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "/dev/loop3"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            ],
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_name": "ceph_lv0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_size": "21470642176",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "name": "ceph_lv0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "tags": {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cluster_name": "ceph",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.crush_device_class": "",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.encrypted": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.objectstore": "bluestore",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osd_id": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.type": "block",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.vdo": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.with_tpm": "0"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            },
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "type": "block",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "vg_name": "ceph_vg0"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:        }
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:    ],
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:    "1": [
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:        {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "devices": [
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "/dev/loop4"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            ],
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_name": "ceph_lv1",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_size": "21470642176",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "name": "ceph_lv1",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "tags": {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cluster_name": "ceph",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.crush_device_class": "",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.encrypted": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.objectstore": "bluestore",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osd_id": "1",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.type": "block",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.vdo": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.with_tpm": "0"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            },
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "type": "block",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "vg_name": "ceph_vg1"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:        }
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:    ],
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:    "2": [
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:        {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "devices": [
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "/dev/loop5"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            ],
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_name": "ceph_lv2",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_size": "21470642176",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "name": "ceph_lv2",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "tags": {
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.cluster_name": "ceph",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.crush_device_class": "",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.encrypted": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.objectstore": "bluestore",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osd_id": "2",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.type": "block",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.vdo": "0",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:                "ceph.with_tpm": "0"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            },
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "type": "block",
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:            "vg_name": "ceph_vg2"
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:        }
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]:    ]
Feb 28 04:41:47 np0005634017 jovial_merkle[135549]: }
Feb 28 04:41:47 np0005634017 systemd[1]: libpod-05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70.scope: Deactivated successfully.
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.74361742 +0000 UTC m=+0.433838592 container died 05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:41:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-841382727cd668ddda4ad69093e3940907407cf96ac51b6beedcc0069627d1b8-merged.mount: Deactivated successfully.
Feb 28 04:41:47 np0005634017 podman[135532]: 2026-02-28 09:41:47.787609377 +0000 UTC m=+0.477830509 container remove 05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_merkle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:41:47 np0005634017 systemd[1]: libpod-conmon-05914ee43240c40b5fa9437e9881059fd04b52787bc1f1fa32ca67578ffc0b70.scope: Deactivated successfully.
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.227981941 +0000 UTC m=+0.061112036 container create bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:41:48 np0005634017 systemd[1]: Started libpod-conmon-bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b.scope.
Feb 28 04:41:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.196955681 +0000 UTC m=+0.030085796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.301857888 +0000 UTC m=+0.134987993 container init bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.310692775 +0000 UTC m=+0.143822870 container start bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.314044774 +0000 UTC m=+0.147174899 container attach bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:41:48 np0005634017 gifted_thompson[135648]: 167 167
Feb 28 04:41:48 np0005634017 systemd[1]: libpod-bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b.scope: Deactivated successfully.
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.315971246 +0000 UTC m=+0.149101381 container died bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 04:41:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8a734bdce4758539b133a1f4d47e2823f644ba1235ed9b0f67bb1292943428d7-merged.mount: Deactivated successfully.
Feb 28 04:41:48 np0005634017 podman[135632]: 2026-02-28 09:41:48.378323305 +0000 UTC m=+0.211453440 container remove bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:41:48 np0005634017 systemd[1]: libpod-conmon-bffba2f4ea9bb96a5ea287a27970fef64c9cc4a8faa6a6771b792f70e21a345b.scope: Deactivated successfully.
Feb 28 04:41:48 np0005634017 podman[135672]: 2026-02-28 09:41:48.485593625 +0000 UTC m=+0.040044102 container create ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:41:48 np0005634017 systemd[1]: Started libpod-conmon-ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601.scope.
Feb 28 04:41:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:41:48 np0005634017 podman[135672]: 2026-02-28 09:41:48.467261755 +0000 UTC m=+0.021712312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:41:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/229f4af55dd883ef3d24eaec9cd4e13b3e5555731a9432cec4ba803380a7ae51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/229f4af55dd883ef3d24eaec9cd4e13b3e5555731a9432cec4ba803380a7ae51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/229f4af55dd883ef3d24eaec9cd4e13b3e5555731a9432cec4ba803380a7ae51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/229f4af55dd883ef3d24eaec9cd4e13b3e5555731a9432cec4ba803380a7ae51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:41:48 np0005634017 podman[135672]: 2026-02-28 09:41:48.588239562 +0000 UTC m=+0.142690079 container init ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:41:48 np0005634017 podman[135672]: 2026-02-28 09:41:48.60309856 +0000 UTC m=+0.157549037 container start ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:41:48 np0005634017 podman[135672]: 2026-02-28 09:41:48.606345937 +0000 UTC m=+0.160796444 container attach ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:49 np0005634017 lvm[135765]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:41:49 np0005634017 lvm[135765]: VG ceph_vg0 finished
Feb 28 04:41:49 np0005634017 lvm[135768]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:41:49 np0005634017 lvm[135768]: VG ceph_vg1 finished
Feb 28 04:41:49 np0005634017 lvm[135770]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:41:49 np0005634017 lvm[135770]: VG ceph_vg2 finished
Feb 28 04:41:49 np0005634017 sharp_clarke[135689]: {}
Feb 28 04:41:49 np0005634017 systemd[1]: libpod-ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601.scope: Deactivated successfully.
Feb 28 04:41:49 np0005634017 podman[135672]: 2026-02-28 09:41:49.409612304 +0000 UTC m=+0.964062771 container died ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:41:49 np0005634017 systemd[1]: libpod-ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601.scope: Consumed 1.075s CPU time.
Feb 28 04:41:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-229f4af55dd883ef3d24eaec9cd4e13b3e5555731a9432cec4ba803380a7ae51-merged.mount: Deactivated successfully.
Feb 28 04:41:49 np0005634017 podman[135672]: 2026-02-28 09:41:49.44608719 +0000 UTC m=+1.000537657 container remove ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:41:49 np0005634017 systemd[1]: libpod-conmon-ad4faf628f2dfe8d957cf2921d6d0888246070b43a1b204fb73d1f07f43df601.scope: Deactivated successfully.
Feb 28 04:41:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:41:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:41:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:41:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:41:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:41:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:41:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:51 np0005634017 systemd-logind[815]: New session 45 of user zuul.
Feb 28 04:41:51 np0005634017 systemd[1]: Started Session 45 of User zuul.
Feb 28 04:41:52 np0005634017 python3.9[135962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:41:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:53 np0005634017 python3.9[136119]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:54 np0005634017 python3.9[136272]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:41:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:55 np0005634017 python3.9[136422]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:41:56 np0005634017 python3.9[136575]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 28 04:41:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:41:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:41:57 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 28 04:41:57 np0005634017 python3.9[136732]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:41:58 np0005634017 python3.9[136817]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:41:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:42:01 np0005634017 python3.9[136971]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:42:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:01 np0005634017 python3[137127]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 28 04:42:02 np0005634017 python3.9[137280]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:03 np0005634017 python3.9[137433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:04 np0005634017 python3.9[137512]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:04 np0005634017 python3.9[137665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:05 np0005634017 python3.9[137744]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.740mi2ow recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:05 np0005634017 python3.9[137897]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:06 np0005634017 python3.9[137976]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:07 np0005634017 python3.9[138129]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:07 np0005634017 python3[138283]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 28 04:42:08 np0005634017 python3.9[138436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:09 np0005634017 python3.9[138562]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271728.0356023-152-127657949916793/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:09 np0005634017 python3.9[138715]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:10 np0005634017 python3.9[138841]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271729.3593082-167-119177044039769/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:11 np0005634017 python3.9[138994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:11 np0005634017 python3.9[139120]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271730.715409-182-255754588530647/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:12 np0005634017 python3.9[139273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:12 np0005634017 python3.9[139399]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271731.8100002-197-53748107079873/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:13 np0005634017 python3.9[139552]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:14 np0005634017 python3.9[139678]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772271733.0492685-212-49155230262654/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:14 np0005634017 python3.9[139831]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:15 np0005634017 python3.9[139984]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:16 np0005634017 python3.9[140140]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:17 np0005634017 python3.9[140293]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:17 np0005634017 python3.9[140447]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:42:18 np0005634017 python3.9[140602]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:19 np0005634017 python3.9[140758]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:20 np0005634017 python3.9[140908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.150780) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271741150828, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 251, "total_data_size": 2400740, "memory_usage": 2442248, "flush_reason": "Manual Compaction"}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271741162006, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1406006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7314, "largest_seqno": 9012, "table_properties": {"data_size": 1400490, "index_size": 2463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16440, "raw_average_key_size": 20, "raw_value_size": 1387229, "raw_average_value_size": 1758, "num_data_blocks": 115, "num_entries": 789, "num_filter_entries": 789, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271584, "oldest_key_time": 1772271584, "file_creation_time": 1772271741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11326 microseconds, and 5038 cpu microseconds.
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.162103) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1406006 bytes OK
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.162126) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.164231) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.164257) EVENT_LOG_v1 {"time_micros": 1772271741164250, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.164280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2393180, prev total WAL file size 2393180, number of live WAL files 2.
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.165114) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1373KB)], [20(7638KB)]
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271741165205, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9227892, "oldest_snapshot_seqno": -1}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3395 keys, 7195697 bytes, temperature: kUnknown
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271741209905, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7195697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7169910, "index_size": 16206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81117, "raw_average_key_size": 23, "raw_value_size": 7105511, "raw_average_value_size": 2092, "num_data_blocks": 717, "num_entries": 3395, "num_filter_entries": 3395, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772271741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.210235) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7195697 bytes
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.212106) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.1 rd, 160.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.7) write-amplify(5.1) OK, records in: 3837, records dropped: 442 output_compression: NoCompression
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.212150) EVENT_LOG_v1 {"time_micros": 1772271741212129, "job": 6, "event": "compaction_finished", "compaction_time_micros": 44777, "compaction_time_cpu_micros": 14425, "output_level": 6, "num_output_files": 1, "total_output_size": 7195697, "num_input_records": 3837, "num_output_records": 3395, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271741212543, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271741214118, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.164976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.214157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.214163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.214167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.214170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:42:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:42:21.214172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:42:21 np0005634017 python3.9[141062]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:e0:eb:c4:a5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:21 np0005634017 ovs-vsctl[141063]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:e0:eb:c4:a5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 28 04:42:22 np0005634017 python3.9[141216]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:22 np0005634017 python3.9[141372]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:22 np0005634017 ovs-vsctl[141373]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 28 04:42:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:23 np0005634017 python3.9[141523]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:42:24 np0005634017 python3.9[141678]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:42:24 np0005634017 python3.9[141831]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:25 np0005634017 python3.9[141910]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:42:26 np0005634017 python3.9[142063]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:26 np0005634017 python3.9[142142]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:42:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:27 np0005634017 python3.9[142295]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:27 np0005634017 python3.9[142448]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:28 np0005634017 python3.9[142527]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:42:28
Feb 28 04:42:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:42:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:42:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms']
Feb 28 04:42:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:42:29 np0005634017 python3.9[142682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:29 np0005634017 python3.9[142761]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:30 np0005634017 python3.9[142914]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:42:30 np0005634017 systemd[1]: Reloading.
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:42:30 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:42:30 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:42:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:31 np0005634017 python3.9[143112]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:31 np0005634017 python3.9[143191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:32 np0005634017 python3.9[143344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:32 np0005634017 python3.9[143423]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:33 np0005634017 python3.9[143576]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:42:33 np0005634017 systemd[1]: Reloading.
Feb 28 04:42:33 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:42:33 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:42:33 np0005634017 systemd[1]: Starting Create netns directory...
Feb 28 04:42:33 np0005634017 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 28 04:42:33 np0005634017 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 28 04:42:33 np0005634017 systemd[1]: Finished Create netns directory.
Feb 28 04:42:34 np0005634017 python3.9[143777]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:42:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:35 np0005634017 python3.9[143930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:35 np0005634017 python3.9[144054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271754.8374825-464-173293619545168/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:42:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:36 np0005634017 python3.9[144207]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:37 np0005634017 python3.9[144360]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:42:38 np0005634017 python3.9[144513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:38 np0005634017 python3.9[144637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271757.6362975-497-173379380844189/.source.json _original_basename=.3elslrpd follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:39 np0005634017 python3.9[144787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:42:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:42:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:41 np0005634017 python3.9[145211]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 28 04:42:42 np0005634017 python3.9[145364]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 28 04:42:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:43 np0005634017 python3[145517]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 28 04:42:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:47 np0005634017 podman[145532]: 2026-02-28 09:42:47.814927723 +0000 UTC m=+4.023459913 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 28 04:42:47 np0005634017 podman[145651]: 2026-02-28 09:42:47.987782931 +0000 UTC m=+0.062759815 container create 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:42:47 np0005634017 podman[145651]: 2026-02-28 09:42:47.955016033 +0000 UTC m=+0.029992987 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 28 04:42:47 np0005634017 python3[145517]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 28 04:42:48 np0005634017 python3.9[145842]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:42:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:49 np0005634017 python3.9[145997]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:49 np0005634017 python3.9[146124]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:42:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.577266998 +0000 UTC m=+0.058346592 container create 4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_blackwell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:42:50 np0005634017 systemd[1]: Started libpod-conmon-4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2.scope.
Feb 28 04:42:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.552027623 +0000 UTC m=+0.033107237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.659227153 +0000 UTC m=+0.140306797 container init 4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:42:50 np0005634017 python3.9[146358]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271769.9795303-575-63406501041115/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.667641089 +0000 UTC m=+0.148720693 container start 4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:42:50 np0005634017 tender_blackwell[146388]: 167 167
Feb 28 04:42:50 np0005634017 systemd[1]: libpod-4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2.scope: Deactivated successfully.
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.67589914 +0000 UTC m=+0.156978804 container attach 4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.677023648 +0000 UTC m=+0.158103252 container died 4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 04:42:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0abd7b81fe0eda88ce7b4f4483fc2fdbf4c90976535225aef0e048e978296f1d-merged.mount: Deactivated successfully.
Feb 28 04:42:50 np0005634017 podman[146371]: 2026-02-28 09:42:50.716133978 +0000 UTC m=+0.197213542 container remove 4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_blackwell, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:42:50 np0005634017 systemd[1]: libpod-conmon-4f65471633119e56b1ce916a03c30b7a948cd5fc8cbed0774138368b832b4bf2.scope: Deactivated successfully.
Feb 28 04:42:50 np0005634017 podman[146442]: 2026-02-28 09:42:50.84023772 +0000 UTC m=+0.034682067 container create 6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:42:50 np0005634017 systemd[1]: Started libpod-conmon-6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4.scope.
Feb 28 04:42:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00040dd3524a56cc4922dcbb48b65eb02e84afcc0bd5dee7772e30e7e0f53664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00040dd3524a56cc4922dcbb48b65eb02e84afcc0bd5dee7772e30e7e0f53664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00040dd3524a56cc4922dcbb48b65eb02e84afcc0bd5dee7772e30e7e0f53664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00040dd3524a56cc4922dcbb48b65eb02e84afcc0bd5dee7772e30e7e0f53664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00040dd3524a56cc4922dcbb48b65eb02e84afcc0bd5dee7772e30e7e0f53664/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:50 np0005634017 podman[146442]: 2026-02-28 09:42:50.918224484 +0000 UTC m=+0.112668861 container init 6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 04:42:50 np0005634017 podman[146442]: 2026-02-28 09:42:50.824170479 +0000 UTC m=+0.018614836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:42:50 np0005634017 podman[146442]: 2026-02-28 09:42:50.937284471 +0000 UTC m=+0.131728848 container start 6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:42:50 np0005634017 podman[146442]: 2026-02-28 09:42:50.940856772 +0000 UTC m=+0.135301159 container attach 6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 28 04:42:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:51 np0005634017 python3.9[146508]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:42:51 np0005634017 systemd[1]: Reloading.
Feb 28 04:42:51 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:42:51 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:42:51 np0005634017 vigorous_cohen[146497]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:42:51 np0005634017 vigorous_cohen[146497]: --> All data devices are unavailable
Feb 28 04:42:51 np0005634017 podman[146442]: 2026-02-28 09:42:51.394163409 +0000 UTC m=+0.588607766 container died 6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:42:51 np0005634017 systemd[1]: libpod-6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4.scope: Deactivated successfully.
Feb 28 04:42:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-00040dd3524a56cc4922dcbb48b65eb02e84afcc0bd5dee7772e30e7e0f53664-merged.mount: Deactivated successfully.
Feb 28 04:42:51 np0005634017 podman[146442]: 2026-02-28 09:42:51.469356731 +0000 UTC m=+0.663801088 container remove 6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:42:51 np0005634017 systemd[1]: libpod-conmon-6fe8b3f88671de733df7c3c1d689ae147442153b58aaa676adcc2a2d42ef96c4.scope: Deactivated successfully.
Feb 28 04:42:51 np0005634017 podman[146721]: 2026-02-28 09:42:51.894270652 +0000 UTC m=+0.040021164 container create e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:42:51 np0005634017 systemd[1]: Started libpod-conmon-e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454.scope.
Feb 28 04:42:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:51 np0005634017 podman[146721]: 2026-02-28 09:42:51.874085006 +0000 UTC m=+0.019835568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:42:51 np0005634017 podman[146721]: 2026-02-28 09:42:51.969409172 +0000 UTC m=+0.115159715 container init e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:42:51 np0005634017 podman[146721]: 2026-02-28 09:42:51.976146925 +0000 UTC m=+0.121897437 container start e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:42:51 np0005634017 intelligent_chandrasekhar[146738]: 167 167
Feb 28 04:42:51 np0005634017 podman[146721]: 2026-02-28 09:42:51.979846909 +0000 UTC m=+0.125597461 container attach e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:42:51 np0005634017 systemd[1]: libpod-e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454.scope: Deactivated successfully.
Feb 28 04:42:51 np0005634017 podman[146721]: 2026-02-28 09:42:51.980504776 +0000 UTC m=+0.126255288 container died e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:42:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-79e776f965451dbde48b8a3797bfaacb270c665ee96c3c9694841a23ea306d8b-merged.mount: Deactivated successfully.
Feb 28 04:42:52 np0005634017 podman[146721]: 2026-02-28 09:42:52.013958001 +0000 UTC m=+0.159708523 container remove e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 04:42:52 np0005634017 python3.9[146708]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:42:52 np0005634017 systemd[1]: libpod-conmon-e969caf03a8e8c23c902454b43eaac34714783bc6c94826fccc0683b75d04454.scope: Deactivated successfully.
Feb 28 04:42:52 np0005634017 systemd[1]: Reloading.
Feb 28 04:42:52 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:42:52 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.201662719 +0000 UTC m=+0.066860490 container create 96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.174994567 +0000 UTC m=+0.040192318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:42:52 np0005634017 systemd[1]: Started libpod-conmon-96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2.scope.
Feb 28 04:42:52 np0005634017 systemd[1]: Starting ovn_controller container...
Feb 28 04:42:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b5e0cea5cbf25bb862e43401680447221b3d44248c31532d7c8e4fdd89012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b5e0cea5cbf25bb862e43401680447221b3d44248c31532d7c8e4fdd89012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b5e0cea5cbf25bb862e43401680447221b3d44248c31532d7c8e4fdd89012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b5e0cea5cbf25bb862e43401680447221b3d44248c31532d7c8e4fdd89012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.392860346 +0000 UTC m=+0.258058087 container init 96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.404715019 +0000 UTC m=+0.269912790 container start 96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.411517473 +0000 UTC m=+0.276715224 container attach 96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:42:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee85a6dbfeb6c143fe358aac35a5e458604d3c13caf2d07ad5d662539c060620/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:52 np0005634017 systemd[1]: Started /usr/bin/podman healthcheck run 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf.
Feb 28 04:42:52 np0005634017 podman[146828]: 2026-02-28 09:42:52.548958485 +0000 UTC m=+0.176093071 container init 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:42:52 np0005634017 ovn_controller[146846]: + sudo -E kolla_set_configs
Feb 28 04:42:52 np0005634017 podman[146828]: 2026-02-28 09:42:52.57849631 +0000 UTC m=+0.205630896 container start 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:42:52 np0005634017 edpm-start-podman-container[146828]: ovn_controller
Feb 28 04:42:52 np0005634017 systemd[1]: Created slice User Slice of UID 0.
Feb 28 04:42:52 np0005634017 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 28 04:42:52 np0005634017 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 28 04:42:52 np0005634017 systemd[1]: Starting User Manager for UID 0...
Feb 28 04:42:52 np0005634017 podman[146853]: 2026-02-28 09:42:52.648248783 +0000 UTC m=+0.061768770 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 04:42:52 np0005634017 edpm-start-podman-container[146825]: Creating additional drop-in dependency for "ovn_controller" (342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf)
Feb 28 04:42:52 np0005634017 systemd[1]: 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf-286f2773754b9713.service: Main process exited, code=exited, status=1/FAILURE
Feb 28 04:42:52 np0005634017 systemd[1]: 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf-286f2773754b9713.service: Failed with result 'exit-code'.
Feb 28 04:42:52 np0005634017 systemd[1]: Reloading.
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]: {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:    "0": [
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:        {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "devices": [
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "/dev/loop3"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            ],
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_name": "ceph_lv0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_size": "21470642176",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "name": "ceph_lv0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "tags": {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cluster_name": "ceph",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.crush_device_class": "",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.encrypted": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.objectstore": "bluestore",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osd_id": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.type": "block",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.vdo": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.with_tpm": "0"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            },
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "type": "block",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "vg_name": "ceph_vg0"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:        }
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:    ],
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:    "1": [
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:        {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "devices": [
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "/dev/loop4"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            ],
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_name": "ceph_lv1",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_size": "21470642176",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "name": "ceph_lv1",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "tags": {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cluster_name": "ceph",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.crush_device_class": "",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.encrypted": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.objectstore": "bluestore",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osd_id": "1",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.type": "block",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.vdo": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.with_tpm": "0"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            },
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "type": "block",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "vg_name": "ceph_vg1"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:        }
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:    ],
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:    "2": [
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:        {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "devices": [
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "/dev/loop5"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            ],
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_name": "ceph_lv2",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_size": "21470642176",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "name": "ceph_lv2",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "tags": {
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.cluster_name": "ceph",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.crush_device_class": "",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.encrypted": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.objectstore": "bluestore",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osd_id": "2",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.type": "block",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.vdo": "0",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:                "ceph.with_tpm": "0"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            },
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "type": "block",
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:            "vg_name": "ceph_vg2"
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:        }
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]:    ]
Feb 28 04:42:52 np0005634017 eloquent_pascal[146824]: }
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.715925993 +0000 UTC m=+0.581123744 container died 96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 04:42:52 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:42:52 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:42:52 np0005634017 systemd[146886]: Queued start job for default target Main User Target.
Feb 28 04:42:52 np0005634017 systemd[146886]: Created slice User Application Slice.
Feb 28 04:42:52 np0005634017 systemd[146886]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 28 04:42:52 np0005634017 systemd[146886]: Started Daily Cleanup of User's Temporary Directories.
Feb 28 04:42:52 np0005634017 systemd[146886]: Reached target Paths.
Feb 28 04:42:52 np0005634017 systemd[146886]: Reached target Timers.
Feb 28 04:42:52 np0005634017 systemd[146886]: Starting D-Bus User Message Bus Socket...
Feb 28 04:42:52 np0005634017 systemd[146886]: Starting Create User's Volatile Files and Directories...
Feb 28 04:42:52 np0005634017 systemd[146886]: Finished Create User's Volatile Files and Directories.
Feb 28 04:42:52 np0005634017 systemd[146886]: Listening on D-Bus User Message Bus Socket.
Feb 28 04:42:52 np0005634017 systemd[146886]: Reached target Sockets.
Feb 28 04:42:52 np0005634017 systemd[146886]: Reached target Basic System.
Feb 28 04:42:52 np0005634017 systemd[146886]: Reached target Main User Target.
Feb 28 04:42:52 np0005634017 systemd[146886]: Startup finished in 182ms.
Feb 28 04:42:52 np0005634017 systemd[1]: Started User Manager for UID 0.
Feb 28 04:42:52 np0005634017 systemd[1]: Started ovn_controller container.
Feb 28 04:42:52 np0005634017 systemd[1]: libpod-96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2.scope: Deactivated successfully.
Feb 28 04:42:52 np0005634017 systemd[1]: Started Session c1 of User root.
Feb 28 04:42:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-923b5e0cea5cbf25bb862e43401680447221b3d44248c31532d7c8e4fdd89012-merged.mount: Deactivated successfully.
Feb 28 04:42:52 np0005634017 podman[146766]: 2026-02-28 09:42:52.985259727 +0000 UTC m=+0.850457448 container remove 96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 04:42:52 np0005634017 systemd[1]: libpod-conmon-96fbe30ae3008816e20450f0647621d56cea743a3729893d654d00afc7c4efe2.scope: Deactivated successfully.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: INFO:__main__:Validating config file
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: INFO:__main__:Writing out command to execute
Feb 28 04:42:53 np0005634017 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: ++ cat /run_command
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + ARGS=
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + sudo kolla_copy_cacerts
Feb 28 04:42:53 np0005634017 systemd[1]: Started Session c2 of User root.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + [[ ! -n '' ]]
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + . kolla_extend_start
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + umask 0022
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb 28 04:42:53 np0005634017 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.0994] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1001] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <warn>  [1772271773.1003] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1012] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1018] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1022] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 28 04:42:53 np0005634017 kernel: br-int: entered promiscuous mode
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00024|main|INFO|OVS feature set changed, force recompute.
Feb 28 04:42:53 np0005634017 systemd-udevd[147051]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 28 04:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T09:42:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1365] manager: (ovn-66ae00-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 28 04:42:53 np0005634017 kernel: genev_sys_6081: entered promiscuous mode
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1513] device (genev_sys_6081): carrier: link connected
Feb 28 04:42:53 np0005634017 NetworkManager[49805]: <info>  [1772271773.1515] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb 28 04:42:53 np0005634017 systemd-udevd[147053]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:42:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:53 np0005634017 podman[147102]: 2026-02-28 09:42:53.394329803 +0000 UTC m=+0.044113008 container create 08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:42:53 np0005634017 systemd[1]: Started libpod-conmon-08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632.scope.
Feb 28 04:42:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:53 np0005634017 podman[147102]: 2026-02-28 09:42:53.373285095 +0000 UTC m=+0.023068280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:42:53 np0005634017 podman[147102]: 2026-02-28 09:42:53.484423746 +0000 UTC m=+0.134206991 container init 08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:42:53 np0005634017 podman[147102]: 2026-02-28 09:42:53.493370835 +0000 UTC m=+0.143154040 container start 08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:42:53 np0005634017 podman[147102]: 2026-02-28 09:42:53.497268344 +0000 UTC m=+0.147051609 container attach 08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:42:53 np0005634017 elastic_bartik[147135]: 167 167
Feb 28 04:42:53 np0005634017 systemd[1]: libpod-08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632.scope: Deactivated successfully.
Feb 28 04:42:53 np0005634017 podman[147102]: 2026-02-28 09:42:53.501034821 +0000 UTC m=+0.150818026 container died 08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:42:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-42e1360901dfa88e829137454b91f6fdce776f1c413df09b099d0796c83121b0-merged.mount: Deactivated successfully.
Feb 28 04:42:54 np0005634017 podman[147102]: 2026-02-28 09:42:54.100974016 +0000 UTC m=+0.750757221 container remove 08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:42:54 np0005634017 systemd[1]: libpod-conmon-08ef3e7ae5edb687f3286d9cd25b31c515a208d8dce6b75305588526b5b2c632.scope: Deactivated successfully.
Feb 28 04:42:54 np0005634017 podman[147235]: 2026-02-28 09:42:54.28112516 +0000 UTC m=+0.053021186 container create 13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lovelace, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:42:54 np0005634017 systemd[1]: Started libpod-conmon-13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a.scope.
Feb 28 04:42:54 np0005634017 podman[147235]: 2026-02-28 09:42:54.261311934 +0000 UTC m=+0.033207960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:42:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:42:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379e40416cb67ce9eca1b992edc6d30e4b8d02fb9808e1a92251dd3c8af6c8fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379e40416cb67ce9eca1b992edc6d30e4b8d02fb9808e1a92251dd3c8af6c8fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379e40416cb67ce9eca1b992edc6d30e4b8d02fb9808e1a92251dd3c8af6c8fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379e40416cb67ce9eca1b992edc6d30e4b8d02fb9808e1a92251dd3c8af6c8fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:42:54 np0005634017 podman[147235]: 2026-02-28 09:42:54.383396424 +0000 UTC m=+0.155292460 container init 13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:42:54 np0005634017 podman[147235]: 2026-02-28 09:42:54.389551382 +0000 UTC m=+0.161447398 container start 13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lovelace, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:42:54 np0005634017 podman[147235]: 2026-02-28 09:42:54.392895207 +0000 UTC m=+0.164791223 container attach 13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:42:54 np0005634017 python3.9[147231]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 28 04:42:54 np0005634017 lvm[147369]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:42:54 np0005634017 lvm[147369]: VG ceph_vg0 finished
Feb 28 04:42:54 np0005634017 lvm[147376]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:42:54 np0005634017 lvm[147376]: VG ceph_vg1 finished
Feb 28 04:42:55 np0005634017 lvm[147397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:42:55 np0005634017 lvm[147397]: VG ceph_vg2 finished
Feb 28 04:42:55 np0005634017 eloquent_lovelace[147252]: {}
Feb 28 04:42:55 np0005634017 systemd[1]: libpod-13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a.scope: Deactivated successfully.
Feb 28 04:42:55 np0005634017 systemd[1]: libpod-13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a.scope: Consumed 1.004s CPU time.
Feb 28 04:42:55 np0005634017 podman[147235]: 2026-02-28 09:42:55.137464889 +0000 UTC m=+0.909360905 container died 13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lovelace, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:42:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-379e40416cb67ce9eca1b992edc6d30e4b8d02fb9808e1a92251dd3c8af6c8fb-merged.mount: Deactivated successfully.
Feb 28 04:42:55 np0005634017 podman[147235]: 2026-02-28 09:42:55.19385791 +0000 UTC m=+0.965753936 container remove 13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lovelace, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:42:55 np0005634017 systemd[1]: libpod-conmon-13d339ca79e388d5af8b7145e14e1481b7fd405715edad11744f63eb19086c4a.scope: Deactivated successfully.
Feb 28 04:42:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:42:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:42:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:42:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:42:55 np0005634017 python3.9[147500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:42:55 np0005634017 python3.9[147649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271774.9712596-620-8519011052317/.source.yaml _original_basename=._dce2vne follow=False checksum=07aa79261446215c0847d6a3fddbb8e2c4eeecb6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:42:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:42:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:42:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:42:56 np0005634017 python3.9[147802]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:56 np0005634017 ovs-vsctl[147803]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 28 04:42:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:42:57 np0005634017 python3.9[147956]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:57 np0005634017 ovs-vsctl[147958]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 28 04:42:58 np0005634017 python3.9[148112]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:42:58 np0005634017 ovs-vsctl[148113]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 28 04:42:58 np0005634017 systemd-logind[815]: Session 45 logged out. Waiting for processes to exit.
Feb 28 04:42:58 np0005634017 systemd[1]: session-45.scope: Deactivated successfully.
Feb 28 04:42:58 np0005634017 systemd[1]: session-45.scope: Consumed 53.503s CPU time.
Feb 28 04:42:58 np0005634017 systemd-logind[815]: Removed session 45.
Feb 28 04:42:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:43:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:03 np0005634017 systemd[1]: Stopping User Manager for UID 0...
Feb 28 04:43:03 np0005634017 systemd[146886]: Activating special unit Exit the Session...
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped target Main User Target.
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped target Basic System.
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped target Paths.
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped target Sockets.
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped target Timers.
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 28 04:43:03 np0005634017 systemd[146886]: Closed D-Bus User Message Bus Socket.
Feb 28 04:43:03 np0005634017 systemd[146886]: Stopped Create User's Volatile Files and Directories.
Feb 28 04:43:03 np0005634017 systemd[146886]: Removed slice User Application Slice.
Feb 28 04:43:03 np0005634017 systemd[146886]: Reached target Shutdown.
Feb 28 04:43:03 np0005634017 systemd[146886]: Finished Exit the Session.
Feb 28 04:43:03 np0005634017 systemd[146886]: Reached target Exit the Session.
Feb 28 04:43:03 np0005634017 systemd[1]: user@0.service: Deactivated successfully.
Feb 28 04:43:03 np0005634017 systemd[1]: Stopped User Manager for UID 0.
Feb 28 04:43:03 np0005634017 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 28 04:43:03 np0005634017 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 28 04:43:03 np0005634017 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 28 04:43:03 np0005634017 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 28 04:43:03 np0005634017 systemd[1]: Removed slice User Slice of UID 0.
Feb 28 04:43:04 np0005634017 systemd-logind[815]: New session 47 of user zuul.
Feb 28 04:43:04 np0005634017 systemd[1]: Started Session 47 of User zuul.
Feb 28 04:43:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:05 np0005634017 python3.9[148295]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:43:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:06 np0005634017 python3.9[148452]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:07 np0005634017 python3.9[148605]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:07 np0005634017 python3.9[148758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:08 np0005634017 python3.9[148911]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:08 np0005634017 python3.9[149064]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:09 np0005634017 python3.9[149214]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:43:10 np0005634017 python3.9[149367]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 28 04:43:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:43:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2097 writes, 9250 keys, 2097 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2097 writes, 2097 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2097 writes, 9250 keys, 2097 commit groups, 1.0 writes per commit group, ingest: 12.47 MB, 0.02 MB/s#012Interval WAL: 2097 writes, 2097 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    150.8      0.06              0.02         3    0.020       0      0       0.0       0.0#012  L6      1/0    6.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    208.2    182.9      0.08              0.03         2    0.039    7175    732       0.0       0.0#012 Sum      1/0    6.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    119.1    169.1      0.14              0.05         5    0.027    7175    732       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    124.0    175.7      0.13              0.05         4    0.033    7175    732       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    208.2    182.9      0.08              0.03         2    0.039    7175    732       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    165.2      0.05              0.02         2    0.027       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 308.00 MB usage: 691.55 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(38,604.77 KB,0.19175%) FilterBlock(6,27.61 KB,0.00875399%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 04:43:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:11 np0005634017 python3.9[149518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:12 np0005634017 python3.9[149639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271791.1184525-81-223572859199585/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:13 np0005634017 python3.9[149789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:13 np0005634017 python3.9[149910]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271792.5603776-96-131164438185635/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:14 np0005634017 python3.9[150063]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:43:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:15 np0005634017 python3.9[150148]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.582117) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271796582145, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 679, "num_deletes": 251, "total_data_size": 853787, "memory_usage": 867224, "flush_reason": "Manual Compaction"}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271796586615, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 846517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9013, "largest_seqno": 9691, "table_properties": {"data_size": 842930, "index_size": 1431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7754, "raw_average_key_size": 18, "raw_value_size": 835775, "raw_average_value_size": 1994, "num_data_blocks": 66, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271741, "oldest_key_time": 1772271741, "file_creation_time": 1772271796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 4530 microseconds, and 1862 cpu microseconds.
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.586648) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 846517 bytes OK
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.586662) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.588293) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.588305) EVENT_LOG_v1 {"time_micros": 1772271796588302, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.588319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 850223, prev total WAL file size 850223, number of live WAL files 2.
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.588601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(826KB)], [23(7027KB)]
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271796588628, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8042214, "oldest_snapshot_seqno": -1}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3300 keys, 6189632 bytes, temperature: kUnknown
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271796618299, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6189632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6166071, "index_size": 14236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79946, "raw_average_key_size": 24, "raw_value_size": 6104867, "raw_average_value_size": 1849, "num_data_blocks": 619, "num_entries": 3300, "num_filter_entries": 3300, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772271796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.618505) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6189632 bytes
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.619967) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 270.5 rd, 208.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(16.8) write-amplify(7.3) OK, records in: 3814, records dropped: 514 output_compression: NoCompression
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.619996) EVENT_LOG_v1 {"time_micros": 1772271796619982, "job": 8, "event": "compaction_finished", "compaction_time_micros": 29735, "compaction_time_cpu_micros": 13367, "output_level": 6, "num_output_files": 1, "total_output_size": 6189632, "num_input_records": 3814, "num_output_records": 3300, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271796620313, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772271796621853, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.588556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.621949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.621955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.621957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.621959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:43:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:43:16.621961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:43:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:17 np0005634017 python3.9[150302]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:43:18 np0005634017 python3.9[150455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:18 np0005634017 python3.9[150576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271797.7962332-133-88379345188878/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:19 np0005634017 python3.9[150726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:20 np0005634017 python3.9[150847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271799.0017796-133-222639000533880/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:21 np0005634017 python3.9[150997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:21 np0005634017 python3.9[151118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271800.7128859-177-11588676566709/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:22 np0005634017 python3.9[151268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:22 np0005634017 python3.9[151389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271801.78905-177-67012361666274/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:22 np0005634017 ovn_controller[146846]: 2026-02-28T09:43:22Z|00025|memory|INFO|17408 kB peak resident set size after 29.7 seconds
Feb 28 04:43:22 np0005634017 ovn_controller[146846]: 2026-02-28T09:43:22Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 28 04:43:22 np0005634017 podman[151390]: 2026-02-28 09:43:22.852733074 +0000 UTC m=+0.090814714 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Feb 28 04:43:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:23 np0005634017 python3.9[151565]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:43:24 np0005634017 python3.9[151720]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:24 np0005634017 python3.9[151873]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:25 np0005634017 python3.9[151952]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:25 np0005634017 python3.9[152105]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:26 np0005634017 python3.9[152184]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:27 np0005634017 python3.9[152337]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:27 np0005634017 python3.9[152490]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:28 np0005634017 python3.9[152569]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:28 np0005634017 python3.9[152722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:43:28
Feb 28 04:43:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:43:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:43:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Feb 28 04:43:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:43:29 np0005634017 python3.9[152801]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:29 np0005634017 python3.9[152954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:43:29 np0005634017 systemd[1]: Reloading.
Feb 28 04:43:29 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:43:30 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:43:30 np0005634017 python3.9[153151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:31 np0005634017 python3.9[153230]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:31 np0005634017 python3.9[153383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:32 np0005634017 python3.9[153462]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:33 np0005634017 python3.9[153615]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:43:33 np0005634017 systemd[1]: Reloading.
Feb 28 04:43:33 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:43:33 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:43:33 np0005634017 systemd[1]: Starting Create netns directory...
Feb 28 04:43:33 np0005634017 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 28 04:43:33 np0005634017 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 28 04:43:33 np0005634017 systemd[1]: Finished Create netns directory.
Feb 28 04:43:34 np0005634017 python3.9[153816]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:34 np0005634017 python3.9[153969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:35 np0005634017 python3.9[154093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772271814.4933867-328-173103059530566/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:36 np0005634017 python3.9[154246]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:37 np0005634017 python3.9[154399]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:43:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:37 np0005634017 python3.9[154552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:38 np0005634017 python3.9[154676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271817.2216644-361-12440962177667/.source.json _original_basename=.3ukk297l follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:38 np0005634017 python3.9[154826]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:43:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:43:40 np0005634017 python3.9[155250]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 28 04:43:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:41 np0005634017 python3.9[155403]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 28 04:43:43 np0005634017 python3[155556]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 28 04:43:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:49 np0005634017 podman[155569]: 2026-02-28 09:43:49.524825567 +0000 UTC m=+6.450824424 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 04:43:49 np0005634017 podman[155694]: 2026-02-28 09:43:49.69259022 +0000 UTC m=+0.062000347 container create 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 04:43:49 np0005634017 podman[155694]: 2026-02-28 09:43:49.660490462 +0000 UTC m=+0.029900649 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 04:43:49 np0005634017 python3[155556]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 04:43:50 np0005634017 python3.9[155887]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:43:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:51 np0005634017 python3.9[156042]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:51 np0005634017 python3.9[156119]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:43:52 np0005634017 python3.9[156271]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772271831.6813717-439-97191819643226/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:52 np0005634017 python3.9[156348]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:43:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:54 np0005634017 systemd[1]: Reloading.
Feb 28 04:43:54 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:43:54 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:43:54 np0005634017 podman[156350]: 2026-02-28 09:43:54.577613412 +0000 UTC m=+0.172681130 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 28 04:43:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:55 np0005634017 python3.9[156494]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:43:55 np0005634017 systemd[1]: Reloading.
Feb 28 04:43:55 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:43:55 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:43:55 np0005634017 systemd[1]: Starting ovn_metadata_agent container...
Feb 28 04:43:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:43:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:43:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:43:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:43:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:43:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970336e989178ee007a99f01d87abdec9af99a4da794692820773d3415da0b70/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970336e989178ee007a99f01d87abdec9af99a4da794692820773d3415da0b70/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:56 np0005634017 systemd[1]: Started /usr/bin/podman healthcheck run 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc.
Feb 28 04:43:56 np0005634017 podman[156616]: 2026-02-28 09:43:56.056813856 +0000 UTC m=+0.140498779 container init 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + sudo -E kolla_set_configs
Feb 28 04:43:56 np0005634017 podman[156616]: 2026-02-28 09:43:56.091939218 +0000 UTC m=+0.175624101 container start 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 04:43:56 np0005634017 edpm-start-podman-container[156616]: ovn_metadata_agent
Feb 28 04:43:56 np0005634017 edpm-start-podman-container[156614]: Creating additional drop-in dependency for "ovn_metadata_agent" (3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc)
Feb 28 04:43:56 np0005634017 podman[156686]: 2026-02-28 09:43:56.154820688 +0000 UTC m=+0.056827974 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:43:56 np0005634017 systemd[1]: Reloading.
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Validating config file
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Copying service configuration files
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Writing out command to execute
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: ++ cat /run_command
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + CMD=neutron-ovn-metadata-agent
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + ARGS=
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + sudo kolla_copy_cacerts
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + [[ ! -n '' ]]
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + . kolla_extend_start
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: Running command: 'neutron-ovn-metadata-agent'
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + umask 0022
Feb 28 04:43:56 np0005634017 ovn_metadata_agent[156634]: + exec neutron-ovn-metadata-agent
Feb 28 04:43:56 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:43:56 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:43:56 np0005634017 systemd[1]: Started ovn_metadata_agent container.
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:43:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:57.005373384 +0000 UTC m=+0.056031871 container create d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:43:57 np0005634017 systemd[1]: Started libpod-conmon-d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951.scope.
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:56.980519697 +0000 UTC m=+0.031178194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:43:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:57.099965792 +0000 UTC m=+0.150624289 container init d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:57.106568195 +0000 UTC m=+0.157226662 container start d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:57.114566666 +0000 UTC m=+0.165225223 container attach d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 04:43:57 np0005634017 systemd[1]: libpod-d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951.scope: Deactivated successfully.
Feb 28 04:43:57 np0005634017 conmon[156962]: conmon d843b61f072247f5c4b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951.scope/container/memory.events
Feb 28 04:43:57 np0005634017 cool_kalam[156962]: 167 167
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:57.124015598 +0000 UTC m=+0.174674095 container died d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kalam, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:43:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-88c76445815b83d36bd6afae2a3de893a5e3e3481eabf4946a3be74da298fb40-merged.mount: Deactivated successfully.
Feb 28 04:43:57 np0005634017 podman[156944]: 2026-02-28 09:43:57.181872439 +0000 UTC m=+0.232530926 container remove d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:43:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:57 np0005634017 systemd[1]: libpod-conmon-d843b61f072247f5c4b62f260d9f9a9db5caf67a8c22b7d013f20aa52619f951.scope: Deactivated successfully.
Feb 28 04:43:57 np0005634017 podman[157039]: 2026-02-28 09:43:57.337446774 +0000 UTC m=+0.055848507 container create 26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:43:57 np0005634017 systemd[1]: Started libpod-conmon-26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b.scope.
Feb 28 04:43:57 np0005634017 podman[157039]: 2026-02-28 09:43:57.311552137 +0000 UTC m=+0.029953880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:43:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:43:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3779df332118fba360c6cf4843f3b3daf667dd8015e32e1296c4243ec6151de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3779df332118fba360c6cf4843f3b3daf667dd8015e32e1296c4243ec6151de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3779df332118fba360c6cf4843f3b3daf667dd8015e32e1296c4243ec6151de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3779df332118fba360c6cf4843f3b3daf667dd8015e32e1296c4243ec6151de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3779df332118fba360c6cf4843f3b3daf667dd8015e32e1296c4243ec6151de6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:57 np0005634017 podman[157039]: 2026-02-28 09:43:57.44643573 +0000 UTC m=+0.164837483 container init 26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_wozniak, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:43:57 np0005634017 podman[157039]: 2026-02-28 09:43:57.454102382 +0000 UTC m=+0.172504085 container start 26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:43:57 np0005634017 podman[157039]: 2026-02-28 09:43:57.457584199 +0000 UTC m=+0.175986002 container attach 26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:43:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:43:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:43:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:43:57 np0005634017 python3.9[157065]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.779 156681 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.780 156681 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.780 156681 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.780 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.780 156681 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.781 156681 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.781 156681 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.781 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.781 156681 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.781 156681 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.781 156681 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.782 156681 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.783 156681 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.784 156681 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.785 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.786 156681 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.787 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.788 156681 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.789 156681 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.790 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.791 156681 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.792 156681 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.793 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.794 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.795 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.796 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.797 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.798 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.799 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.800 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.801 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.802 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.803 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.804 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.805 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.806 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.807 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.808 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.809 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.810 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.811 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.812 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.813 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.814 156681 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.823 156681 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.823 156681 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.823 156681 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.823 156681 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.823 156681 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.834 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name cdfa85e0-fba9-4bed-b591-423dd0221b71 (UUID: cdfa85e0-fba9-4bed-b591-423dd0221b71) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.867 156681 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.867 156681 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.868 156681 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.868 156681 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.870 156681 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.875 156681 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.880 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'cdfa85e0-fba9-4bed-b591-423dd0221b71'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], external_ids={}, name=cdfa85e0-fba9-4bed-b591-423dd0221b71, nb_cfg_timestamp=1772271781125, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.881 156681 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fc619614a90>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.882 156681 INFO oslo_service.service [-] Starting 1 workers#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.886 156681 DEBUG oslo_service.service [-] Started child 157134 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.888 156681 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpp7sl2o7r/privsep.sock']#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.890 157134 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-955938'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.921 157134 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.921 157134 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.922 157134 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.927 157134 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.935 157134 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb 28 04:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:57.943 157134 INFO eventlet.wsgi.server [-] (157134) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Feb 28 04:43:57 np0005634017 great_wozniak[157075]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:43:57 np0005634017 great_wozniak[157075]: --> All data devices are unavailable
Feb 28 04:43:57 np0005634017 systemd[1]: libpod-26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b.scope: Deactivated successfully.
Feb 28 04:43:57 np0005634017 conmon[157075]: conmon 26da80326c6dea639f90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b.scope/container/memory.events
Feb 28 04:43:58 np0005634017 podman[157175]: 2026-02-28 09:43:58.005923843 +0000 UTC m=+0.022180075 container died 26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 04:43:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3779df332118fba360c6cf4843f3b3daf667dd8015e32e1296c4243ec6151de6-merged.mount: Deactivated successfully.
Feb 28 04:43:58 np0005634017 podman[157175]: 2026-02-28 09:43:58.04917705 +0000 UTC m=+0.065433292 container remove 26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_wozniak, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:43:58 np0005634017 systemd[1]: libpod-conmon-26da80326c6dea639f907642b93678a59ae531a800ca8e70f1c26c5cfde30b0b.scope: Deactivated successfully.
Feb 28 04:43:58 np0005634017 python3.9[157293]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:43:58 np0005634017 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.464 156681 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.465 156681 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp7sl2o7r/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.359 157319 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.361 157319 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.363 157319 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.363 157319 INFO oslo.privsep.daemon [-] privsep daemon running as pid 157319#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.467 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[febc0c31-5b83-4317-b0de-a1af6c9a9203]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.491196042 +0000 UTC m=+0.054156179 container create a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:43:58 np0005634017 systemd[1]: Started libpod-conmon-a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571.scope.
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.463318631 +0000 UTC m=+0.026278778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:43:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.585807601 +0000 UTC m=+0.148767798 container init a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.59986691 +0000 UTC m=+0.162827077 container start a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.604301292 +0000 UTC m=+0.167261519 container attach a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:43:58 np0005634017 dreamy_keller[157399]: 167 167
Feb 28 04:43:58 np0005634017 systemd[1]: libpod-a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571.scope: Deactivated successfully.
Feb 28 04:43:58 np0005634017 conmon[157399]: conmon a7db0c97070cb9a6395d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571.scope/container/memory.events
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.607497621 +0000 UTC m=+0.170457798 container died a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:43:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d35d78e1896f6f85d08a77f26bcd40c8a16fd61bdfb5322827cae9b9cf0f8fc2-merged.mount: Deactivated successfully.
Feb 28 04:43:58 np0005634017 podman[157347]: 2026-02-28 09:43:58.659321755 +0000 UTC m=+0.222281892 container remove a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 04:43:58 np0005634017 systemd[1]: libpod-conmon-a7db0c97070cb9a6395d65300afc43d55aec38094882a2cad53b19e75b6fb571.scope: Deactivated successfully.
Feb 28 04:43:58 np0005634017 podman[157499]: 2026-02-28 09:43:58.818255173 +0000 UTC m=+0.045072058 container create d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 28 04:43:58 np0005634017 systemd[1]: Started libpod-conmon-d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e.scope.
Feb 28 04:43:58 np0005634017 podman[157499]: 2026-02-28 09:43:58.796640485 +0000 UTC m=+0.023457390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:43:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:43:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d69475e39d221e3d75a68793ac4c6e63b8b75e789fb65b086eb6ef4cefe7bba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d69475e39d221e3d75a68793ac4c6e63b8b75e789fb65b086eb6ef4cefe7bba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d69475e39d221e3d75a68793ac4c6e63b8b75e789fb65b086eb6ef4cefe7bba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d69475e39d221e3d75a68793ac4c6e63b8b75e789fb65b086eb6ef4cefe7bba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.908 157319 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.908 157319 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:43:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:58.909 157319 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:43:58 np0005634017 python3.9[157508]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772271837.8992686-484-97847303047981/.source.yaml _original_basename=.336i4hcb follow=False checksum=c97b22d796edd75d18ff197fc4f532e7c5e6de36 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:43:58 np0005634017 podman[157499]: 2026-02-28 09:43:58.939138928 +0000 UTC m=+0.165955863 container init d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:43:58 np0005634017 podman[157499]: 2026-02-28 09:43:58.94608608 +0000 UTC m=+0.172902965 container start d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:43:58 np0005634017 podman[157499]: 2026-02-28 09:43:58.950247536 +0000 UTC m=+0.177064431 container attach d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 04:43:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]: {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:    "0": [
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:        {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "devices": [
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "/dev/loop3"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            ],
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_name": "ceph_lv0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_size": "21470642176",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "name": "ceph_lv0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "tags": {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cluster_name": "ceph",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.crush_device_class": "",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.encrypted": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.objectstore": "bluestore",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osd_id": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.type": "block",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.vdo": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.with_tpm": "0"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            },
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "type": "block",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "vg_name": "ceph_vg0"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:        }
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:    ],
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:    "1": [
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:        {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "devices": [
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "/dev/loop4"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            ],
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_name": "ceph_lv1",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_size": "21470642176",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "name": "ceph_lv1",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "tags": {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cluster_name": "ceph",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.crush_device_class": "",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.encrypted": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.objectstore": "bluestore",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osd_id": "1",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.type": "block",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.vdo": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.with_tpm": "0"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            },
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "type": "block",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "vg_name": "ceph_vg1"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:        }
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:    ],
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:    "2": [
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:        {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "devices": [
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "/dev/loop5"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            ],
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_name": "ceph_lv2",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_size": "21470642176",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "name": "ceph_lv2",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "tags": {
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.cluster_name": "ceph",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.crush_device_class": "",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.encrypted": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.objectstore": "bluestore",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osd_id": "2",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.type": "block",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.vdo": "0",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:                "ceph.with_tpm": "0"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            },
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "type": "block",
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:            "vg_name": "ceph_vg2"
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:        }
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]:    ]
Feb 28 04:43:59 np0005634017 sad_chaplygin[157519]: }
Feb 28 04:43:59 np0005634017 systemd[1]: libpod-d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e.scope: Deactivated successfully.
Feb 28 04:43:59 np0005634017 podman[157499]: 2026-02-28 09:43:59.23335511 +0000 UTC m=+0.460171975 container died d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:43:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0d69475e39d221e3d75a68793ac4c6e63b8b75e789fb65b086eb6ef4cefe7bba-merged.mount: Deactivated successfully.
Feb 28 04:43:59 np0005634017 podman[157499]: 2026-02-28 09:43:59.268929985 +0000 UTC m=+0.495746850 container remove d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 04:43:59 np0005634017 systemd[1]: libpod-conmon-d7d0c35082ec34ce3a1ad1340d216b18eab60114769b64fb3f61f849443c5a2e.scope: Deactivated successfully.
Feb 28 04:43:59 np0005634017 systemd[1]: session-47.scope: Deactivated successfully.
Feb 28 04:43:59 np0005634017 systemd[1]: session-47.scope: Consumed 49.819s CPU time.
Feb 28 04:43:59 np0005634017 systemd-logind[815]: Session 47 logged out. Waiting for processes to exit.
Feb 28 04:43:59 np0005634017 systemd-logind[815]: Removed session 47.
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.387 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6649d98d-2082-4f58-8510-7ca9f9b7bbe4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.389 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, column=external_ids, values=({'neutron:ovn-metadata-id': '176ff126-6106-58cb-a031-9d6ab1cd1b88'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.396 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.401 156681 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.401 156681 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.401 156681 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.401 156681 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.402 156681 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.403 156681 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.404 156681 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.405 156681 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.406 156681 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.407 156681 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.408 156681 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.409 156681 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.410 156681 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.411 156681 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.412 156681 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.413 156681 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.414 156681 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.415 156681 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.416 156681 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.417 156681 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.418 156681 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.419 156681 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.420 156681 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.421 156681 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.422 156681 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.423 156681 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.424 156681 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.425 156681 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.426 156681 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.427 156681 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.428 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.429 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.430 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.431 156681 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.432 156681 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:43:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:43:59.432 156681 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.673184692 +0000 UTC m=+0.037323144 container create 419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_yonath, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:43:59 np0005634017 systemd[1]: Started libpod-conmon-419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a.scope.
Feb 28 04:43:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.750760658 +0000 UTC m=+0.114899150 container init 419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.655342918 +0000 UTC m=+0.019481460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.758938485 +0000 UTC m=+0.123076937 container start 419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_yonath, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:43:59 np0005634017 elegant_yonath[157643]: 167 167
Feb 28 04:43:59 np0005634017 systemd[1]: libpod-419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a.scope: Deactivated successfully.
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.762235346 +0000 UTC m=+0.126373828 container attach 419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_yonath, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.762942935 +0000 UTC m=+0.127081387 container died 419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_yonath, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 04:43:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-07c40258cebbcb172f3e204076c353139a3e0bcbdd29310d1a937fd45772d033-merged.mount: Deactivated successfully.
Feb 28 04:43:59 np0005634017 podman[157627]: 2026-02-28 09:43:59.796133104 +0000 UTC m=+0.160271556 container remove 419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 04:43:59 np0005634017 systemd[1]: libpod-conmon-419aff3e5a633fcc04a7caa4113e215c047c80172b5b7679dd37169244c3e17a.scope: Deactivated successfully.
Feb 28 04:43:59 np0005634017 podman[157666]: 2026-02-28 09:43:59.943629316 +0000 UTC m=+0.048249597 container create 4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_germain, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:43:59 np0005634017 systemd[1]: Started libpod-conmon-4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5.scope.
Feb 28 04:44:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:44:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432f1150c8b45b735d3d804595be81bd02dc74965cd186ea4660886e236682f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:44:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432f1150c8b45b735d3d804595be81bd02dc74965cd186ea4660886e236682f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:44:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432f1150c8b45b735d3d804595be81bd02dc74965cd186ea4660886e236682f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:44:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432f1150c8b45b735d3d804595be81bd02dc74965cd186ea4660886e236682f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:44:00 np0005634017 podman[157666]: 2026-02-28 09:43:59.920375602 +0000 UTC m=+0.024995933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:44:00 np0005634017 podman[157666]: 2026-02-28 09:44:00.05075978 +0000 UTC m=+0.155380061 container init 4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 04:44:00 np0005634017 podman[157666]: 2026-02-28 09:44:00.059603685 +0000 UTC m=+0.164223956 container start 4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:44:00 np0005634017 podman[157666]: 2026-02-28 09:44:00.063523914 +0000 UTC m=+0.168144165 container attach 4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:44:00 np0005634017 lvm[157761]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:44:00 np0005634017 lvm[157761]: VG ceph_vg0 finished
Feb 28 04:44:00 np0005634017 lvm[157763]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:44:00 np0005634017 lvm[157763]: VG ceph_vg1 finished
Feb 28 04:44:00 np0005634017 lvm[157765]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:44:00 np0005634017 lvm[157765]: VG ceph_vg2 finished
Feb 28 04:44:00 np0005634017 inspiring_germain[157684]: {}
Feb 28 04:44:00 np0005634017 systemd[1]: libpod-4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5.scope: Deactivated successfully.
Feb 28 04:44:00 np0005634017 podman[157666]: 2026-02-28 09:44:00.815267435 +0000 UTC m=+0.919887706 container died 4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_germain, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 04:44:00 np0005634017 systemd[1]: libpod-4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5.scope: Consumed 1.125s CPU time.
Feb 28 04:44:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-432f1150c8b45b735d3d804595be81bd02dc74965cd186ea4660886e236682f7-merged.mount: Deactivated successfully.
Feb 28 04:44:00 np0005634017 podman[157666]: 2026-02-28 09:44:00.860558899 +0000 UTC m=+0.965179160 container remove 4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_germain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 04:44:00 np0005634017 systemd[1]: libpod-conmon-4cf3d5967b431c52e5355e59e49c26a51dc534dd7a636f64df29ddadb6b997f5.scope: Deactivated successfully.
Feb 28 04:44:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:44:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:44:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:44:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:44:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:44:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:44:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:04 np0005634017 systemd-logind[815]: New session 48 of user zuul.
Feb 28 04:44:04 np0005634017 systemd[1]: Started Session 48 of User zuul.
Feb 28 04:44:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:05 np0005634017 python3.9[157958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:44:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:06 np0005634017 python3.9[158115]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:08 np0005634017 python3.9[158281]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:44:08 np0005634017 systemd[1]: Reloading.
Feb 28 04:44:08 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:44:08 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:44:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:09 np0005634017 python3.9[158472]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:44:09 np0005634017 network[158489]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:44:09 np0005634017 network[158490]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:44:09 np0005634017 network[158491]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:44:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:12 np0005634017 python3.9[158755]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:13 np0005634017 python3.9[158909]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:14 np0005634017 python3.9[159063]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:15 np0005634017 python3.9[159217]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:44:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5611 writes, 24K keys, 5611 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5611 writes, 894 syncs, 6.28 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5611 writes, 24K keys, 5611 commit groups, 1.0 writes per commit group, ingest: 18.61 MB, 0.03 MB/s#012Interval WAL: 5611 writes, 894 syncs, 6.28 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Feb 28 04:44:16 np0005634017 python3.9[159371]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:17 np0005634017 python3.9[159525]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:18 np0005634017 python3.9[159679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:44:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:19 np0005634017 python3.9[159833]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:20 np0005634017 python3.9[159986]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:20 np0005634017 python3.9[160139]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:21 np0005634017 python3.9[160292]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:21 np0005634017 python3.9[160445]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:22 np0005634017 python3.9[160598]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:44:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.6 total, 600.0 interval#012Cumulative writes: 6944 writes, 28K keys, 6944 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6944 writes, 1334 syncs, 5.21 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6944 writes, 28K keys, 6944 commit groups, 1.0 writes per commit group, ingest: 19.70 MB, 0.03 MB/s#012Interval WAL: 6944 writes, 1334 syncs, 5.21 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.6 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.9 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.6 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.6 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Feb 28 04:44:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:23 np0005634017 python3.9[160751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:23 np0005634017 python3.9[160904]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:24 np0005634017 python3.9[161057]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:25 np0005634017 podman[161181]: 2026-02-28 09:44:25.030604116 +0000 UTC m=+0.099735263 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true)
Feb 28 04:44:25 np0005634017 python3.9[161230]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:25 np0005634017 python3.9[161391]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:26 np0005634017 podman[161515]: 2026-02-28 09:44:26.353056551 +0000 UTC m=+0.069767069 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 04:44:26 np0005634017 python3.9[161564]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:27 np0005634017 python3.9[161717]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:27 np0005634017 python3.9[161870]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:44:28 np0005634017 python3.9[162023]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 04:44:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5441 writes, 23K keys, 5441 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5441 writes, 789 syncs, 6.90 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5441 writes, 23K keys, 5441 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s#012Interval WAL: 5441 writes, 789 syncs, 6.90 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb 28 04:44:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:44:28
Feb 28 04:44:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:44:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:44:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.rgw.root']
Feb 28 04:44:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:44:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:29 np0005634017 python3.9[162175]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 28 04:44:30 np0005634017 python3.9[162328]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:44:30 np0005634017 systemd[1]: Reloading.
Feb 28 04:44:30 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:44:30 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:44:30 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 04:44:31 np0005634017 python3.9[162523]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:31 np0005634017 python3.9[162677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:32 np0005634017 python3.9[162831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:33 np0005634017 python3.9[162985]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:33 np0005634017 python3.9[163139]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:34 np0005634017 python3.9[163293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:34 np0005634017 python3.9[163447]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:44:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:35 np0005634017 python3.9[163601]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 28 04:44:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:36 np0005634017 python3.9[163755]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 28 04:44:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:37 np0005634017 python3.9[163914]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 28 04:44:37 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 04:44:39 np0005634017 python3.9[164076]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:44:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:39 np0005634017 python3.9[164161]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:44:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:44:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:56 np0005634017 podman[164347]: 2026-02-28 09:44:56.140485076 +0000 UTC m=+0.082960849 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:44:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:44:57 np0005634017 podman[164380]: 2026-02-28 09:44:57.146330477 +0000 UTC m=+0.086192853 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 04:44:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:44:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:44:57.824 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:44:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:44:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:44:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:44:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:44:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:45:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.198057927 +0000 UTC m=+0.051506534 container create 66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:45:02 np0005634017 systemd[1]: Started libpod-conmon-66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776.scope.
Feb 28 04:45:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.171387809 +0000 UTC m=+0.024836456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.278364139 +0000 UTC m=+0.131812746 container init 66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.285381691 +0000 UTC m=+0.138830258 container start 66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_turing, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.288328166 +0000 UTC m=+0.141776743 container attach 66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:45:02 np0005634017 elated_turing[164560]: 167 167
Feb 28 04:45:02 np0005634017 systemd[1]: libpod-66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776.scope: Deactivated successfully.
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.29472444 +0000 UTC m=+0.148173037 container died 66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_turing, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:45:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-65bb9538db730f43d9a549ac5822ecb820b7f427dbaf143e1ad0f377c26243a2-merged.mount: Deactivated successfully.
Feb 28 04:45:02 np0005634017 podman[164543]: 2026-02-28 09:45:02.345932215 +0000 UTC m=+0.199380822 container remove 66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_turing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 04:45:02 np0005634017 systemd[1]: libpod-conmon-66ef2ec10cf6c2f762002b8ce5fd4b061cfb0a3fb78c73f2ba7b5fb0e59f2776.scope: Deactivated successfully.
Feb 28 04:45:02 np0005634017 podman[164584]: 2026-02-28 09:45:02.504198301 +0000 UTC m=+0.045896412 container create 82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khorana, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 04:45:02 np0005634017 systemd[1]: Started libpod-conmon-82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6.scope.
Feb 28 04:45:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:45:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339f7da1ebb2edfb22685f4d50766a216ed836e0cbe875cbf9c84bc3834f5ec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339f7da1ebb2edfb22685f4d50766a216ed836e0cbe875cbf9c84bc3834f5ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339f7da1ebb2edfb22685f4d50766a216ed836e0cbe875cbf9c84bc3834f5ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339f7da1ebb2edfb22685f4d50766a216ed836e0cbe875cbf9c84bc3834f5ec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339f7da1ebb2edfb22685f4d50766a216ed836e0cbe875cbf9c84bc3834f5ec4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:02 np0005634017 podman[164584]: 2026-02-28 09:45:02.487201492 +0000 UTC m=+0.028899613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:45:02 np0005634017 podman[164584]: 2026-02-28 09:45:02.59619408 +0000 UTC m=+0.137892241 container init 82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:45:02 np0005634017 podman[164584]: 2026-02-28 09:45:02.602863372 +0000 UTC m=+0.144561513 container start 82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:45:02 np0005634017 podman[164584]: 2026-02-28 09:45:02.606855127 +0000 UTC m=+0.148553258 container attach 82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khorana, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 04:45:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 04:45:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:45:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:45:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:45:03 np0005634017 heuristic_khorana[164600]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:45:03 np0005634017 heuristic_khorana[164600]: --> All data devices are unavailable
Feb 28 04:45:03 np0005634017 systemd[1]: libpod-82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6.scope: Deactivated successfully.
Feb 28 04:45:03 np0005634017 podman[164584]: 2026-02-28 09:45:03.065355389 +0000 UTC m=+0.607053500 container died 82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khorana, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 04:45:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-339f7da1ebb2edfb22685f4d50766a216ed836e0cbe875cbf9c84bc3834f5ec4-merged.mount: Deactivated successfully.
Feb 28 04:45:03 np0005634017 podman[164584]: 2026-02-28 09:45:03.109579022 +0000 UTC m=+0.651277143 container remove 82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:45:03 np0005634017 systemd[1]: libpod-conmon-82ba13acba3f3cce6f086b25d207f24273c0da03d6b77faeb82930ad7cc8bdb6.scope: Deactivated successfully.
Feb 28 04:45:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.531982954 +0000 UTC m=+0.050701451 container create 2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:45:03 np0005634017 systemd[1]: Started libpod-conmon-2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9.scope.
Feb 28 04:45:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.588270705 +0000 UTC m=+0.106989222 container init 2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.598980453 +0000 UTC m=+0.117698950 container start 2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 04:45:03 np0005634017 nice_jennings[164713]: 167 167
Feb 28 04:45:03 np0005634017 systemd[1]: libpod-2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9.scope: Deactivated successfully.
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.602499004 +0000 UTC m=+0.121217551 container attach 2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.60340123 +0000 UTC m=+0.122119737 container died 2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.510847425 +0000 UTC m=+0.029565952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:45:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d2c2fdd53eb6cc42b38c76801d9d8181406c69543574e9ff50eba69c789c22e6-merged.mount: Deactivated successfully.
Feb 28 04:45:03 np0005634017 podman[164696]: 2026-02-28 09:45:03.636871234 +0000 UTC m=+0.155589741 container remove 2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:45:03 np0005634017 systemd[1]: libpod-conmon-2bf90af8080df862f0168884203dc334481aa31339b6b073428c4225393d99f9.scope: Deactivated successfully.
Feb 28 04:45:03 np0005634017 podman[164737]: 2026-02-28 09:45:03.765519538 +0000 UTC m=+0.042681200 container create 14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb 28 04:45:03 np0005634017 systemd[1]: Started libpod-conmon-14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7.scope.
Feb 28 04:45:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:45:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2021260f0e7bf386594957bf73c2f0d98b1f8c558d9d78367bb1a18e5175b6a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2021260f0e7bf386594957bf73c2f0d98b1f8c558d9d78367bb1a18e5175b6a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2021260f0e7bf386594957bf73c2f0d98b1f8c558d9d78367bb1a18e5175b6a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2021260f0e7bf386594957bf73c2f0d98b1f8c558d9d78367bb1a18e5175b6a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:03 np0005634017 podman[164737]: 2026-02-28 09:45:03.744529244 +0000 UTC m=+0.021690936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:45:03 np0005634017 podman[164737]: 2026-02-28 09:45:03.859952457 +0000 UTC m=+0.137114249 container init 14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_jennings, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:45:03 np0005634017 podman[164737]: 2026-02-28 09:45:03.867785613 +0000 UTC m=+0.144947315 container start 14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:45:03 np0005634017 podman[164737]: 2026-02-28 09:45:03.872341764 +0000 UTC m=+0.149503456 container attach 14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:45:04 np0005634017 serene_jennings[164755]: {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:    "0": [
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:        {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "devices": [
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "/dev/loop3"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            ],
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_name": "ceph_lv0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_size": "21470642176",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "name": "ceph_lv0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "tags": {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cluster_name": "ceph",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.crush_device_class": "",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.encrypted": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.objectstore": "bluestore",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osd_id": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.type": "block",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.vdo": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.with_tpm": "0"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            },
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "type": "block",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "vg_name": "ceph_vg0"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:        }
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:    ],
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:    "1": [
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:        {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "devices": [
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "/dev/loop4"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            ],
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_name": "ceph_lv1",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_size": "21470642176",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "name": "ceph_lv1",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "tags": {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cluster_name": "ceph",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.crush_device_class": "",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.encrypted": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.objectstore": "bluestore",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osd_id": "1",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.type": "block",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.vdo": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.with_tpm": "0"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            },
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "type": "block",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "vg_name": "ceph_vg1"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:        }
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:    ],
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:    "2": [
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:        {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "devices": [
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "/dev/loop5"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            ],
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_name": "ceph_lv2",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_size": "21470642176",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "name": "ceph_lv2",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "tags": {
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.cluster_name": "ceph",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.crush_device_class": "",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.encrypted": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.objectstore": "bluestore",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osd_id": "2",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.type": "block",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.vdo": "0",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:                "ceph.with_tpm": "0"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            },
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "type": "block",
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:            "vg_name": "ceph_vg2"
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:        }
Feb 28 04:45:04 np0005634017 serene_jennings[164755]:    ]
Feb 28 04:45:04 np0005634017 serene_jennings[164755]: }
Feb 28 04:45:04 np0005634017 systemd[1]: libpod-14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7.scope: Deactivated successfully.
Feb 28 04:45:04 np0005634017 podman[164737]: 2026-02-28 09:45:04.190271058 +0000 UTC m=+0.467432750 container died 14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:45:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2021260f0e7bf386594957bf73c2f0d98b1f8c558d9d78367bb1a18e5175b6a3-merged.mount: Deactivated successfully.
Feb 28 04:45:04 np0005634017 podman[164737]: 2026-02-28 09:45:04.240125393 +0000 UTC m=+0.517287065 container remove 14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 04:45:04 np0005634017 systemd[1]: libpod-conmon-14003b85565e616331c2358c837197ab7554242fcac58a3f7fe5e81889bc9ee7.scope: Deactivated successfully.
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.647007278 +0000 UTC m=+0.049591489 container create a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 04:45:04 np0005634017 systemd[1]: Started libpod-conmon-a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a.scope.
Feb 28 04:45:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.708796507 +0000 UTC m=+0.111380718 container init a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.713983337 +0000 UTC m=+0.116567558 container start a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 04:45:04 np0005634017 modest_banach[164857]: 167 167
Feb 28 04:45:04 np0005634017 systemd[1]: libpod-a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a.scope: Deactivated successfully.
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.717627942 +0000 UTC m=+0.120212163 container attach a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.719117475 +0000 UTC m=+0.121701726 container died a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.63248198 +0000 UTC m=+0.035066211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:45:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dcf064e3d9dfe32ba2c3bf04c3c48e310cf94ef74a92abbdb412600e52ffdfbe-merged.mount: Deactivated successfully.
Feb 28 04:45:04 np0005634017 podman[164841]: 2026-02-28 09:45:04.763227995 +0000 UTC m=+0.165812216 container remove a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:45:04 np0005634017 systemd[1]: libpod-conmon-a60c28bc6de6a91c8c2b84ed102e74b27d004f89294bbbf4183dbc4482b00c1a.scope: Deactivated successfully.
Feb 28 04:45:04 np0005634017 podman[164882]: 2026-02-28 09:45:04.935616438 +0000 UTC m=+0.053172022 container create aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:45:04 np0005634017 systemd[1]: Started libpod-conmon-aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4.scope.
Feb 28 04:45:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:45:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2835b6cbc3d96464a79a2decd7952bdf2ca0580a35e55ad6fd249369ea3991/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2835b6cbc3d96464a79a2decd7952bdf2ca0580a35e55ad6fd249369ea3991/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2835b6cbc3d96464a79a2decd7952bdf2ca0580a35e55ad6fd249369ea3991/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2835b6cbc3d96464a79a2decd7952bdf2ca0580a35e55ad6fd249369ea3991/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:45:05 np0005634017 podman[164882]: 2026-02-28 09:45:04.913725308 +0000 UTC m=+0.031280952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:45:05 np0005634017 podman[164882]: 2026-02-28 09:45:05.034609517 +0000 UTC m=+0.152165101 container init aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:45:05 np0005634017 podman[164882]: 2026-02-28 09:45:05.039596621 +0000 UTC m=+0.157152235 container start aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:45:05 np0005634017 podman[164882]: 2026-02-28 09:45:05.0430371 +0000 UTC m=+0.160592704 container attach aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:45:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:05 np0005634017 lvm[164977]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:45:05 np0005634017 lvm[164977]: VG ceph_vg1 finished
Feb 28 04:45:05 np0005634017 lvm[164976]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:45:05 np0005634017 lvm[164976]: VG ceph_vg0 finished
Feb 28 04:45:05 np0005634017 lvm[164979]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:45:05 np0005634017 lvm[164979]: VG ceph_vg2 finished
Feb 28 04:45:05 np0005634017 elated_chaplygin[164898]: {}
Feb 28 04:45:05 np0005634017 systemd[1]: libpod-aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4.scope: Deactivated successfully.
Feb 28 04:45:05 np0005634017 podman[164882]: 2026-02-28 09:45:05.740556953 +0000 UTC m=+0.858112557 container died aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:45:05 np0005634017 systemd[1]: libpod-aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4.scope: Consumed 1.023s CPU time.
Feb 28 04:45:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4c2835b6cbc3d96464a79a2decd7952bdf2ca0580a35e55ad6fd249369ea3991-merged.mount: Deactivated successfully.
Feb 28 04:45:05 np0005634017 podman[164882]: 2026-02-28 09:45:05.781988006 +0000 UTC m=+0.899543590 container remove aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_chaplygin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:45:05 np0005634017 systemd[1]: libpod-conmon-aff952f558877df4e94d0aa6823cb500098fd79ebf3515d0355cd446b2cb3ba4.scope: Deactivated successfully.
Feb 28 04:45:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:45:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:45:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:45:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:45:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:06 np0005634017 kernel: SELinux:  Converting 2778 SID table entries...
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:45:06 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:45:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:45:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:45:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Feb 28 04:45:15 np0005634017 kernel: SELinux:  Converting 2778 SID table entries...
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:45:15 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:45:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Feb 28 04:45:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 04:45:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 04:45:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 04:45:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 04:45:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:27 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 28 04:45:27 np0005634017 podman[165449]: 2026-02-28 09:45:27.194901632 +0000 UTC m=+0.121007235 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:45:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Feb 28 04:45:27 np0005634017 podman[165570]: 2026-02-28 09:45:27.256712651 +0000 UTC m=+0.057308811 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:45:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:45:28
Feb 28 04:45:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:45:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:45:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'backups', 'vms']
Feb 28 04:45:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:45:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:45:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:45:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:45:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:45:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:45:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:45:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:45:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:45:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:45:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:45:58 np0005634017 podman[181982]: 2026-02-28 09:45:58.127773949 +0000 UTC m=+0.065558309 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:45:58 np0005634017 podman[181981]: 2026-02-28 09:45:58.149280288 +0000 UTC m=+0.088302794 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, 
org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 04:45:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:46:00 np0005634017 kernel: SELinux:  Converting 2779 SID table entries...
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability network_peer_controls=1
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability open_perms=1
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability extended_socket_class=1
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability always_check_network=0
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 28 04:46:00 np0005634017 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 28 04:46:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:01 np0005634017 dbus-broker-launch[803]: Noticed file-system modification, trigger reload.
Feb 28 04:46:01 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 28 04:46:01 np0005634017 dbus-broker-launch[803]: Noticed file-system modification, trigger reload.
Feb 28 04:46:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:06 np0005634017 podman[182357]: 2026-02-28 09:46:06.451544748 +0000 UTC m=+0.069930975 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:46:06 np0005634017 podman[182357]: 2026-02-28 09:46:06.550219959 +0000 UTC m=+0.168606186 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:46:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:46:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:46:08 np0005634017 podman[182742]: 2026-02-28 09:46:08.583456521 +0000 UTC m=+0.114714134 container create f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:46:08 np0005634017 podman[182742]: 2026-02-28 09:46:08.498275778 +0000 UTC m=+0.029533381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:46:08 np0005634017 systemd[1]: Started libpod-conmon-f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da.scope.
Feb 28 04:46:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:46:08 np0005634017 podman[182742]: 2026-02-28 09:46:08.714636008 +0000 UTC m=+0.245893621 container init f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_solomon, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 04:46:08 np0005634017 podman[182742]: 2026-02-28 09:46:08.722970628 +0000 UTC m=+0.254228231 container start f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:46:08 np0005634017 musing_solomon[182858]: 167 167
Feb 28 04:46:08 np0005634017 systemd[1]: libpod-f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da.scope: Deactivated successfully.
Feb 28 04:46:08 np0005634017 podman[182742]: 2026-02-28 09:46:08.739786682 +0000 UTC m=+0.271044275 container attach f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_solomon, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:46:08 np0005634017 podman[182742]: 2026-02-28 09:46:08.7410947 +0000 UTC m=+0.272352303 container died f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_solomon, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:46:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b9ead74f6ed946bf4943b6ec9c8f9f2aeadb86e112ef8b34e5e7c5093ace3bcc-merged.mount: Deactivated successfully.
Feb 28 04:46:09 np0005634017 podman[182742]: 2026-02-28 09:46:09.106360517 +0000 UTC m=+0.637618110 container remove f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 04:46:09 np0005634017 systemd[1]: libpod-conmon-f48a7083d2fdcf306fc356bbec4ff799d42bf22f3561a38c196502371199b5da.scope: Deactivated successfully.
Feb 28 04:46:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:09 np0005634017 podman[183245]: 2026-02-28 09:46:09.316458986 +0000 UTC m=+0.091287070 container create 8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:46:09 np0005634017 podman[183245]: 2026-02-28 09:46:09.260913736 +0000 UTC m=+0.035741870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:46:09 np0005634017 systemd[1]: Started libpod-conmon-8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103.scope.
Feb 28 04:46:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:46:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27bf8d27580c8aaec560c5cc05d7d80df5871aa4ab9e42b5abdc7af5ec01db3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27bf8d27580c8aaec560c5cc05d7d80df5871aa4ab9e42b5abdc7af5ec01db3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27bf8d27580c8aaec560c5cc05d7d80df5871aa4ab9e42b5abdc7af5ec01db3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27bf8d27580c8aaec560c5cc05d7d80df5871aa4ab9e42b5abdc7af5ec01db3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27bf8d27580c8aaec560c5cc05d7d80df5871aa4ab9e42b5abdc7af5ec01db3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:09 np0005634017 podman[183245]: 2026-02-28 09:46:09.484638807 +0000 UTC m=+0.259466971 container init 8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 04:46:09 np0005634017 podman[183245]: 2026-02-28 09:46:09.493794591 +0000 UTC m=+0.268622695 container start 8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 04:46:09 np0005634017 podman[183245]: 2026-02-28 09:46:09.517443792 +0000 UTC m=+0.292271886 container attach 8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_banach, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:46:09 np0005634017 systemd[1]: Stopping OpenSSH server daemon...
Feb 28 04:46:09 np0005634017 systemd[1]: sshd.service: Deactivated successfully.
Feb 28 04:46:09 np0005634017 systemd[1]: Stopped OpenSSH server daemon.
Feb 28 04:46:09 np0005634017 systemd[1]: sshd.service: Consumed 3.362s CPU time, read 32.0K from disk, written 4.0K to disk.
Feb 28 04:46:09 np0005634017 systemd[1]: Stopped target sshd-keygen.target.
Feb 28 04:46:09 np0005634017 systemd[1]: Stopping sshd-keygen.target...
Feb 28 04:46:09 np0005634017 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 28 04:46:09 np0005634017 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 28 04:46:09 np0005634017 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 28 04:46:09 np0005634017 systemd[1]: Reached target sshd-keygen.target.
Feb 28 04:46:09 np0005634017 systemd[1]: Starting OpenSSH server daemon...
Feb 28 04:46:09 np0005634017 systemd[1]: Started OpenSSH server daemon.
Feb 28 04:46:10 np0005634017 nervous_banach[183362]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:46:10 np0005634017 nervous_banach[183362]: --> All data devices are unavailable
Feb 28 04:46:10 np0005634017 podman[183245]: 2026-02-28 09:46:10.029026261 +0000 UTC m=+0.803854325 container died 8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_banach, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:46:10 np0005634017 systemd[1]: libpod-8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103.scope: Deactivated successfully.
Feb 28 04:46:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d27bf8d27580c8aaec560c5cc05d7d80df5871aa4ab9e42b5abdc7af5ec01db3-merged.mount: Deactivated successfully.
Feb 28 04:46:10 np0005634017 podman[183245]: 2026-02-28 09:46:10.269186096 +0000 UTC m=+1.044014150 container remove 8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:46:10 np0005634017 systemd[1]: libpod-conmon-8442ae053eeb94ca50445e29cd560f65ef0a8e7e7348e9ca7ee4a397f627e103.scope: Deactivated successfully.
Feb 28 04:46:10 np0005634017 podman[183597]: 2026-02-28 09:46:10.726295878 +0000 UTC m=+0.033241198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:46:10 np0005634017 podman[183597]: 2026-02-28 09:46:10.880386274 +0000 UTC m=+0.187331504 container create 76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hertz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:46:11 np0005634017 systemd[1]: Started libpod-conmon-76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359.scope.
Feb 28 04:46:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:46:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:11 np0005634017 podman[183597]: 2026-02-28 09:46:11.193819149 +0000 UTC m=+0.500764459 container init 76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hertz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 04:46:11 np0005634017 podman[183597]: 2026-02-28 09:46:11.201706626 +0000 UTC m=+0.508651896 container start 76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hertz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:46:11 np0005634017 intelligent_hertz[183659]: 167 167
Feb 28 04:46:11 np0005634017 systemd[1]: libpod-76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359.scope: Deactivated successfully.
Feb 28 04:46:11 np0005634017 conmon[183659]: conmon 76bdab9d136aabdf3627 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359.scope/container/memory.events
Feb 28 04:46:11 np0005634017 podman[183597]: 2026-02-28 09:46:11.241294916 +0000 UTC m=+0.548240146 container attach 76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hertz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 28 04:46:11 np0005634017 podman[183597]: 2026-02-28 09:46:11.242236613 +0000 UTC m=+0.549181853 container died 76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hertz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:46:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a12b8d919d1447bdee1d8c66a444ed9891d5cc0bf297b96361f92bc5fd26229b-merged.mount: Deactivated successfully.
Feb 28 04:46:11 np0005634017 podman[183597]: 2026-02-28 09:46:11.503574407 +0000 UTC m=+0.810519637 container remove 76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hertz, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:46:11 np0005634017 systemd[1]: libpod-conmon-76bdab9d136aabdf3627dc4299e554b4c72e0ccf464148dd06deddf12a5be359.scope: Deactivated successfully.
Feb 28 04:46:11 np0005634017 podman[183696]: 2026-02-28 09:46:11.683505218 +0000 UTC m=+0.048365013 container create 3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:46:11 np0005634017 systemd[1]: Started libpod-conmon-3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433.scope.
Feb 28 04:46:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:46:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad4acbe100f148b64041c65f46cf384cd75ddb2e23361faa8df3b6b465a166f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad4acbe100f148b64041c65f46cf384cd75ddb2e23361faa8df3b6b465a166f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad4acbe100f148b64041c65f46cf384cd75ddb2e23361faa8df3b6b465a166f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fad4acbe100f148b64041c65f46cf384cd75ddb2e23361faa8df3b6b465a166f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:11 np0005634017 podman[183696]: 2026-02-28 09:46:11.667490457 +0000 UTC m=+0.032350232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:46:11 np0005634017 podman[183696]: 2026-02-28 09:46:11.815517079 +0000 UTC m=+0.180376894 container init 3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 04:46:11 np0005634017 podman[183696]: 2026-02-28 09:46:11.826403382 +0000 UTC m=+0.191263167 container start 3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_khayyam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 04:46:11 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:46:11 np0005634017 podman[183696]: 2026-02-28 09:46:11.860836854 +0000 UTC m=+0.225696649 container attach 3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 04:46:11 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:46:11 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:12 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:12 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]: {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:    "0": [
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:        {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "devices": [
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "/dev/loop3"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            ],
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_name": "ceph_lv0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_size": "21470642176",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "name": "ceph_lv0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "tags": {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cluster_name": "ceph",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.crush_device_class": "",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.encrypted": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.objectstore": "bluestore",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osd_id": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.type": "block",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.vdo": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.with_tpm": "0"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            },
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "type": "block",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "vg_name": "ceph_vg0"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:        }
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:    ],
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:    "1": [
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:        {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "devices": [
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "/dev/loop4"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            ],
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_name": "ceph_lv1",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_size": "21470642176",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "name": "ceph_lv1",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "tags": {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cluster_name": "ceph",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.crush_device_class": "",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.encrypted": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.objectstore": "bluestore",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osd_id": "1",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.type": "block",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.vdo": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.with_tpm": "0"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            },
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "type": "block",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "vg_name": "ceph_vg1"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:        }
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:    ],
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:    "2": [
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:        {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "devices": [
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "/dev/loop5"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            ],
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_name": "ceph_lv2",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_size": "21470642176",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "name": "ceph_lv2",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "tags": {
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.cluster_name": "ceph",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.crush_device_class": "",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.encrypted": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.objectstore": "bluestore",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osd_id": "2",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.type": "block",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.vdo": "0",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:                "ceph.with_tpm": "0"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            },
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "type": "block",
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:            "vg_name": "ceph_vg2"
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:        }
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]:    ]
Feb 28 04:46:12 np0005634017 eager_khayyam[183717]: }
Feb 28 04:46:12 np0005634017 podman[183696]: 2026-02-28 09:46:12.121905351 +0000 UTC m=+0.486765136 container died 3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 04:46:12 np0005634017 systemd[1]: libpod-3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433.scope: Deactivated successfully.
Feb 28 04:46:12 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:46:12 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fad4acbe100f148b64041c65f46cf384cd75ddb2e23361faa8df3b6b465a166f-merged.mount: Deactivated successfully.
Feb 28 04:46:12 np0005634017 podman[183696]: 2026-02-28 09:46:12.229407676 +0000 UTC m=+0.594267461 container remove 3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:46:12 np0005634017 systemd[1]: libpod-conmon-3aa01af6898e8408aa4c562d28ed7cca8561d093fb56209dca29d6668d39f433.scope: Deactivated successfully.
Feb 28 04:46:12 np0005634017 podman[184575]: 2026-02-28 09:46:12.606885785 +0000 UTC m=+0.026827044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:46:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:13 np0005634017 podman[184575]: 2026-02-28 09:46:13.636297933 +0000 UTC m=+1.056239162 container create 577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:46:14 np0005634017 systemd[1]: Started libpod-conmon-577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc.scope.
Feb 28 04:46:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:46:14 np0005634017 podman[184575]: 2026-02-28 09:46:14.057553822 +0000 UTC m=+1.477495081 container init 577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:46:14 np0005634017 podman[184575]: 2026-02-28 09:46:14.06615898 +0000 UTC m=+1.486100219 container start 577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:46:14 np0005634017 podman[184575]: 2026-02-28 09:46:14.070904266 +0000 UTC m=+1.490845525 container attach 577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:46:14 np0005634017 nice_morse[186684]: 167 167
Feb 28 04:46:14 np0005634017 systemd[1]: libpod-577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc.scope: Deactivated successfully.
Feb 28 04:46:14 np0005634017 podman[184575]: 2026-02-28 09:46:14.072522493 +0000 UTC m=+1.492463762 container died 577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:46:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1a1418adfec18f850a4648d901702ebdd3aece1e6c78ab6f117111e1c414eb4f-merged.mount: Deactivated successfully.
Feb 28 04:46:14 np0005634017 podman[184575]: 2026-02-28 09:46:14.11617375 +0000 UTC m=+1.536114999 container remove 577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:46:14 np0005634017 systemd[1]: libpod-conmon-577ccec5bf08b16a988164934da6f3b1004406118d145e78448ad48a5a67a8fc.scope: Deactivated successfully.
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.235611259 +0000 UTC m=+0.034206366 container create ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_pike, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 04:46:14 np0005634017 systemd[1]: Started libpod-conmon-ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad.scope.
Feb 28 04:46:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:46:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf85cfb8ae6e77525807df6100bebaf846257bf3457215b2d90c6f27277cb9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf85cfb8ae6e77525807df6100bebaf846257bf3457215b2d90c6f27277cb9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf85cfb8ae6e77525807df6100bebaf846257bf3457215b2d90c6f27277cb9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf85cfb8ae6e77525807df6100bebaf846257bf3457215b2d90c6f27277cb9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.305304495 +0000 UTC m=+0.103899612 container init ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.310390802 +0000 UTC m=+0.108985949 container start ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.314724606 +0000 UTC m=+0.113319763 container attach ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.21934142 +0000 UTC m=+0.017936547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:46:14 np0005634017 lvm[188103]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:46:14 np0005634017 lvm[188116]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:46:14 np0005634017 lvm[188116]: VG ceph_vg1 finished
Feb 28 04:46:14 np0005634017 lvm[188103]: VG ceph_vg0 finished
Feb 28 04:46:14 np0005634017 lvm[188147]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:46:14 np0005634017 lvm[188147]: VG ceph_vg2 finished
Feb 28 04:46:14 np0005634017 naughty_pike[187203]: {}
Feb 28 04:46:14 np0005634017 systemd[1]: libpod-ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad.scope: Deactivated successfully.
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.931453184 +0000 UTC m=+0.730048281 container died ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:46:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3cf85cfb8ae6e77525807df6100bebaf846257bf3457215b2d90c6f27277cb9a-merged.mount: Deactivated successfully.
Feb 28 04:46:14 np0005634017 podman[187003]: 2026-02-28 09:46:14.973545645 +0000 UTC m=+0.772140752 container remove ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_pike, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:46:14 np0005634017 systemd[1]: libpod-conmon-ce0e931aa350d78458188d67714181659ca13d0dfaf02a9da7ea5aee8d87a0ad.scope: Deactivated successfully.
Feb 28 04:46:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:46:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:46:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:15 np0005634017 python3.9[189367]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:46:15 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:16 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:16 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:46:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:17 np0005634017 python3.9[190904]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:46:17 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:17 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:17 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:18 np0005634017 python3.9[192309]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:46:18 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:18 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:18 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:18 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:46:18 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:46:18 np0005634017 systemd[1]: man-db-cache-update.service: Consumed 8.584s CPU time.
Feb 28 04:46:18 np0005634017 systemd[1]: run-r643853093f7842fbb5e99e41edc42321.service: Deactivated successfully.
Feb 28 04:46:19 np0005634017 python3.9[193197]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:46:19 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:19 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:19 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:20 np0005634017 python3.9[193394]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:20 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:20 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:20 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:21 np0005634017 python3.9[193592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:21 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:21 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:21 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:22 np0005634017 python3.9[193790]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:22 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:22 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:22 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:23 np0005634017 python3.9[193988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:24 np0005634017 python3.9[194144]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:24 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:24 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:24 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:25 np0005634017 python3.9[194342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 28 04:46:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:25 np0005634017 systemd[1]: Reloading.
Feb 28 04:46:25 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:46:25 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:46:25 np0005634017 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 28 04:46:25 np0005634017 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 28 04:46:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:26 np0005634017 python3.9[194543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:27 np0005634017 python3.9[194699]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:28 np0005634017 python3.9[194855]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:28 np0005634017 podman[194858]: 2026-02-28 09:46:28.391574089 +0000 UTC m=+0.079518530 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 28 04:46:28 np0005634017 podman[194857]: 2026-02-28 09:46:28.418848525 +0000 UTC m=+0.107936349 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 28 04:46:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:46:28
Feb 28 04:46:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:46:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:46:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', '.rgw.root', 'vms', 'default.rgw.control', 'images']
Feb 28 04:46:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:46:29 np0005634017 python3.9[195055]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:29 np0005634017 python3.9[195211]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:46:30 np0005634017 python3.9[195367]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:31 np0005634017 python3.9[195523]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:32 np0005634017 python3.9[195679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:33 np0005634017 python3.9[195835]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:33 np0005634017 python3.9[195991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:34 np0005634017 python3.9[196147]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:35 np0005634017 python3.9[196303]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:36 np0005634017 python3.9[196459]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:37 np0005634017 python3.9[196615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 28 04:46:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:38 np0005634017 python3.9[196771]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:46:38 np0005634017 python3.9[196924]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:46:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:39 np0005634017 python3.9[197077]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:46:40 np0005634017 python3.9[197230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:46:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:46:40 np0005634017 python3.9[197383]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:46:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:41 np0005634017 python3.9[197536]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:46:42 np0005634017 python3.9[197686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:46:43 np0005634017 python3.9[197839]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:43 np0005634017 python3.9[197965]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272002.380095-557-89236137910906/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:44 np0005634017 python3.9[198118]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:44 np0005634017 python3.9[198244]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272003.9602034-557-170330857284822/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:45 np0005634017 python3.9[198397]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:46 np0005634017 python3.9[198523]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272005.1066115-557-84650312930750/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:46 np0005634017 python3.9[198676]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:47 np0005634017 python3.9[198802]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272006.2933972-557-184081923138049/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:48 np0005634017 python3.9[198955]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:48 np0005634017 python3.9[199081]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272007.4948127-557-246974665954458/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:49 np0005634017 python3.9[199234]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:49 np0005634017 python3.9[199360]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272008.7381124-557-33008473873412/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:50 np0005634017 python3.9[199513]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:50 np0005634017 python3.9[199637]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272009.9815147-557-9294026172440/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:51 np0005634017 python3.9[199790]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:46:52 np0005634017 python3.9[199916]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772272011.0905232-557-159928305220312/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:52 np0005634017 python3.9[200069]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 28 04:46:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:53 np0005634017 python3.9[200223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:54 np0005634017 python3.9[200376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:54 np0005634017 python3.9[200529]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:55 np0005634017 python3.9[200682]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:56 np0005634017 python3.9[200835]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:46:56 np0005634017 python3.9[200988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:57 np0005634017 python3.9[201141]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:57 np0005634017 python3.9[201294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:46:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:46:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:46:57.825 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:46:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:46:57.826 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:46:58 np0005634017 python3.9[201447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:58 np0005634017 podman[201572]: 2026-02-28 09:46:58.784856307 +0000 UTC m=+0.050401647 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:46:58 np0005634017 podman[201571]: 2026-02-28 09:46:58.844953433 +0000 UTC m=+0.109659640 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 04:46:58 np0005634017 python3.9[201636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:46:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:46:59 np0005634017 python3.9[201796]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:00 np0005634017 python3.9[201949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:47:00 np0005634017 python3.9[202102]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:01 np0005634017 python3.9[202255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:02 np0005634017 python3.9[202408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:02 np0005634017 python3.9[202532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272021.638175-778-96076189196010/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:03 np0005634017 python3.9[202685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:03 np0005634017 python3.9[202809]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272022.7138214-778-102138297630139/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:04 np0005634017 python3.9[202962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:04 np0005634017 python3.9[203086]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272023.857517-778-84101522213963/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:05 np0005634017 python3.9[203239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:06 np0005634017 python3.9[203363]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272025.1062152-778-24577225124885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:06 np0005634017 python3.9[203516]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:07 np0005634017 python3.9[203640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272026.2644632-778-152128726782026/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:08 np0005634017 python3.9[203793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:08 np0005634017 python3.9[203917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272027.5366821-778-138237188522519/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:09 np0005634017 python3.9[204070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:10 np0005634017 python3.9[204194]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272028.8254323-778-90605870078698/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:10 np0005634017 python3.9[204347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:11 np0005634017 python3.9[204471]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272030.1764233-778-65386139606758/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:11 np0005634017 python3.9[204624]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.022265) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272032022337, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3580749, "memory_usage": 3631680, "flush_reason": "Manual Compaction"}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272032187897, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3504541, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9692, "largest_seqno": 11729, "table_properties": {"data_size": 3495248, "index_size": 5915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17751, "raw_average_key_size": 19, "raw_value_size": 3476870, "raw_average_value_size": 3808, "num_data_blocks": 269, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271797, "oldest_key_time": 1772271797, "file_creation_time": 1772272032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 165661 microseconds, and 6499 cpu microseconds.
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.187935) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3504541 bytes OK
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.187954) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.211866) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.211895) EVENT_LOG_v1 {"time_micros": 1772272032211888, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.211915) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3572256, prev total WAL file size 3572256, number of live WAL files 2.
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.212864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3422KB)], [26(6044KB)]
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272032212892, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9694173, "oldest_snapshot_seqno": -1}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3699 keys, 8145382 bytes, temperature: kUnknown
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272032296265, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8145382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8117056, "index_size": 17976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88772, "raw_average_key_size": 23, "raw_value_size": 8046732, "raw_average_value_size": 2175, "num_data_blocks": 779, "num_entries": 3699, "num_filter_entries": 3699, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772272032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.296573) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8145382 bytes
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.300276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.2 rd, 97.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4213, records dropped: 514 output_compression: NoCompression
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.300308) EVENT_LOG_v1 {"time_micros": 1772272032300294, "job": 10, "event": "compaction_finished", "compaction_time_micros": 83462, "compaction_time_cpu_micros": 16966, "output_level": 6, "num_output_files": 1, "total_output_size": 8145382, "num_input_records": 4213, "num_output_records": 3699, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272032300920, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272032301869, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.212808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.301909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.301914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.301917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.301920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:47:12 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:47:12.301923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:47:12 np0005634017 python3.9[204748]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272031.3927608-778-7039734827032/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:13 np0005634017 python3.9[204901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:13 np0005634017 python3.9[205025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272032.5965142-778-40786674927038/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:14 np0005634017 python3.9[205178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:14 np0005634017 python3.9[205302]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272033.7758005-778-95862206896535/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:15 np0005634017 auditd[718]: Audit daemon rotating log files
Feb 28 04:47:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:15 np0005634017 python3.9[205505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:47:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:47:15 np0005634017 python3.9[205710]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272035.0113647-778-268599972936120/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.058572467 +0000 UTC m=+0.050005456 container create f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:47:16 np0005634017 systemd[1]: Started libpod-conmon-f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4.scope.
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.038444755 +0000 UTC m=+0.029877634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:47:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.149825972 +0000 UTC m=+0.141258901 container init f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.155767648 +0000 UTC m=+0.147200507 container start f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.158740141 +0000 UTC m=+0.150173030 container attach f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 04:47:16 np0005634017 youthful_einstein[205787]: 167 167
Feb 28 04:47:16 np0005634017 systemd[1]: libpod-f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4.scope: Deactivated successfully.
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.16231345 +0000 UTC m=+0.153746299 container died f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 04:47:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5be10d3cc3143ef194a5b6fe0c7cd2d9e855096e35bb1499150aa0b7e8ce7919-merged.mount: Deactivated successfully.
Feb 28 04:47:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:16 np0005634017 podman[205729]: 2026-02-28 09:47:16.207877211 +0000 UTC m=+0.199310060 container remove f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:47:16 np0005634017 systemd[1]: libpod-conmon-f01971bc628de187ac01e97ca97d26761f37538c5b2cac6ab07a7df868c75bf4.scope: Deactivated successfully.
Feb 28 04:47:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:47:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:47:16 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:47:16 np0005634017 podman[205887]: 2026-02-28 09:47:16.380361623 +0000 UTC m=+0.059008227 container create ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:47:16 np0005634017 systemd[1]: Started libpod-conmon-ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af.scope.
Feb 28 04:47:16 np0005634017 podman[205887]: 2026-02-28 09:47:16.349487261 +0000 UTC m=+0.028133915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:47:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:47:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f18440d55d83f0e865fb4da4c6349a3144ffe7a875fd93fa6a4f0a43e51e060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f18440d55d83f0e865fb4da4c6349a3144ffe7a875fd93fa6a4f0a43e51e060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f18440d55d83f0e865fb4da4c6349a3144ffe7a875fd93fa6a4f0a43e51e060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f18440d55d83f0e865fb4da4c6349a3144ffe7a875fd93fa6a4f0a43e51e060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f18440d55d83f0e865fb4da4c6349a3144ffe7a875fd93fa6a4f0a43e51e060/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:16 np0005634017 podman[205887]: 2026-02-28 09:47:16.499014632 +0000 UTC m=+0.177661266 container init ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:47:16 np0005634017 podman[205887]: 2026-02-28 09:47:16.506779369 +0000 UTC m=+0.185425973 container start ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 04:47:16 np0005634017 podman[205887]: 2026-02-28 09:47:16.509720481 +0000 UTC m=+0.188367105 container attach ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:47:16 np0005634017 python3.9[205930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:16 np0005634017 loving_haslett[205933]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:47:16 np0005634017 loving_haslett[205933]: --> All data devices are unavailable
Feb 28 04:47:17 np0005634017 systemd[1]: libpod-ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af.scope: Deactivated successfully.
Feb 28 04:47:17 np0005634017 podman[205887]: 2026-02-28 09:47:17.01264525 +0000 UTC m=+0.691291864 container died ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:47:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5f18440d55d83f0e865fb4da4c6349a3144ffe7a875fd93fa6a4f0a43e51e060-merged.mount: Deactivated successfully.
Feb 28 04:47:17 np0005634017 podman[205887]: 2026-02-28 09:47:17.054539848 +0000 UTC m=+0.733186452 container remove ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:47:17 np0005634017 systemd[1]: libpod-conmon-ad72aed338a902357e3b810c93698e7a1dd87c74a40dadb35fabefcb522be9af.scope: Deactivated successfully.
Feb 28 04:47:17 np0005634017 python3.9[206073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272036.1058447-778-203587181960286/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.463463284 +0000 UTC m=+0.043977178 container create 7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:47:17 np0005634017 systemd[1]: Started libpod-conmon-7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94.scope.
Feb 28 04:47:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.535716229 +0000 UTC m=+0.116230153 container init 7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.446771158 +0000 UTC m=+0.027285062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.544176545 +0000 UTC m=+0.124690429 container start 7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hypatia, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.547802256 +0000 UTC m=+0.128316200 container attach 7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:47:17 np0005634017 reverent_hypatia[206290]: 167 167
Feb 28 04:47:17 np0005634017 systemd[1]: libpod-7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94.scope: Deactivated successfully.
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.548574858 +0000 UTC m=+0.129088742 container died 7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:47:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-054f947d93db1ac18e27b7321ae7e97d04af14600468a5f47af32d5efa7edce4-merged.mount: Deactivated successfully.
Feb 28 04:47:17 np0005634017 podman[206249]: 2026-02-28 09:47:17.600726143 +0000 UTC m=+0.181240067 container remove 7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hypatia, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:47:17 np0005634017 systemd[1]: libpod-conmon-7d13b87852be932d7d6bbc02149820e9423bf29773a6c185cbe9d5a1ec842b94.scope: Deactivated successfully.
Feb 28 04:47:17 np0005634017 python3.9[206324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:17 np0005634017 podman[206345]: 2026-02-28 09:47:17.741056377 +0000 UTC m=+0.037859147 container create 59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:47:17 np0005634017 systemd[1]: Started libpod-conmon-59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396.scope.
Feb 28 04:47:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:47:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53d9b67f0256b7b53a932688cd1c8aa91570143c39c0d58f3641f2f099a4afe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53d9b67f0256b7b53a932688cd1c8aa91570143c39c0d58f3641f2f099a4afe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53d9b67f0256b7b53a932688cd1c8aa91570143c39c0d58f3641f2f099a4afe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53d9b67f0256b7b53a932688cd1c8aa91570143c39c0d58f3641f2f099a4afe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:17 np0005634017 podman[206345]: 2026-02-28 09:47:17.726719747 +0000 UTC m=+0.023522537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:47:17 np0005634017 podman[206345]: 2026-02-28 09:47:17.830824911 +0000 UTC m=+0.127627701 container init 59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:47:17 np0005634017 podman[206345]: 2026-02-28 09:47:17.839550614 +0000 UTC m=+0.136353414 container start 59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:47:17 np0005634017 podman[206345]: 2026-02-28 09:47:17.843555976 +0000 UTC m=+0.140358796 container attach 59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]: {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:    "0": [
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:        {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "devices": [
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "/dev/loop3"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            ],
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_name": "ceph_lv0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_size": "21470642176",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "name": "ceph_lv0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "tags": {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cluster_name": "ceph",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.crush_device_class": "",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.encrypted": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.objectstore": "bluestore",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osd_id": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.type": "block",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.vdo": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.with_tpm": "0"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            },
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "type": "block",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "vg_name": "ceph_vg0"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:        }
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:    ],
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:    "1": [
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:        {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "devices": [
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "/dev/loop4"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            ],
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_name": "ceph_lv1",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_size": "21470642176",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "name": "ceph_lv1",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "tags": {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cluster_name": "ceph",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.crush_device_class": "",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.encrypted": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.objectstore": "bluestore",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osd_id": "1",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.type": "block",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.vdo": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.with_tpm": "0"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            },
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "type": "block",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "vg_name": "ceph_vg1"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:        }
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:    ],
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:    "2": [
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:        {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "devices": [
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "/dev/loop5"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            ],
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_name": "ceph_lv2",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_size": "21470642176",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "name": "ceph_lv2",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "tags": {
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.cluster_name": "ceph",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.crush_device_class": "",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.encrypted": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.objectstore": "bluestore",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osd_id": "2",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.type": "block",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.vdo": "0",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:                "ceph.with_tpm": "0"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            },
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "type": "block",
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:            "vg_name": "ceph_vg2"
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:        }
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]:    ]
Feb 28 04:47:18 np0005634017 hopeful_euler[206384]: }
Feb 28 04:47:18 np0005634017 systemd[1]: libpod-59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396.scope: Deactivated successfully.
Feb 28 04:47:18 np0005634017 podman[206345]: 2026-02-28 09:47:18.108900198 +0000 UTC m=+0.405702998 container died 59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:47:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a53d9b67f0256b7b53a932688cd1c8aa91570143c39c0d58f3641f2f099a4afe-merged.mount: Deactivated successfully.
Feb 28 04:47:18 np0005634017 podman[206345]: 2026-02-28 09:47:18.161467764 +0000 UTC m=+0.458270564 container remove 59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 04:47:18 np0005634017 systemd[1]: libpod-conmon-59b2471605eaa23a01404a141be693d0bd17d305cc0937655019b5c9a7714396.scope: Deactivated successfully.
Feb 28 04:47:18 np0005634017 python3.9[206491]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272037.2933831-778-198370259077272/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.580157873 +0000 UTC m=+0.043144314 container create 447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 04:47:18 np0005634017 systemd[1]: Started libpod-conmon-447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c.scope.
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.561198644 +0000 UTC m=+0.024185075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:47:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.673427615 +0000 UTC m=+0.136414096 container init 447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_albattani, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.677713664 +0000 UTC m=+0.140700095 container start 447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 04:47:18 np0005634017 intelligent_albattani[206731]: 167 167
Feb 28 04:47:18 np0005634017 systemd[1]: libpod-447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c.scope: Deactivated successfully.
Feb 28 04:47:18 np0005634017 conmon[206731]: conmon 447b9200c278d0d1dc6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c.scope/container/memory.events
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.681793118 +0000 UTC m=+0.144779519 container attach 447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_albattani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.683047593 +0000 UTC m=+0.146034034 container died 447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:47:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5be8ec3b27c792e63a6e2c05efe917ecb90c23fc2f353a5f6f8a8a317ba18399-merged.mount: Deactivated successfully.
Feb 28 04:47:18 np0005634017 podman[206664]: 2026-02-28 09:47:18.725295742 +0000 UTC m=+0.188282183 container remove 447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:47:18 np0005634017 systemd[1]: libpod-conmon-447b9200c278d0d1dc6e01f299f67959da9d5035703f0b9fff5c1ccdebb5043c.scope: Deactivated successfully.
Feb 28 04:47:18 np0005634017 python3.9[206730]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:47:18 np0005634017 podman[206757]: 2026-02-28 09:47:18.887514466 +0000 UTC m=+0.042748693 container create 71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:47:18 np0005634017 systemd[1]: Started libpod-conmon-71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d.scope.
Feb 28 04:47:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:47:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c7f9183bfb3728c2ef534468888b4ba1b175f4d308a02b97571a7ccd418a1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c7f9183bfb3728c2ef534468888b4ba1b175f4d308a02b97571a7ccd418a1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c7f9183bfb3728c2ef534468888b4ba1b175f4d308a02b97571a7ccd418a1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c7f9183bfb3728c2ef534468888b4ba1b175f4d308a02b97571a7ccd418a1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:47:18 np0005634017 podman[206757]: 2026-02-28 09:47:18.965927034 +0000 UTC m=+0.121161311 container init 71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:47:18 np0005634017 podman[206757]: 2026-02-28 09:47:18.869137224 +0000 UTC m=+0.024371491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:47:18 np0005634017 podman[206757]: 2026-02-28 09:47:18.974670768 +0000 UTC m=+0.129905025 container start 71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_brown, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:47:18 np0005634017 podman[206757]: 2026-02-28 09:47:18.979220065 +0000 UTC m=+0.134454332 container attach 71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:47:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:19 np0005634017 lvm[206950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:47:19 np0005634017 lvm[206950]: VG ceph_vg0 finished
Feb 28 04:47:19 np0005634017 lvm[206959]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:47:19 np0005634017 lvm[206959]: VG ceph_vg1 finished
Feb 28 04:47:19 np0005634017 lvm[206978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:47:19 np0005634017 lvm[206978]: VG ceph_vg2 finished
Feb 28 04:47:19 np0005634017 determined_brown[206798]: {}
Feb 28 04:47:19 np0005634017 systemd[1]: libpod-71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d.scope: Deactivated successfully.
Feb 28 04:47:19 np0005634017 podman[206757]: 2026-02-28 09:47:19.787666695 +0000 UTC m=+0.942900922 container died 71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_brown, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 04:47:19 np0005634017 systemd[1]: libpod-71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d.scope: Consumed 1.158s CPU time.
Feb 28 04:47:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay-59c7f9183bfb3728c2ef534468888b4ba1b175f4d308a02b97571a7ccd418a1e-merged.mount: Deactivated successfully.
Feb 28 04:47:19 np0005634017 podman[206757]: 2026-02-28 09:47:19.827909848 +0000 UTC m=+0.983144075 container remove 71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:47:19 np0005634017 systemd[1]: libpod-conmon-71fe634663ce2aa0a9b053468b204d7f4fbeddffa8e938184bb81f5cbf7f539d.scope: Deactivated successfully.
Feb 28 04:47:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:47:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:47:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:47:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:47:19 np0005634017 python3.9[207009]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 28 04:47:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:47:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:47:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:21 np0005634017 dbus-broker-launch[804]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 28 04:47:21 np0005634017 python3.9[207205]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:22 np0005634017 python3.9[207358]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:22 np0005634017 python3.9[207511]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:23 np0005634017 python3.9[207664]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:24 np0005634017 python3.9[207817]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:24 np0005634017 python3.9[207970]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:25 np0005634017 python3.9[208123]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:26 np0005634017 python3.9[208276]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:26 np0005634017 python3.9[208429]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:27 np0005634017 python3.9[208582]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:28 np0005634017 python3.9[208735]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:47:28 np0005634017 systemd[1]: Reloading.
Feb 28 04:47:28 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:47:28 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:47:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:47:28
Feb 28 04:47:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:47:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:47:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', '.rgw.root', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 28 04:47:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:47:29 np0005634017 podman[208780]: 2026-02-28 09:47:29.141767297 +0000 UTC m=+0.074584582 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:47:29 np0005634017 podman[208779]: 2026-02-28 09:47:29.16982569 +0000 UTC m=+0.106435390 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 04:47:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:29 np0005634017 systemd[1]: Starting libvirt logging daemon socket...
Feb 28 04:47:29 np0005634017 systemd[1]: Listening on libvirt logging daemon socket.
Feb 28 04:47:29 np0005634017 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 28 04:47:29 np0005634017 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 28 04:47:29 np0005634017 systemd[1]: Starting libvirt logging daemon...
Feb 28 04:47:29 np0005634017 systemd[1]: Started libvirt logging daemon.
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:47:30 np0005634017 python3.9[208978]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:47:30 np0005634017 systemd[1]: Reloading.
Feb 28 04:47:30 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:47:30 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:47:30 np0005634017 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 28 04:47:30 np0005634017 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 28 04:47:30 np0005634017 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 28 04:47:30 np0005634017 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 28 04:47:30 np0005634017 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 28 04:47:30 np0005634017 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 28 04:47:30 np0005634017 systemd[1]: Starting libvirt nodedev daemon...
Feb 28 04:47:30 np0005634017 systemd[1]: Started libvirt nodedev daemon.
Feb 28 04:47:31 np0005634017 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 28 04:47:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:31 np0005634017 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 28 04:47:31 np0005634017 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 28 04:47:31 np0005634017 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 28 04:47:31 np0005634017 python3.9[209203]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:47:31 np0005634017 systemd[1]: Reloading.
Feb 28 04:47:31 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:47:31 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:47:31 np0005634017 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 28 04:47:32 np0005634017 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 28 04:47:32 np0005634017 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 28 04:47:32 np0005634017 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 28 04:47:32 np0005634017 systemd[1]: Starting libvirt proxy daemon...
Feb 28 04:47:32 np0005634017 systemd[1]: Started libvirt proxy daemon.
Feb 28 04:47:32 np0005634017 setroubleshoot[209077]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l eadd3250-6971-497e-9cbb-9192e185c856
Feb 28 04:47:32 np0005634017 setroubleshoot[209077]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb 28 04:47:32 np0005634017 setroubleshoot[209077]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l eadd3250-6971-497e-9cbb-9192e185c856
Feb 28 04:47:32 np0005634017 setroubleshoot[209077]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb 28 04:47:32 np0005634017 python3.9[209433]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:47:32 np0005634017 systemd[1]: Reloading.
Feb 28 04:47:32 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:47:32 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:47:33 np0005634017 systemd[1]: Listening on libvirt locking daemon socket.
Feb 28 04:47:33 np0005634017 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 28 04:47:33 np0005634017 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 28 04:47:33 np0005634017 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 28 04:47:33 np0005634017 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 28 04:47:33 np0005634017 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 28 04:47:33 np0005634017 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 28 04:47:33 np0005634017 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 28 04:47:33 np0005634017 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 28 04:47:33 np0005634017 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 28 04:47:33 np0005634017 systemd[1]: Starting libvirt QEMU daemon...
Feb 28 04:47:33 np0005634017 systemd[1]: Started libvirt QEMU daemon.
Feb 28 04:47:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:33 np0005634017 python3.9[209656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:47:33 np0005634017 systemd[1]: Reloading.
Feb 28 04:47:34 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:47:34 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:47:34 np0005634017 systemd[1]: Starting libvirt secret daemon socket...
Feb 28 04:47:34 np0005634017 systemd[1]: Listening on libvirt secret daemon socket.
Feb 28 04:47:34 np0005634017 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 28 04:47:34 np0005634017 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 28 04:47:34 np0005634017 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 28 04:47:34 np0005634017 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 28 04:47:34 np0005634017 systemd[1]: Starting libvirt secret daemon...
Feb 28 04:47:34 np0005634017 systemd[1]: Started libvirt secret daemon.
Feb 28 04:47:35 np0005634017 python3.9[209875]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:35 np0005634017 python3.9[210028]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 28 04:47:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:36 np0005634017 python3.9[210181]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:47:37 np0005634017 python3.9[210335]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 28 04:47:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:37 np0005634017 python3.9[210485]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:38 np0005634017 python3.9[210606]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272057.537666-1136-32957342598382/.source.xml follow=False _original_basename=secret.xml.j2 checksum=735e126d547ae1535750b912b136304b047213c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:39 np0005634017 python3.9[210759]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 8f528268-ea2d-5d7b-af45-49b405fed6de#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:47:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:39 np0005634017 python3.9[210921]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:47:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:47:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:42 np0005634017 python3.9[211387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:42 np0005634017 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 28 04:47:42 np0005634017 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 28 04:47:42 np0005634017 python3.9[211540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:43 np0005634017 python3.9[211664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272062.422223-1191-184189051424491/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:44 np0005634017 python3.9[211817]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:45 np0005634017 python3.9[211970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:45 np0005634017 python3.9[212049]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:46 np0005634017 python3.9[212202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:46 np0005634017 python3.9[212281]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6dmh08qe recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:47 np0005634017 python3.9[212434]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:47 np0005634017 python3.9[212513]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:48 np0005634017 python3.9[212666]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:47:49 np0005634017 python3[212820]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 28 04:47:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:49 np0005634017 python3.9[212973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:50 np0005634017 python3.9[213052]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:51 np0005634017 python3.9[213205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:51 np0005634017 python3.9[213331]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272070.5092876-1280-99578241579284/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:52 np0005634017 python3.9[213484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:52 np0005634017 python3.9[213563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:53 np0005634017 python3.9[213716]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:54 np0005634017 python3.9[213795]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:54 np0005634017 python3.9[213948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:47:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:55 np0005634017 python3.9[214074]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772272074.2913-1319-97893615559850/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:56 np0005634017 python3.9[214227]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:47:56 np0005634017 python3.9[214380]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:47:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:57 np0005634017 python3.9[214536]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:47:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:47:57.826 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:47:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:47:57.827 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:47:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:47:57.827 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:47:58 np0005634017 python3.9[214689]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:47:59 np0005634017 python3.9[214843]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:47:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:47:59 np0005634017 podman[214970]: 2026-02-28 09:47:59.467779678 +0000 UTC m=+0.062770647 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 04:47:59 np0005634017 podman[214969]: 2026-02-28 09:47:59.482177119 +0000 UTC m=+0.083039492 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller)
Feb 28 04:47:59 np0005634017 python3.9[215038]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:48:00 np0005634017 python3.9[215202]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:01 np0005634017 python3.9[215355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:48:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:01 np0005634017 python3.9[215479]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272080.589883-1391-231133686862534/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:02 np0005634017 python3.9[215632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:48:02 np0005634017 python3.9[215756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272081.772145-1406-39120355228944/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:03 np0005634017 python3.9[215909]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:48:04 np0005634017 python3.9[216033]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272083.0650911-1421-53332570212122/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:04 np0005634017 python3.9[216186]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:48:04 np0005634017 systemd[1]: Reloading.
Feb 28 04:48:05 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:48:05 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:48:05 np0005634017 systemd[1]: Reached target edpm_libvirt.target.
Feb 28 04:48:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:06 np0005634017 python3.9[216385]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 28 04:48:06 np0005634017 systemd[1]: Reloading.
Feb 28 04:48:06 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:48:06 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:48:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:06 np0005634017 systemd[1]: Reloading.
Feb 28 04:48:06 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:48:06 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:48:07 np0005634017 systemd[1]: session-48.scope: Deactivated successfully.
Feb 28 04:48:07 np0005634017 systemd[1]: session-48.scope: Consumed 3min 9.482s CPU time.
Feb 28 04:48:07 np0005634017 systemd-logind[815]: Session 48 logged out. Waiting for processes to exit.
Feb 28 04:48:07 np0005634017 systemd-logind[815]: Removed session 48.
Feb 28 04:48:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:12 np0005634017 systemd-logind[815]: New session 49 of user zuul.
Feb 28 04:48:12 np0005634017 systemd[1]: Started Session 49 of User zuul.
Feb 28 04:48:12 np0005634017 ceph-osd[87202]: bluestore.MempoolThread fragmentation_score=0.000139 took=0.000080s
Feb 28 04:48:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000057s
Feb 28 04:48:12 np0005634017 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000033s
Feb 28 04:48:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:13 np0005634017 python3.9[216651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:48:14 np0005634017 python3.9[216805]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:48:14 np0005634017 network[216822]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:48:14 np0005634017 network[216823]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:48:14 np0005634017 network[216824]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:48:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:19 np0005634017 python3.9[217098]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 28 04:48:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:20 np0005634017 python3.9[217183]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:48:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.055388558 +0000 UTC m=+0.061752260 container create 948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:48:21 np0005634017 systemd[1]: Started libpod-conmon-948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5.scope.
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.026801782 +0000 UTC m=+0.033165494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:48:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.150804943 +0000 UTC m=+0.157168705 container init 948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.156725757 +0000 UTC m=+0.163089439 container start 948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cray, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.159815903 +0000 UTC m=+0.166179675 container attach 948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cray, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:48:21 np0005634017 peaceful_cray[217346]: 167 167
Feb 28 04:48:21 np0005634017 systemd[1]: libpod-948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5.scope: Deactivated successfully.
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.161200442 +0000 UTC m=+0.167564144 container died 948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 04:48:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4467a2683e2c924213ff339f7075130e6d22c7c51c1d40e775d874a19829f093-merged.mount: Deactivated successfully.
Feb 28 04:48:21 np0005634017 podman[217329]: 2026-02-28 09:48:21.211972255 +0000 UTC m=+0.218335967 container remove 948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 04:48:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:21 np0005634017 systemd[1]: libpod-conmon-948710b90b38fe994cfb140024624250a5505050afee99a7735b237ffc8b6da5.scope: Deactivated successfully.
Feb 28 04:48:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:21 np0005634017 podman[217370]: 2026-02-28 09:48:21.399003579 +0000 UTC m=+0.052948185 container create 5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cori, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 04:48:21 np0005634017 systemd[1]: Started libpod-conmon-5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982.scope.
Feb 28 04:48:21 np0005634017 podman[217370]: 2026-02-28 09:48:21.374728993 +0000 UTC m=+0.028673649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:48:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:48:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b96331a04418f3d32012d1c9bc4ce998f4357990075f1557cf6756e8081fce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b96331a04418f3d32012d1c9bc4ce998f4357990075f1557cf6756e8081fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b96331a04418f3d32012d1c9bc4ce998f4357990075f1557cf6756e8081fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b96331a04418f3d32012d1c9bc4ce998f4357990075f1557cf6756e8081fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b96331a04418f3d32012d1c9bc4ce998f4357990075f1557cf6756e8081fce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:21 np0005634017 podman[217370]: 2026-02-28 09:48:21.541129323 +0000 UTC m=+0.195073939 container init 5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cori, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:48:21 np0005634017 podman[217370]: 2026-02-28 09:48:21.547405318 +0000 UTC m=+0.201349924 container start 5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cori, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:48:21 np0005634017 podman[217370]: 2026-02-28 09:48:21.551232454 +0000 UTC m=+0.205177030 container attach 5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cori, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:48:21 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:48:21 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:48:21 np0005634017 magical_cori[217387]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:48:21 np0005634017 magical_cori[217387]: --> All data devices are unavailable
Feb 28 04:48:21 np0005634017 systemd[1]: libpod-5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982.scope: Deactivated successfully.
Feb 28 04:48:21 np0005634017 podman[217370]: 2026-02-28 09:48:21.953925188 +0000 UTC m=+0.607869784 container died 5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 04:48:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-96b96331a04418f3d32012d1c9bc4ce998f4357990075f1557cf6756e8081fce-merged.mount: Deactivated successfully.
Feb 28 04:48:22 np0005634017 podman[217370]: 2026-02-28 09:48:22.010128212 +0000 UTC m=+0.664072818 container remove 5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 04:48:22 np0005634017 systemd[1]: libpod-conmon-5c8b9f60aeedd2e61722144f3c15a0c11d077a1a3d77602af20a93acc7835982.scope: Deactivated successfully.
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.432376511 +0000 UTC m=+0.041134285 container create 19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 04:48:22 np0005634017 systemd[1]: Started libpod-conmon-19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b.scope.
Feb 28 04:48:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.413050474 +0000 UTC m=+0.021808318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.510917857 +0000 UTC m=+0.119675691 container init 19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.516910473 +0000 UTC m=+0.125668257 container start 19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 04:48:22 np0005634017 lucid_mestorf[217496]: 167 167
Feb 28 04:48:22 np0005634017 systemd[1]: libpod-19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b.scope: Deactivated successfully.
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.520772591 +0000 UTC m=+0.129530445 container attach 19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.522196031 +0000 UTC m=+0.130953875 container died 19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:48:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a8171d3fbf65185a78ca5d21acfa1c7f2d0d104745ec88bee73f5199d530581b-merged.mount: Deactivated successfully.
Feb 28 04:48:22 np0005634017 podman[217479]: 2026-02-28 09:48:22.568241522 +0000 UTC m=+0.176999296 container remove 19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:48:22 np0005634017 systemd[1]: libpod-conmon-19c5edcd250a0edae2145cab3d2ec8ead0f2a067001a320b2dd2356df5b31a7b.scope: Deactivated successfully.
Feb 28 04:48:22 np0005634017 podman[217520]: 2026-02-28 09:48:22.702157858 +0000 UTC m=+0.032702171 container create b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:48:22 np0005634017 systemd[1]: Started libpod-conmon-b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9.scope.
Feb 28 04:48:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:48:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495ad5d41373e6f680d7bd64dc1a4b26bd7a3dad23b723d892598f880b283bd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495ad5d41373e6f680d7bd64dc1a4b26bd7a3dad23b723d892598f880b283bd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495ad5d41373e6f680d7bd64dc1a4b26bd7a3dad23b723d892598f880b283bd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495ad5d41373e6f680d7bd64dc1a4b26bd7a3dad23b723d892598f880b283bd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:22 np0005634017 podman[217520]: 2026-02-28 09:48:22.689359982 +0000 UTC m=+0.019904315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:48:22 np0005634017 podman[217520]: 2026-02-28 09:48:22.789315593 +0000 UTC m=+0.119859936 container init b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:48:22 np0005634017 podman[217520]: 2026-02-28 09:48:22.798405026 +0000 UTC m=+0.128949379 container start b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:48:22 np0005634017 podman[217520]: 2026-02-28 09:48:22.805446052 +0000 UTC m=+0.135990375 container attach b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]: {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:    "0": [
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:        {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "devices": [
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "/dev/loop3"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            ],
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_name": "ceph_lv0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_size": "21470642176",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "name": "ceph_lv0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "tags": {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cluster_name": "ceph",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.crush_device_class": "",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.encrypted": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.objectstore": "bluestore",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osd_id": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.type": "block",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.vdo": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.with_tpm": "0"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            },
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "type": "block",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "vg_name": "ceph_vg0"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:        }
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:    ],
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:    "1": [
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:        {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "devices": [
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "/dev/loop4"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            ],
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_name": "ceph_lv1",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_size": "21470642176",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "name": "ceph_lv1",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "tags": {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cluster_name": "ceph",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.crush_device_class": "",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.encrypted": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.objectstore": "bluestore",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osd_id": "1",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.type": "block",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.vdo": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.with_tpm": "0"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            },
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "type": "block",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "vg_name": "ceph_vg1"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:        }
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:    ],
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:    "2": [
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:        {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "devices": [
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "/dev/loop5"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            ],
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_name": "ceph_lv2",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_size": "21470642176",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "name": "ceph_lv2",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "tags": {
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.cluster_name": "ceph",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.crush_device_class": "",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.encrypted": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.objectstore": "bluestore",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osd_id": "2",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.type": "block",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.vdo": "0",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:                "ceph.with_tpm": "0"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            },
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "type": "block",
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:            "vg_name": "ceph_vg2"
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:        }
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]:    ]
Feb 28 04:48:23 np0005634017 quirky_goldberg[217537]: }
Feb 28 04:48:23 np0005634017 systemd[1]: libpod-b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9.scope: Deactivated successfully.
Feb 28 04:48:23 np0005634017 conmon[217537]: conmon b7574a81e6639339e547 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9.scope/container/memory.events
Feb 28 04:48:23 np0005634017 podman[217520]: 2026-02-28 09:48:23.117276609 +0000 UTC m=+0.447820962 container died b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 04:48:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-495ad5d41373e6f680d7bd64dc1a4b26bd7a3dad23b723d892598f880b283bd2-merged.mount: Deactivated successfully.
Feb 28 04:48:23 np0005634017 podman[217520]: 2026-02-28 09:48:23.159767281 +0000 UTC m=+0.490311604 container remove b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 04:48:23 np0005634017 systemd[1]: libpod-conmon-b7574a81e6639339e5474b1e81924a28c1f4eb2eda5f041c16d20f9c696085a9.scope: Deactivated successfully.
Feb 28 04:48:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.602111519 +0000 UTC m=+0.043559163 container create bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goodall, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:48:23 np0005634017 systemd[1]: Started libpod-conmon-bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26.scope.
Feb 28 04:48:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.582501754 +0000 UTC m=+0.023949428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.68766585 +0000 UTC m=+0.129113574 container init bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.697716279 +0000 UTC m=+0.139163913 container start bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.701043002 +0000 UTC m=+0.142490676 container attach bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:48:23 np0005634017 clever_goodall[217639]: 167 167
Feb 28 04:48:23 np0005634017 systemd[1]: libpod-bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26.scope: Deactivated successfully.
Feb 28 04:48:23 np0005634017 conmon[217639]: conmon bef8382cbc6cfbbc4310 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26.scope/container/memory.events
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.707033479 +0000 UTC m=+0.148481173 container died bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:48:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1f7f2fef70ed8ae211fdda986126fcb76c6eda4a278d06a860f54d262b8c39b3-merged.mount: Deactivated successfully.
Feb 28 04:48:23 np0005634017 podman[217622]: 2026-02-28 09:48:23.767567343 +0000 UTC m=+0.209014987 container remove bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goodall, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:48:23 np0005634017 systemd[1]: libpod-conmon-bef8382cbc6cfbbc431089e3904e2d28fb49a32f9b11474355d181bd6de2be26.scope: Deactivated successfully.
Feb 28 04:48:23 np0005634017 podman[217665]: 2026-02-28 09:48:23.926906427 +0000 UTC m=+0.049320434 container create 56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:48:23 np0005634017 systemd[1]: Started libpod-conmon-56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3.scope.
Feb 28 04:48:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:48:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15affd9a123e30fcc26056a7d4b4e008e76922f57cf7dfeeb3a7992a7a8555f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15affd9a123e30fcc26056a7d4b4e008e76922f57cf7dfeeb3a7992a7a8555f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15affd9a123e30fcc26056a7d4b4e008e76922f57cf7dfeeb3a7992a7a8555f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15affd9a123e30fcc26056a7d4b4e008e76922f57cf7dfeeb3a7992a7a8555f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:48:24 np0005634017 podman[217665]: 2026-02-28 09:48:23.909611145 +0000 UTC m=+0.032025162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:48:24 np0005634017 podman[217665]: 2026-02-28 09:48:24.013299611 +0000 UTC m=+0.135713648 container init 56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 04:48:24 np0005634017 podman[217665]: 2026-02-28 09:48:24.020514711 +0000 UTC m=+0.142928758 container start 56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:48:24 np0005634017 podman[217665]: 2026-02-28 09:48:24.025920532 +0000 UTC m=+0.148334579 container attach 56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:48:24 np0005634017 lvm[217761]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:48:24 np0005634017 lvm[217761]: VG ceph_vg1 finished
Feb 28 04:48:24 np0005634017 lvm[217758]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:48:24 np0005634017 lvm[217758]: VG ceph_vg0 finished
Feb 28 04:48:24 np0005634017 lvm[217763]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:48:24 np0005634017 lvm[217763]: VG ceph_vg2 finished
Feb 28 04:48:24 np0005634017 relaxed_sutherland[217682]: {}
Feb 28 04:48:24 np0005634017 systemd[1]: libpod-56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3.scope: Deactivated successfully.
Feb 28 04:48:24 np0005634017 systemd[1]: libpod-56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3.scope: Consumed 1.120s CPU time.
Feb 28 04:48:24 np0005634017 podman[217665]: 2026-02-28 09:48:24.806143551 +0000 UTC m=+0.928557558 container died 56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:48:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-15affd9a123e30fcc26056a7d4b4e008e76922f57cf7dfeeb3a7992a7a8555f0-merged.mount: Deactivated successfully.
Feb 28 04:48:24 np0005634017 podman[217665]: 2026-02-28 09:48:24.859828555 +0000 UTC m=+0.982242592 container remove 56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 04:48:24 np0005634017 systemd[1]: libpod-conmon-56a74ed1d59923a7d4eb0cf539e7fab70c58084c321371eaf8cf97c96286def3.scope: Deactivated successfully.
Feb 28 04:48:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:48:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:48:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:48:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:48:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:48:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:48:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:26 np0005634017 python3.9[217955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:48:27 np0005634017 python3.9[218108]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:48:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:28 np0005634017 python3.9[218262]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:48:28 np0005634017 python3.9[218415]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:48:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:48:28
Feb 28 04:48:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:48:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:48:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Feb 28 04:48:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:48:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:29 np0005634017 python3.9[218569]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:48:29 np0005634017 podman[218572]: 2026-02-28 09:48:29.633933863 +0000 UTC m=+0.060663479 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:48:29 np0005634017 podman[218570]: 2026-02-28 09:48:29.674623645 +0000 UTC m=+0.100809596 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Feb 28 04:48:30 np0005634017 python3.9[218740]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272109.0380821-90-121890269451300/.source.iscsi _original_basename=.kvdm48v4 follow=False checksum=94da68bf44edd98d40e6fa7a99eebba4c7c7a8c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:48:31 np0005634017 python3.9[218893]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:31 np0005634017 python3.9[219046]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:33 np0005634017 python3.9[219199]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:48:33 np0005634017 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 28 04:48:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:34 np0005634017 python3.9[219356]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:48:34 np0005634017 systemd[1]: Reloading.
Feb 28 04:48:34 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:48:34 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:48:34 np0005634017 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 28 04:48:34 np0005634017 systemd[1]: Starting Open-iSCSI...
Feb 28 04:48:34 np0005634017 kernel: Loading iSCSI transport class v2.0-870.
Feb 28 04:48:34 np0005634017 systemd[1]: Started Open-iSCSI.
Feb 28 04:48:34 np0005634017 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb 28 04:48:34 np0005634017 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb 28 04:48:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:35 np0005634017 python3.9[219563]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:48:35 np0005634017 network[219580]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:48:35 np0005634017 network[219581]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:48:35 np0005634017 network[219582]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:48:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:39 np0005634017 python3.9[219856]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:48:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:48:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:48:41 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:48:41 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:48:41 np0005634017 systemd[1]: Reloading.
Feb 28 04:48:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:41 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:48:41 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:48:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:41 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:48:41 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:48:41 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:48:41 np0005634017 systemd[1]: run-r837198a4040549c2a4c5aee08640daf8.service: Deactivated successfully.
Feb 28 04:48:42 np0005634017 python3.9[220190]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 28 04:48:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:43 np0005634017 python3.9[220343]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 28 04:48:44 np0005634017 python3.9[220500]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:48:45 np0005634017 python3.9[220624]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272124.066148-178-98281478477107/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:45 np0005634017 python3.9[220777]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:46 np0005634017 python3.9[220930]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:48:46 np0005634017 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 28 04:48:46 np0005634017 systemd[1]: Stopped Load Kernel Modules.
Feb 28 04:48:46 np0005634017 systemd[1]: Stopping Load Kernel Modules...
Feb 28 04:48:46 np0005634017 systemd[1]: Starting Load Kernel Modules...
Feb 28 04:48:47 np0005634017 systemd[1]: Finished Load Kernel Modules.
Feb 28 04:48:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:47 np0005634017 python3.9[221087]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:48:48 np0005634017 python3.9[221241]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:48:49 np0005634017 python3.9[221394]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:48:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:49 np0005634017 python3.9[221518]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272128.7517629-229-204440110697745/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:50 np0005634017 python3.9[221671]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:48:51 np0005634017 python3.9[221825]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:52 np0005634017 python3.9[221978]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:53 np0005634017 python3.9[222131]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:53 np0005634017 python3.9[222284]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:54 np0005634017 python3.9[222437]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:55 np0005634017 python3.9[222590]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:55 np0005634017 python3.9[222743]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:48:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:48:56 np0005634017 python3.9[222896]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:48:57 np0005634017 python3.9[223051]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:48:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:48:57.827 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:48:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:48:57.828 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:48:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:48:57.829 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:48:58 np0005634017 python3.9[223205]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:48:58 np0005634017 systemd[1]: Listening on multipathd control socket.
Feb 28 04:48:58 np0005634017 python3.9[223362]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:48:58 np0005634017 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 28 04:48:58 np0005634017 udevadm[223367]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 28 04:48:59 np0005634017 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 28 04:48:59 np0005634017 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 28 04:48:59 np0005634017 multipathd[223370]: --------start up--------
Feb 28 04:48:59 np0005634017 multipathd[223370]: read /etc/multipath.conf
Feb 28 04:48:59 np0005634017 multipathd[223370]: path checkers start up
Feb 28 04:48:59 np0005634017 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 28 04:48:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:48:59 np0005634017 podman[223503]: 2026-02-28 09:48:59.814169754 +0000 UTC m=+0.076439231 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 04:48:59 np0005634017 podman[223502]: 2026-02-28 09:48:59.8474572 +0000 UTC m=+0.110078177 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260223, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:00 np0005634017 python3.9[223568]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 28 04:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:49:00 np0005634017 python3.9[223728]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 28 04:49:00 np0005634017 kernel: Key type psk registered
Feb 28 04:49:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:01 np0005634017 python3.9[223892]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:49:02 np0005634017 python3.9[224016]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272141.0696688-359-99767331255390/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:02 np0005634017 python3.9[224169]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:03 np0005634017 python3.9[224322]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:49:03 np0005634017 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 28 04:49:03 np0005634017 systemd[1]: Stopped Load Kernel Modules.
Feb 28 04:49:03 np0005634017 systemd[1]: Stopping Load Kernel Modules...
Feb 28 04:49:03 np0005634017 systemd[1]: Starting Load Kernel Modules...
Feb 28 04:49:03 np0005634017 systemd[1]: Finished Load Kernel Modules.
Feb 28 04:49:04 np0005634017 python3.9[224479]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 28 04:49:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:06 np0005634017 systemd[1]: Reloading.
Feb 28 04:49:06 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:49:06 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:49:07 np0005634017 systemd[1]: Reloading.
Feb 28 04:49:07 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:49:07 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:49:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:07 np0005634017 systemd-logind[815]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 28 04:49:07 np0005634017 systemd-logind[815]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 28 04:49:07 np0005634017 lvm[224606]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:49:07 np0005634017 lvm[224606]: VG ceph_vg2 finished
Feb 28 04:49:07 np0005634017 lvm[224607]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:49:07 np0005634017 lvm[224607]: VG ceph_vg0 finished
Feb 28 04:49:07 np0005634017 lvm[224608]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:49:07 np0005634017 lvm[224608]: VG ceph_vg1 finished
Feb 28 04:49:07 np0005634017 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 28 04:49:07 np0005634017 systemd[1]: Starting man-db-cache-update.service...
Feb 28 04:49:07 np0005634017 systemd[1]: Reloading.
Feb 28 04:49:07 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:49:07 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:49:07 np0005634017 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 28 04:49:08 np0005634017 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 28 04:49:08 np0005634017 systemd[1]: Finished man-db-cache-update.service.
Feb 28 04:49:08 np0005634017 systemd[1]: man-db-cache-update.service: Consumed 1.392s CPU time.
Feb 28 04:49:08 np0005634017 systemd[1]: run-r0b5911f4290a454e8844a502b91ff955.service: Deactivated successfully.
Feb 28 04:49:09 np0005634017 python3.9[225982]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:49:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:09 np0005634017 systemd[1]: Stopping Open-iSCSI...
Feb 28 04:49:09 np0005634017 iscsid[219403]: iscsid shutting down.
Feb 28 04:49:09 np0005634017 systemd[1]: iscsid.service: Deactivated successfully.
Feb 28 04:49:09 np0005634017 systemd[1]: Stopped Open-iSCSI.
Feb 28 04:49:09 np0005634017 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 28 04:49:09 np0005634017 systemd[1]: Starting Open-iSCSI...
Feb 28 04:49:09 np0005634017 systemd[1]: Started Open-iSCSI.
Feb 28 04:49:10 np0005634017 python3.9[226140]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:49:10 np0005634017 multipathd[223370]: exit (signal)
Feb 28 04:49:10 np0005634017 multipathd[223370]: --------shut down-------
Feb 28 04:49:10 np0005634017 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 28 04:49:10 np0005634017 systemd[1]: multipathd.service: Deactivated successfully.
Feb 28 04:49:10 np0005634017 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 28 04:49:10 np0005634017 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 28 04:49:10 np0005634017 multipathd[226147]: --------start up--------
Feb 28 04:49:10 np0005634017 multipathd[226147]: read /etc/multipath.conf
Feb 28 04:49:10 np0005634017 multipathd[226147]: path checkers start up
Feb 28 04:49:10 np0005634017 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 28 04:49:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:11 np0005634017 python3.9[226305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 28 04:49:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:12 np0005634017 python3.9[226462]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:13 np0005634017 python3.9[226615]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:49:13 np0005634017 systemd[1]: Reloading.
Feb 28 04:49:13 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:49:13 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:49:14 np0005634017 python3.9[226806]: ansible-ansible.builtin.service_facts Invoked
Feb 28 04:49:14 np0005634017 network[226823]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 28 04:49:14 np0005634017 network[226824]: 'network-scripts' will be removed from distribution in near future.
Feb 28 04:49:14 np0005634017 network[226825]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 28 04:49:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:19 np0005634017 python3.9[227100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:20 np0005634017 python3.9[227254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:20 np0005634017 python3.9[227408]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:21 np0005634017 python3.9[227562]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:22 np0005634017 python3.9[227716]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:23 np0005634017 python3.9[227870]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:23 np0005634017 python3.9[228024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:24 np0005634017 python3.9[228178]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:49:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:25 np0005634017 python3.9[228397]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:49:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.052780099 +0000 UTC m=+0.049856894 container create 006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:26 np0005634017 systemd[1]: Started libpod-conmon-006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859.scope.
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.031686406 +0000 UTC m=+0.028763191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:49:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.145158438 +0000 UTC m=+0.142235223 container init 006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.1527105 +0000 UTC m=+0.149787265 container start 006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.156229899 +0000 UTC m=+0.153306694 container attach 006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 04:49:26 np0005634017 recursing_hoover[228642]: 167 167
Feb 28 04:49:26 np0005634017 systemd[1]: libpod-006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859.scope: Deactivated successfully.
Feb 28 04:49:26 np0005634017 conmon[228642]: conmon 006cee557ccafd16e9e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859.scope/container/memory.events
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.160457608 +0000 UTC m=+0.157534473 container died 006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hoover, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9f16015df447f00820dc898c06b98ea2cd97bdaad7e11d132959d84be8113441-merged.mount: Deactivated successfully.
Feb 28 04:49:26 np0005634017 podman[228599]: 2026-02-28 09:49:26.20461566 +0000 UTC m=+0.201692465 container remove 006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hoover, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:49:26 np0005634017 systemd[1]: libpod-conmon-006cee557ccafd16e9e903c1c0549af6ab7aabf3103dd09b368360221af91859.scope: Deactivated successfully.
Feb 28 04:49:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:26 np0005634017 podman[228669]: 2026-02-28 09:49:26.35395146 +0000 UTC m=+0.047415105 container create ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_einstein, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:49:26 np0005634017 python3.9[228645]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:26 np0005634017 systemd[1]: Started libpod-conmon-ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32.scope.
Feb 28 04:49:26 np0005634017 podman[228669]: 2026-02-28 09:49:26.336315184 +0000 UTC m=+0.029778829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:49:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:49:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5bd18ea478bdf3e7f0281f46e69e05bca95913d902af2bfe9950f891915a8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5bd18ea478bdf3e7f0281f46e69e05bca95913d902af2bfe9950f891915a8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5bd18ea478bdf3e7f0281f46e69e05bca95913d902af2bfe9950f891915a8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5bd18ea478bdf3e7f0281f46e69e05bca95913d902af2bfe9950f891915a8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5bd18ea478bdf3e7f0281f46e69e05bca95913d902af2bfe9950f891915a8c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:26 np0005634017 podman[228669]: 2026-02-28 09:49:26.456984178 +0000 UTC m=+0.150447843 container init ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:49:26 np0005634017 podman[228669]: 2026-02-28 09:49:26.464776267 +0000 UTC m=+0.158239932 container start ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_einstein, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 28 04:49:26 np0005634017 podman[228669]: 2026-02-28 09:49:26.469931742 +0000 UTC m=+0.163395407 container attach ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:26 np0005634017 thirsty_einstein[228688]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:49:26 np0005634017 thirsty_einstein[228688]: --> All data devices are unavailable
Feb 28 04:49:26 np0005634017 systemd[1]: libpod-ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32.scope: Deactivated successfully.
Feb 28 04:49:26 np0005634017 podman[228669]: 2026-02-28 09:49:26.961583093 +0000 UTC m=+0.655046758 container died ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:49:26 np0005634017 python3.9[228849]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fa5bd18ea478bdf3e7f0281f46e69e05bca95913d902af2bfe9950f891915a8c-merged.mount: Deactivated successfully.
Feb 28 04:49:27 np0005634017 podman[228669]: 2026-02-28 09:49:27.073045518 +0000 UTC m=+0.766509153 container remove ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_einstein, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 04:49:27 np0005634017 systemd[1]: libpod-conmon-ee4f9b3ee42ddbe659785939cfe98651c351f997f53794c194b077391d35ba32.scope: Deactivated successfully.
Feb 28 04:49:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.548148723 +0000 UTC m=+0.064699171 container create 1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 04:49:27 np0005634017 systemd[1]: Started libpod-conmon-1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa.scope.
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.507718625 +0000 UTC m=+0.024268983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:49:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.636840528 +0000 UTC m=+0.153390896 container init 1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shannon, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:49:27 np0005634017 python3.9[229072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.647726214 +0000 UTC m=+0.164276602 container start 1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 04:49:27 np0005634017 reverent_shannon[229101]: 167 167
Feb 28 04:49:27 np0005634017 systemd[1]: libpod-1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa.scope: Deactivated successfully.
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.654592177 +0000 UTC m=+0.171142535 container attach 1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.65505646 +0000 UTC m=+0.171606818 container died 1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shannon, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:49:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7bccdfe471565e9d85ebb145d5b477a251b253dfcd5ee8df372c85b1a0bcf325-merged.mount: Deactivated successfully.
Feb 28 04:49:27 np0005634017 podman[229085]: 2026-02-28 09:49:27.700114957 +0000 UTC m=+0.216665305 container remove 1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:27 np0005634017 systemd[1]: libpod-conmon-1ea52ab0fd31a30606958d6b09afcadc4166864e02359901b6ce4c15ce2a94fa.scope: Deactivated successfully.
Feb 28 04:49:27 np0005634017 podman[229167]: 2026-02-28 09:49:27.852221066 +0000 UTC m=+0.050866222 container create 95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_banzai, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 04:49:27 np0005634017 systemd[1]: Started libpod-conmon-95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86.scope.
Feb 28 04:49:27 np0005634017 podman[229167]: 2026-02-28 09:49:27.829995001 +0000 UTC m=+0.028640167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:49:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:49:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cbd8f29a89a95de9f53276a4c68840977b871de5cce0d3aebefd519bb6957dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cbd8f29a89a95de9f53276a4c68840977b871de5cce0d3aebefd519bb6957dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cbd8f29a89a95de9f53276a4c68840977b871de5cce0d3aebefd519bb6957dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cbd8f29a89a95de9f53276a4c68840977b871de5cce0d3aebefd519bb6957dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:27 np0005634017 podman[229167]: 2026-02-28 09:49:27.950975984 +0000 UTC m=+0.149621120 container init 95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_banzai, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:49:27 np0005634017 podman[229167]: 2026-02-28 09:49:27.957649512 +0000 UTC m=+0.156294628 container start 95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:49:27 np0005634017 podman[229167]: 2026-02-28 09:49:27.961911592 +0000 UTC m=+0.160556708 container attach 95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_banzai, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]: {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:    "0": [
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:        {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "devices": [
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "/dev/loop3"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            ],
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_name": "ceph_lv0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_size": "21470642176",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "name": "ceph_lv0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "tags": {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cluster_name": "ceph",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.crush_device_class": "",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.encrypted": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.objectstore": "bluestore",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osd_id": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.type": "block",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.vdo": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.with_tpm": "0"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            },
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "type": "block",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "vg_name": "ceph_vg0"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:        }
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:    ],
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:    "1": [
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:        {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "devices": [
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "/dev/loop4"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            ],
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_name": "ceph_lv1",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_size": "21470642176",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "name": "ceph_lv1",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "tags": {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cluster_name": "ceph",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.crush_device_class": "",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.encrypted": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.objectstore": "bluestore",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osd_id": "1",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.type": "block",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.vdo": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.with_tpm": "0"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            },
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "type": "block",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "vg_name": "ceph_vg1"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:        }
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:    ],
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:    "2": [
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:        {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "devices": [
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "/dev/loop5"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            ],
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_name": "ceph_lv2",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_size": "21470642176",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "name": "ceph_lv2",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "tags": {
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.cluster_name": "ceph",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.crush_device_class": "",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.encrypted": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.objectstore": "bluestore",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osd_id": "2",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.type": "block",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.vdo": "0",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:                "ceph.with_tpm": "0"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            },
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "type": "block",
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:            "vg_name": "ceph_vg2"
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:        }
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]:    ]
Feb 28 04:49:28 np0005634017 vibrant_banzai[229216]: }
Feb 28 04:49:28 np0005634017 systemd[1]: libpod-95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86.scope: Deactivated successfully.
Feb 28 04:49:28 np0005634017 conmon[229216]: conmon 95643bea4e28ce6861f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86.scope/container/memory.events
Feb 28 04:49:28 np0005634017 podman[229167]: 2026-02-28 09:49:28.236725472 +0000 UTC m=+0.435370618 container died 95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 04:49:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7cbd8f29a89a95de9f53276a4c68840977b871de5cce0d3aebefd519bb6957dd-merged.mount: Deactivated successfully.
Feb 28 04:49:28 np0005634017 podman[229167]: 2026-02-28 09:49:28.281541413 +0000 UTC m=+0.480186559 container remove 95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_banzai, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:49:28 np0005634017 python3.9[229297]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:28 np0005634017 systemd[1]: libpod-conmon-95643bea4e28ce6861f8f34e537c902192e8d8f0cfc2de1af57340da7f98fa86.scope: Deactivated successfully.
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.771060963 +0000 UTC m=+0.054796262 container create 694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bartik, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 04:49:28 np0005634017 systemd[1]: Started libpod-conmon-694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8.scope.
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.740301608 +0000 UTC m=+0.024036957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:49:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.869757729 +0000 UTC m=+0.153493008 container init 694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bartik, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.879532654 +0000 UTC m=+0.163267953 container start 694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.884328579 +0000 UTC m=+0.168063858 container attach 694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:49:28 np0005634017 systemd[1]: libpod-694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8.scope: Deactivated successfully.
Feb 28 04:49:28 np0005634017 sleepy_bartik[229544]: 167 167
Feb 28 04:49:28 np0005634017 conmon[229544]: conmon 694130b6007934a6cf8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8.scope/container/memory.events
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.886808669 +0000 UTC m=+0.170543958 container died 694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:49:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bc7ac883b1bc5b510a828db9bb2b5e7bf74b279a32469af2debe393dbec2d1c6-merged.mount: Deactivated successfully.
Feb 28 04:49:28 np0005634017 podman[229509]: 2026-02-28 09:49:28.931696902 +0000 UTC m=+0.215432171 container remove 694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:49:28 np0005634017 systemd[1]: libpod-conmon-694130b6007934a6cf8d88288986e66e63e306b8b756df6353bf0f1ca321ffb8.scope: Deactivated successfully.
Feb 28 04:49:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:49:28
Feb 28 04:49:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:49:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:49:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 28 04:49:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:49:29 np0005634017 python3.9[229541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:29 np0005634017 podman[229569]: 2026-02-28 09:49:29.114309599 +0000 UTC m=+0.062458008 container create 5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 04:49:29 np0005634017 systemd[1]: Started libpod-conmon-5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289.scope.
Feb 28 04:49:29 np0005634017 podman[229569]: 2026-02-28 09:49:29.082678599 +0000 UTC m=+0.030827018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:49:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:49:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8c72ffdc7ca08ba6aa84107255dda4b904b30ef12ffc1e74e7ea7374e96cc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8c72ffdc7ca08ba6aa84107255dda4b904b30ef12ffc1e74e7ea7374e96cc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8c72ffdc7ca08ba6aa84107255dda4b904b30ef12ffc1e74e7ea7374e96cc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8c72ffdc7ca08ba6aa84107255dda4b904b30ef12ffc1e74e7ea7374e96cc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:49:29 np0005634017 podman[229569]: 2026-02-28 09:49:29.244637855 +0000 UTC m=+0.192786294 container init 5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 04:49:29 np0005634017 podman[229569]: 2026-02-28 09:49:29.255692216 +0000 UTC m=+0.203840615 container start 5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_morse, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:49:29 np0005634017 podman[229569]: 2026-02-28 09:49:29.284617269 +0000 UTC m=+0.232765658 container attach 5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_morse, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:49:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:29 np0005634017 python3.9[229753]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:29 np0005634017 lvm[229899]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:49:29 np0005634017 lvm[229901]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:49:29 np0005634017 lvm[229901]: VG ceph_vg2 finished
Feb 28 04:49:29 np0005634017 lvm[229899]: VG ceph_vg1 finished
Feb 28 04:49:29 np0005634017 lvm[229897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:49:29 np0005634017 lvm[229897]: VG ceph_vg0 finished
Feb 28 04:49:30 np0005634017 podman[229863]: 2026-02-28 09:49:30.019662285 +0000 UTC m=+0.079037594 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 04:49:30 np0005634017 nostalgic_morse[229633]: {}
Feb 28 04:49:30 np0005634017 podman[229861]: 2026-02-28 09:49:30.055270037 +0000 UTC m=+0.113327629 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:49:30 np0005634017 systemd[1]: libpod-5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289.scope: Deactivated successfully.
Feb 28 04:49:30 np0005634017 systemd[1]: libpod-5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289.scope: Consumed 1.166s CPU time.
Feb 28 04:49:30 np0005634017 podman[229569]: 2026-02-28 09:49:30.066845522 +0000 UTC m=+1.014993941 container died 5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 04:49:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bb8c72ffdc7ca08ba6aa84107255dda4b904b30ef12ffc1e74e7ea7374e96cc3-merged.mount: Deactivated successfully.
Feb 28 04:49:30 np0005634017 podman[229569]: 2026-02-28 09:49:30.11473807 +0000 UTC m=+1.062886469 container remove 5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:49:30 np0005634017 systemd[1]: libpod-conmon-5f5f1f642abe7b86496c7b332bcc938dfa859bac68f11d310f8303c1fa0ed289.scope: Deactivated successfully.
Feb 28 04:49:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:49:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:49:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:49:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:49:30 np0005634017 python3.9[230054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:49:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:49:30 np0005634017 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 28 04:49:31 np0005634017 python3.9[230210]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:31 np0005634017 python3.9[230363]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:32 np0005634017 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 28 04:49:32 np0005634017 python3.9[230517]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:33 np0005634017 python3.9[230670]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:33 np0005634017 python3.9[230823]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:34 np0005634017 python3.9[230976]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:34 np0005634017 python3.9[231129]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:35 np0005634017 python3.9[231282]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:49:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:36 np0005634017 python3.9[231435]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:37 np0005634017 python3.9[231587]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 28 04:49:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:38 np0005634017 python3.9[231740]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:49:38 np0005634017 systemd[1]: Reloading.
Feb 28 04:49:38 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:49:38 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:49:39 np0005634017 python3.9[231935]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:39 np0005634017 python3.9[232089]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:40 np0005634017 python3.9[232243]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:49:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:49:41 np0005634017 python3.9[232397]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:41 np0005634017 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 28 04:49:41 np0005634017 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 28 04:49:41 np0005634017 python3.9[232553]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:42 np0005634017 python3.9[232707]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:43 np0005634017 python3.9[232861]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:43 np0005634017 python3.9[233015]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 28 04:49:45 np0005634017 python3.9[233169]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:45 np0005634017 python3.9[233322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:46 np0005634017 python3.9[233475]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:47 np0005634017 python3.9[233628]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:47 np0005634017 python3.9[233781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:48 np0005634017 python3.9[233934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:49 np0005634017 python3.9[234087]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:49 np0005634017 python3.9[234240]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:50 np0005634017 python3.9[234393]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:49:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:49:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:57 np0005634017 python3.9[234546]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 28 04:49:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:49:57.827 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:49:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:49:57.829 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:49:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:49:57.829 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:49:58 np0005634017 python3.9[234700]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 28 04:49:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:49:59 np0005634017 python3.9[234859]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 28 04:50:00 np0005634017 podman[234862]: 2026-02-28 09:50:00.135387426 +0000 UTC m=+0.073491145 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 28 04:50:00 np0005634017 podman[234868]: 2026-02-28 09:50:00.184687954 +0000 UTC m=+0.090512291 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 04:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:50:01 np0005634017 systemd-logind[815]: New session 50 of user zuul.
Feb 28 04:50:01 np0005634017 systemd[1]: Started Session 50 of User zuul.
Feb 28 04:50:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:01 np0005634017 systemd[1]: session-50.scope: Deactivated successfully.
Feb 28 04:50:01 np0005634017 systemd-logind[815]: Session 50 logged out. Waiting for processes to exit.
Feb 28 04:50:01 np0005634017 systemd-logind[815]: Removed session 50.
Feb 28 04:50:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:02 np0005634017 python3.9[235091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:02 np0005634017 python3.9[235167]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:03 np0005634017 python3.9[235317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:03 np0005634017 python3.9[235438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772272202.6025546-1019-277088292024356/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:04 np0005634017 python3.9[235588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:04 np0005634017 python3.9[235709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772272203.9196472-1019-193137223236428/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:05 np0005634017 python3.9[235859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:06 np0005634017 python3.9[235980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772272205.070829-1019-106014540684958/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:06 np0005634017 python3.9[236130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:07 np0005634017 python3.9[236251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772272206.3158169-1073-89781987465085/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:08 np0005634017 python3.9[236404]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:08 np0005634017 python3.9[236557]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:09 np0005634017 python3.9[236710]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:10 np0005634017 python3.9[236863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:10 np0005634017 python3.9[236987]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1772272209.8365824-1112-154791246426129/.source _original_basename=.xsvs2nr0 follow=False checksum=57dd16a67fff17e563c5016d723dcc5081467ac8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 28 04:50:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:11 np0005634017 python3.9[237139]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:12 np0005634017 python3.9[237294]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:13 np0005634017 python3.9[237447]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:14 np0005634017 python3.9[237597]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 28 04:50:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:16 np0005634017 python3.9[238021]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 28 04:50:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 04:50:17 np0005634017 python3.9[238174]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 28 04:50:18 np0005634017 python3[238327]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 28 04:50:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 04:50:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 04:50:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 04:50:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 04:50:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 28 04:50:27 np0005634017 podman[238340]: 2026-02-28 09:50:27.586029045 +0000 UTC m=+8.970792905 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 28 04:50:27 np0005634017 podman[238422]: 2026-02-28 09:50:27.729216177 +0000 UTC m=+0.063848606 container create aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=nova_compute_init, container_name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 04:50:27 np0005634017 podman[238422]: 2026-02-28 09:50:27.69676031 +0000 UTC m=+0.031392819 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 28 04:50:27 np0005634017 python3[238327]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 28 04:50:28 np0005634017 python3.9[238613]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:50:28
Feb 28 04:50:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:50:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:50:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr']
Feb 28 04:50:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:50:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 04:50:29 np0005634017 python3.9[238765]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:50:30 np0005634017 podman[238900]: 2026-02-28 09:50:30.376851109 +0000 UTC m=+0.075220803 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:50:30 np0005634017 podman[238890]: 2026-02-28 09:50:30.451042803 +0000 UTC m=+0.150078746 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:50:30 np0005634017 python3.9[239004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:50:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:50:31 np0005634017 python3.9[239170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272229.878989-1238-191713104990077/.source.yaml _original_basename=.h_jqjnvl follow=False checksum=63c90ff58524568b99bdd44cfcbaaa76e8d77038 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.311542883 +0000 UTC m=+0.047557860 container create 7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 04:50:31 np0005634017 systemd[1]: Started libpod-conmon-7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a.scope.
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.284018214 +0000 UTC m=+0.020033211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:50:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.419922022 +0000 UTC m=+0.155937089 container init 7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.431392553 +0000 UTC m=+0.167407570 container start 7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.436147226 +0000 UTC m=+0.172162233 container attach 7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 04:50:31 np0005634017 blissful_booth[239273]: 167 167
Feb 28 04:50:31 np0005634017 systemd[1]: libpod-7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a.scope: Deactivated successfully.
Feb 28 04:50:31 np0005634017 conmon[239273]: conmon 7b2a3463e3f2dde89888 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a.scope/container/memory.events
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.441429673 +0000 UTC m=+0.177444680 container died 7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:50:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-21a5f3e5450559a2075d0d4112e783c621158572accb1aa3b12b368d52c82bbc-merged.mount: Deactivated successfully.
Feb 28 04:50:31 np0005634017 podman[239257]: 2026-02-28 09:50:31.487263335 +0000 UTC m=+0.223278312 container remove 7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Feb 28 04:50:31 np0005634017 systemd[1]: libpod-conmon-7b2a3463e3f2dde89888fe7e3487445077c80d0a44ff19821d895b484416ae4a.scope: Deactivated successfully.
Feb 28 04:50:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:50:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:50:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:50:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:31 np0005634017 podman[239323]: 2026-02-28 09:50:31.658807999 +0000 UTC m=+0.061230672 container create 7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_williamson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:50:31 np0005634017 systemd[1]: Started libpod-conmon-7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83.scope.
Feb 28 04:50:31 np0005634017 podman[239323]: 2026-02-28 09:50:31.636215008 +0000 UTC m=+0.038637761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:50:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a01100bccdcd438cb68c3986319efede263ee1c122d8f61e8f046d082e8585/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a01100bccdcd438cb68c3986319efede263ee1c122d8f61e8f046d082e8585/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a01100bccdcd438cb68c3986319efede263ee1c122d8f61e8f046d082e8585/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a01100bccdcd438cb68c3986319efede263ee1c122d8f61e8f046d082e8585/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a01100bccdcd438cb68c3986319efede263ee1c122d8f61e8f046d082e8585/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:31 np0005634017 podman[239323]: 2026-02-28 09:50:31.762979031 +0000 UTC m=+0.165401694 container init 7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 04:50:31 np0005634017 podman[239323]: 2026-02-28 09:50:31.784544934 +0000 UTC m=+0.186967577 container start 7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:50:31 np0005634017 podman[239323]: 2026-02-28 09:50:31.788668559 +0000 UTC m=+0.191091232 container attach 7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 04:50:32 np0005634017 python3.9[239450]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:32 np0005634017 peaceful_williamson[239390]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:50:32 np0005634017 peaceful_williamson[239390]: --> All data devices are unavailable
Feb 28 04:50:32 np0005634017 podman[239323]: 2026-02-28 09:50:32.322982593 +0000 UTC m=+0.725405266 container died 7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_williamson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 04:50:32 np0005634017 systemd[1]: libpod-7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83.scope: Deactivated successfully.
Feb 28 04:50:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-82a01100bccdcd438cb68c3986319efede263ee1c122d8f61e8f046d082e8585-merged.mount: Deactivated successfully.
Feb 28 04:50:32 np0005634017 podman[239323]: 2026-02-28 09:50:32.373335471 +0000 UTC m=+0.775758134 container remove 7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:50:32 np0005634017 systemd[1]: libpod-conmon-7ac183d72eec881b04f5e1867ef9dff5b356fd03919d436335618943c65efc83.scope: Deactivated successfully.
Feb 28 04:50:32 np0005634017 python3.9[239681]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 28 04:50:32 np0005634017 podman[239719]: 2026-02-28 09:50:32.945468252 +0000 UTC m=+0.103660468 container create 4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:50:32 np0005634017 podman[239719]: 2026-02-28 09:50:32.864835228 +0000 UTC m=+0.023027474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:50:32 np0005634017 systemd[1]: Started libpod-conmon-4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272.scope.
Feb 28 04:50:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:33 np0005634017 podman[239719]: 2026-02-28 09:50:33.07491384 +0000 UTC m=+0.233106086 container init 4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:50:33 np0005634017 podman[239719]: 2026-02-28 09:50:33.082730848 +0000 UTC m=+0.240923064 container start 4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_neumann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:50:33 np0005634017 stupefied_neumann[239788]: 167 167
Feb 28 04:50:33 np0005634017 systemd[1]: libpod-4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272.scope: Deactivated successfully.
Feb 28 04:50:33 np0005634017 podman[239719]: 2026-02-28 09:50:33.105826654 +0000 UTC m=+0.264018940 container attach 4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:50:33 np0005634017 podman[239719]: 2026-02-28 09:50:33.106538014 +0000 UTC m=+0.264730230 container died 4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:50:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6257c1703d4d980cb04139922dd8d8bf8227db182433c681d3a791135f7b5085-merged.mount: Deactivated successfully.
Feb 28 04:50:33 np0005634017 podman[239719]: 2026-02-28 09:50:33.329258779 +0000 UTC m=+0.487451025 container remove 4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:50:33 np0005634017 systemd[1]: libpod-conmon-4899c986ef5af410824cad7c03c1a8b065f0a1c56f91b964e77fc6e6ee938272.scope: Deactivated successfully.
Feb 28 04:50:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:33 np0005634017 python3.9[239883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:33 np0005634017 podman[239891]: 2026-02-28 09:50:33.501156184 +0000 UTC m=+0.045101442 container create 861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:50:33 np0005634017 systemd[1]: Started libpod-conmon-861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1.scope.
Feb 28 04:50:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4485d168e2f0b2c17d53383b27bee9de27d944272265b2e43237a04efc18c1ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4485d168e2f0b2c17d53383b27bee9de27d944272265b2e43237a04efc18c1ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4485d168e2f0b2c17d53383b27bee9de27d944272265b2e43237a04efc18c1ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4485d168e2f0b2c17d53383b27bee9de27d944272265b2e43237a04efc18c1ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:33 np0005634017 podman[239891]: 2026-02-28 09:50:33.483311245 +0000 UTC m=+0.027256503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:50:33 np0005634017 podman[239891]: 2026-02-28 09:50:33.592768844 +0000 UTC m=+0.136714092 container init 861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:50:33 np0005634017 podman[239891]: 2026-02-28 09:50:33.609275926 +0000 UTC m=+0.153221164 container start 861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 04:50:33 np0005634017 podman[239891]: 2026-02-28 09:50:33.613273617 +0000 UTC m=+0.157218875 container attach 861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]: {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:    "0": [
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:        {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "devices": [
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "/dev/loop3"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            ],
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_name": "ceph_lv0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_size": "21470642176",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "name": "ceph_lv0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "tags": {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cluster_name": "ceph",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.crush_device_class": "",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.encrypted": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.objectstore": "bluestore",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osd_id": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.type": "block",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.vdo": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.with_tpm": "0"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            },
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "type": "block",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "vg_name": "ceph_vg0"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:        }
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:    ],
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:    "1": [
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:        {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "devices": [
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "/dev/loop4"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            ],
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_name": "ceph_lv1",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_size": "21470642176",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "name": "ceph_lv1",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "tags": {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cluster_name": "ceph",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.crush_device_class": "",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.encrypted": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.objectstore": "bluestore",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osd_id": "1",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.type": "block",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.vdo": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.with_tpm": "0"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            },
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "type": "block",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "vg_name": "ceph_vg1"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:        }
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:    ],
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:    "2": [
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:        {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "devices": [
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "/dev/loop5"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            ],
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_name": "ceph_lv2",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_size": "21470642176",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "name": "ceph_lv2",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "tags": {
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.cluster_name": "ceph",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.crush_device_class": "",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.encrypted": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.objectstore": "bluestore",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osd_id": "2",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.type": "block",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.vdo": "0",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:                "ceph.with_tpm": "0"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            },
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "type": "block",
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:            "vg_name": "ceph_vg2"
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:        }
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]:    ]
Feb 28 04:50:33 np0005634017 affectionate_euler[239931]: }
Feb 28 04:50:33 np0005634017 systemd[1]: libpod-861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1.scope: Deactivated successfully.
Feb 28 04:50:34 np0005634017 python3.9[240038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272232.911508-1271-277707175779339/.source.json _original_basename=.r8cgmwn6 follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:34 np0005634017 podman[240041]: 2026-02-28 09:50:34.025838709 +0000 UTC m=+0.037894510 container died 861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:50:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4485d168e2f0b2c17d53383b27bee9de27d944272265b2e43237a04efc18c1ad-merged.mount: Deactivated successfully.
Feb 28 04:50:34 np0005634017 podman[240041]: 2026-02-28 09:50:34.079728685 +0000 UTC m=+0.091784486 container remove 861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 04:50:34 np0005634017 systemd[1]: libpod-conmon-861e54401aab84b2f4a9e8049908d983892764b94c15e2d1f9b34517602f7ea1.scope: Deactivated successfully.
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.569294388 +0000 UTC m=+0.044565727 container create 2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:50:34 np0005634017 systemd[1]: Started libpod-conmon-2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a.scope.
Feb 28 04:50:34 np0005634017 python3.9[240256]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.545574055 +0000 UTC m=+0.020845424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.656718181 +0000 UTC m=+0.131989580 container init 2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.666062492 +0000 UTC m=+0.141333821 container start 2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 04:50:34 np0005634017 heuristic_swirles[240286]: 167 167
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.670196768 +0000 UTC m=+0.145468187 container attach 2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:50:34 np0005634017 systemd[1]: libpod-2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a.scope: Deactivated successfully.
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.673333115 +0000 UTC m=+0.148604514 container died 2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:50:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d9aedaba56e53b82c54aab8fb3b83c60988c9e645a7bd82df958dfc846d60fc3-merged.mount: Deactivated successfully.
Feb 28 04:50:34 np0005634017 podman[240269]: 2026-02-28 09:50:34.719745793 +0000 UTC m=+0.195017182 container remove 2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:50:34 np0005634017 systemd[1]: libpod-conmon-2bfc41e295f53dc4fd5997a64804db1ec9f679d9cfac54d5a9d8daf690dd459a.scope: Deactivated successfully.
Feb 28 04:50:34 np0005634017 podman[240334]: 2026-02-28 09:50:34.895272879 +0000 UTC m=+0.067709474 container create c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 04:50:34 np0005634017 systemd[1]: Started libpod-conmon-c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741.scope.
Feb 28 04:50:34 np0005634017 podman[240334]: 2026-02-28 09:50:34.865707472 +0000 UTC m=+0.038144147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:50:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a29e9580332c5ac4f81a556ce4a6dd9a76e9ce8834c57a27d1785e0f5136ae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a29e9580332c5ac4f81a556ce4a6dd9a76e9ce8834c57a27d1785e0f5136ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a29e9580332c5ac4f81a556ce4a6dd9a76e9ce8834c57a27d1785e0f5136ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a29e9580332c5ac4f81a556ce4a6dd9a76e9ce8834c57a27d1785e0f5136ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:35 np0005634017 podman[240334]: 2026-02-28 09:50:35.015813588 +0000 UTC m=+0.188250233 container init c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:50:35 np0005634017 podman[240334]: 2026-02-28 09:50:35.025593501 +0000 UTC m=+0.198030106 container start c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:50:35 np0005634017 podman[240334]: 2026-02-28 09:50:35.030981172 +0000 UTC m=+0.203417777 container attach c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:50:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:35 np0005634017 lvm[240572]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:50:35 np0005634017 lvm[240569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:50:35 np0005634017 lvm[240572]: VG ceph_vg1 finished
Feb 28 04:50:35 np0005634017 lvm[240569]: VG ceph_vg0 finished
Feb 28 04:50:35 np0005634017 lvm[240580]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:50:35 np0005634017 lvm[240580]: VG ceph_vg2 finished
Feb 28 04:50:35 np0005634017 peaceful_lovelace[240394]: {}
Feb 28 04:50:35 np0005634017 systemd[1]: libpod-c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741.scope: Deactivated successfully.
Feb 28 04:50:35 np0005634017 systemd[1]: libpod-c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741.scope: Consumed 1.162s CPU time.
Feb 28 04:50:35 np0005634017 podman[240334]: 2026-02-28 09:50:35.831183118 +0000 UTC m=+1.003619723 container died c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:50:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3a29e9580332c5ac4f81a556ce4a6dd9a76e9ce8834c57a27d1785e0f5136ae9-merged.mount: Deactivated successfully.
Feb 28 04:50:35 np0005634017 podman[240334]: 2026-02-28 09:50:35.876895876 +0000 UTC m=+1.049332471 container remove c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 04:50:35 np0005634017 systemd[1]: libpod-conmon-c86e966c5cc8e03f4afb8972036812787b2c9802c3a1fec76a482ce74b1f6741.scope: Deactivated successfully.
Feb 28 04:50:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:50:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:50:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:50:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:50:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:50:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:50:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:36 np0005634017 python3.9[240869]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 28 04:50:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:37 np0005634017 python3.9[241022]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 28 04:50:38 np0005634017 python3[241175]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 28 04:50:39 np0005634017 podman[241213]: 2026-02-28 09:50:39.024168793 +0000 UTC m=+0.060535623 container create 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, config_id=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:50:39 np0005634017 podman[241213]: 2026-02-28 09:50:38.990138861 +0000 UTC m=+0.026505711 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 28 04:50:39 np0005634017 python3[241175]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 28 04:50:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:39 np0005634017 python3.9[241404]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4231060137621101e-06 of space, bias 4.0, pg target 0.0017077272165145322 quantized to 16 (current 16)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:50:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:50:40 np0005634017 python3.9[241559]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:41 np0005634017 python3.9[241636]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.642777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272241642857, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1857, "num_deletes": 251, "total_data_size": 3184884, "memory_usage": 3249960, "flush_reason": "Manual Compaction"}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272241654375, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1803272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11730, "largest_seqno": 13586, "table_properties": {"data_size": 1797220, "index_size": 3063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15128, "raw_average_key_size": 20, "raw_value_size": 1783885, "raw_average_value_size": 2375, "num_data_blocks": 143, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272033, "oldest_key_time": 1772272033, "file_creation_time": 1772272241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11643 microseconds, and 4178 cpu microseconds.
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.654441) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1803272 bytes OK
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.654473) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.656115) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.656134) EVENT_LOG_v1 {"time_micros": 1772272241656129, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.656160) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3177034, prev total WAL file size 3177034, number of live WAL files 2.
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.656811) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1761KB)], [29(7954KB)]
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272241656896, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9948654, "oldest_snapshot_seqno": -1}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4035 keys, 7911839 bytes, temperature: kUnknown
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272241715299, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7911839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7883068, "index_size": 17589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95850, "raw_average_key_size": 23, "raw_value_size": 7808589, "raw_average_value_size": 1935, "num_data_blocks": 766, "num_entries": 4035, "num_filter_entries": 4035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772272241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.715638) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7911839 bytes
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.717122) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.1 rd, 135.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.4) OK, records in: 4450, records dropped: 415 output_compression: NoCompression
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.717162) EVENT_LOG_v1 {"time_micros": 1772272241717145, "job": 12, "event": "compaction_finished", "compaction_time_micros": 58501, "compaction_time_cpu_micros": 24507, "output_level": 6, "num_output_files": 1, "total_output_size": 7911839, "num_input_records": 4450, "num_output_records": 4035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272241717560, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272241718650, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.656730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.718692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.718700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.718703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.718705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:50:41 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:50:41.718708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:50:41 np0005634017 python3.9[241788]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772272241.232081-1349-74238857407463/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:42 np0005634017 python3.9[241865]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 28 04:50:42 np0005634017 systemd[1]: Reloading.
Feb 28 04:50:42 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:50:42 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:50:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:43 np0005634017 python3.9[241984]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 28 04:50:43 np0005634017 systemd[1]: Reloading.
Feb 28 04:50:43 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 04:50:43 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 04:50:43 np0005634017 systemd[1]: Starting nova_compute container...
Feb 28 04:50:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:44 np0005634017 podman[242031]: 2026-02-28 09:50:44.034411279 +0000 UTC m=+0.132142084 container init 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 04:50:44 np0005634017 podman[242031]: 2026-02-28 09:50:44.039646746 +0000 UTC m=+0.137377551 container start 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, container_name=nova_compute, io.buildah.version=1.43.0)
Feb 28 04:50:44 np0005634017 podman[242031]: nova_compute
Feb 28 04:50:44 np0005634017 systemd[1]: Started nova_compute container.
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + sudo -E kolla_set_configs
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Validating config file
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying service configuration files
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Deleting /etc/ceph
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Creating directory /etc/ceph
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/ceph
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Writing out command to execute
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:44 np0005634017 nova_compute[242046]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 28 04:50:44 np0005634017 nova_compute[242046]: ++ cat /run_command
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + CMD=nova-compute
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + ARGS=
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + sudo kolla_copy_cacerts
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + [[ ! -n '' ]]
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + . kolla_extend_start
Feb 28 04:50:44 np0005634017 nova_compute[242046]: Running command: 'nova-compute'
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + echo 'Running command: '\''nova-compute'\'''
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + umask 0022
Feb 28 04:50:44 np0005634017 nova_compute[242046]: + exec nova-compute
Feb 28 04:50:44 np0005634017 python3.9[242207]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 28 04:50:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:45 np0005634017 python3.9[242361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 28 04:50:46 np0005634017 python3.9[242487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772272245.3790226-1394-11822718053449/.source.yaml _original_basename=.m7pt8bdj follow=False checksum=c1e1b14eebeb695c8d4d12323ba08f7ca12ed39d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 28 04:50:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:46 np0005634017 nova_compute[242046]: 2026-02-28 09:50:46.990 242050 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 28 04:50:46 np0005634017 nova_compute[242046]: 2026-02-28 09:50:46.991 242050 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 28 04:50:46 np0005634017 nova_compute[242046]: 2026-02-28 09:50:46.991 242050 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 28 04:50:46 np0005634017 nova_compute[242046]: 2026-02-28 09:50:46.991 242050 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 28 04:50:47 np0005634017 nova_compute[242046]: 2026-02-28 09:50:47.160 242050 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:50:47 np0005634017 nova_compute[242046]: 2026-02-28 09:50:47.171 242050 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:50:47 np0005634017 nova_compute[242046]: 2026-02-28 09:50:47.171 242050 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb 28 04:50:47 np0005634017 python3.9[242639]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:47 np0005634017 nova_compute[242046]: 2026-02-28 09:50:47.835 242050 INFO nova.virt.driver [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.037 242050 INFO nova.compute.provider_config [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 28 04:50:48 np0005634017 python3.9[242791]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.055 242050 DEBUG oslo_concurrency.lockutils [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.056 242050 DEBUG oslo_concurrency.lockutils [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.056 242050 DEBUG oslo_concurrency.lockutils [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.057 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.057 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.057 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.057 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.058 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.058 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.058 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.058 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.058 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.059 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.059 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.059 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.059 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.059 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.060 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.060 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.060 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.060 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.061 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.061 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.061 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.061 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.061 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.062 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.062 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.062 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.062 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.063 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.063 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.063 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.063 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.063 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.064 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.064 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.064 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.064 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.064 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.065 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.065 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.065 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.065 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.066 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.066 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.066 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.066 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.067 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.067 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.067 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.067 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.067 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.068 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.068 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.068 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.068 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.069 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.069 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.069 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.069 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.069 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.070 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.070 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.070 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.070 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.070 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.071 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.071 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.071 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.071 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.072 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.072 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.072 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.072 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.072 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.073 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.073 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.073 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.073 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.074 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.074 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.074 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.074 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.075 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.075 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.075 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.075 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.075 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.076 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.076 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.076 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.076 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.077 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.077 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.077 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.077 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.077 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.078 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.078 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.078 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.078 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.079 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.079 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.079 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.079 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.079 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.080 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.080 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.080 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.080 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.080 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.081 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.081 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.081 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.081 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.081 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.082 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.082 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.082 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.082 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.082 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.083 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.083 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.083 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.083 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.084 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.084 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.084 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.084 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.084 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.085 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.085 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.085 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.085 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.085 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.086 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.086 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.086 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.086 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.086 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.087 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.087 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.087 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.088 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.088 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.088 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.088 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.088 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.089 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.089 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.089 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.089 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.089 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.090 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.090 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.090 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.090 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.091 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.091 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.091 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.091 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.092 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.092 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.092 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.092 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.092 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.093 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.094 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.094 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.094 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.094 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.094 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.094 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.095 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.095 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.095 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.095 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.095 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.095 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.096 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.096 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.096 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.096 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.096 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.096 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.097 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.098 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.098 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.098 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.098 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.099 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.100 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.100 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.100 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.100 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.100 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.100 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.101 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.101 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.101 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.101 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.101 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.101 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.102 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.102 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.102 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.102 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.102 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.103 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.103 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.103 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.103 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.103 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.104 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.105 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.106 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.106 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.106 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.106 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.106 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.106 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.107 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.107 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.107 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.107 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.107 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.107 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.108 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.109 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.109 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.109 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.109 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.109 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.109 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.110 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.111 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.111 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.111 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.111 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.111 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.112 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.112 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.112 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.112 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.112 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.112 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.113 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.114 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.115 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.115 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.115 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.115 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.115 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.115 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.116 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.117 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.118 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.118 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.118 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.118 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.118 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.118 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.119 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.119 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.119 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.119 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.119 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.119 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.120 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.120 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.120 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.120 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.120 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.120 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.121 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.122 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.123 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.123 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.123 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.123 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.123 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.124 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.124 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.124 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.124 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.124 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.124 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.125 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.126 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.127 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.127 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.127 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.127 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.127 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.127 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.128 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.129 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.130 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.131 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.132 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.132 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.132 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.132 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.132 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.132 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.133 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.134 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.135 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.136 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.136 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.136 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.136 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.136 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.137 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.138 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.139 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.140 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.140 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.140 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.140 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.140 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.140 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.141 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.142 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.143 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.143 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.143 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.143 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.143 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.143 242050 WARNING oslo_config.cfg [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 28 04:50:48 np0005634017 nova_compute[242046]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 28 04:50:48 np0005634017 nova_compute[242046]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 28 04:50:48 np0005634017 nova_compute[242046]: and ``live_migration_inbound_addr`` respectively.
Feb 28 04:50:48 np0005634017 nova_compute[242046]: ).  Its value may be silently ignored in the future.#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.144 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.144 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.144 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.144 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.144 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.144 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.145 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.146 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.146 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.146 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.146 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.146 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.146 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rbd_secret_uuid        = 8f528268-ea2d-5d7b-af45-49b405fed6de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.147 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.148 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.148 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.148 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.148 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.148 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.148 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.149 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.150 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.151 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.151 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.151 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.151 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.151 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.151 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.152 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.153 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.154 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.155 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.156 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.156 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.156 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.156 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.156 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.156 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.157 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.158 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.159 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.160 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.161 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.162 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.162 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.162 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.162 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.162 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.162 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.163 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.164 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.164 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.164 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.164 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.164 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.165 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.166 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.166 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.166 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.166 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.166 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.166 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.167 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.168 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.169 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.170 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.171 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.172 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.172 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.172 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.172 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.172 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.172 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.173 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.174 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.175 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.176 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.177 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.178 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.179 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.180 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.180 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.180 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.180 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.180 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.180 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.181 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.182 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.183 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.183 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.183 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.183 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.183 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.183 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.184 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.185 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.186 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.187 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.188 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.188 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.188 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.188 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.188 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.188 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.189 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.190 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.190 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.190 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.190 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.190 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.190 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.191 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.192 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.193 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.193 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.193 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.193 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.193 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.193 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.194 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.195 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.196 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.197 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.198 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.199 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.200 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.200 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.200 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.200 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.200 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.200 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.201 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.201 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.201 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.201 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.201 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.201 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.202 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.203 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.204 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.205 242050 DEBUG oslo_service.service [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.207 242050 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.225 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.226 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.227 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.227 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 28 04:50:48 np0005634017 systemd[1]: Starting libvirt QEMU daemon...
Feb 28 04:50:48 np0005634017 systemd[1]: Started libvirt QEMU daemon.
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.298 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f48925a71c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.302 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f48925a71c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.303 242050 INFO nova.virt.libvirt.driver [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.328 242050 WARNING nova.virt.libvirt.driver [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb 28 04:50:48 np0005634017 nova_compute[242046]: 2026-02-28 09:50:48.328 242050 DEBUG nova.virt.libvirt.volume.mount [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 28 04:50:48 np0005634017 python3.9[242993]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.152 242050 INFO nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Libvirt host capabilities <capabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <host>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <uuid>df54d9c9-1824-4fc9-835d-7a3299a4aad4</uuid>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <arch>x86_64</arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model>EPYC-Rome-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <vendor>AMD</vendor>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <microcode version='16777317'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <signature family='23' model='49' stepping='0'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <maxphysaddr mode='emulate' bits='40'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='x2apic'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='tsc-deadline'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='osxsave'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='hypervisor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='tsc_adjust'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='spec-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='stibp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='arch-capabilities'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='cmp_legacy'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='topoext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='virt-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='lbrv'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='tsc-scale'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='vmcb-clean'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='pause-filter'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='pfthreshold'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='svme-addr-chk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='rdctl-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='skip-l1dfl-vmentry'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='mds-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature name='pschange-mc-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <pages unit='KiB' size='4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <pages unit='KiB' size='2048'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <pages unit='KiB' size='1048576'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <power_management>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <suspend_mem/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </power_management>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <iommu support='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <migration_features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <live/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <uri_transports>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <uri_transport>tcp</uri_transport>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <uri_transport>rdma</uri_transport>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </uri_transports>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </migration_features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <topology>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <cells num='1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <cell id='0'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          <memory unit='KiB'>7864280</memory>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          <pages unit='KiB' size='4'>1966070</pages>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          <pages unit='KiB' size='2048'>0</pages>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          <pages unit='KiB' size='1048576'>0</pages>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          <distances>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <sibling id='0' value='10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          </distances>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          <cpus num='8'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:          </cpus>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        </cell>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </cells>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </topology>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <cache>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </cache>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <secmodel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model>selinux</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <doi>0</doi>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </secmodel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <secmodel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model>dac</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <doi>0</doi>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </secmodel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </host>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <guest>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <os_type>hvm</os_type>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <arch name='i686'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <wordsize>32</wordsize>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <domain type='qemu'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <domain type='kvm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <pae/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <nonpae/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <acpi default='on' toggle='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <apic default='on' toggle='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <cpuselection/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <deviceboot/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <disksnapshot default='on' toggle='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <externalSnapshot/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </guest>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <guest>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <os_type>hvm</os_type>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <arch name='x86_64'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <wordsize>64</wordsize>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <domain type='qemu'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <domain type='kvm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <acpi default='on' toggle='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <apic default='on' toggle='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <cpuselection/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <deviceboot/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <disksnapshot default='on' toggle='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <externalSnapshot/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </guest>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 
Feb 28 04:50:49 np0005634017 nova_compute[242046]: </capabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.161 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.180 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 28 04:50:49 np0005634017 nova_compute[242046]: <domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <domain>kvm</domain>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <arch>i686</arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <vcpu max='4096'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <iothreads supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <os supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='firmware'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <loader supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>rom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pflash</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='readonly'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>yes</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='secure'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </loader>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </os>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='maximumMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <vendor>AMD</vendor>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='succor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='custom' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <memoryBacking supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='sourceType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>anonymous</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>memfd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </memoryBacking>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <disk supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='diskDevice'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>disk</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cdrom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>floppy</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>lun</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>fdc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>sata</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </disk>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <graphics supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vnc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egl-headless</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </graphics>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <video supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='modelType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vga</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cirrus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>none</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>bochs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ramfb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </video>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hostdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='mode'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>subsystem</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='startupPolicy'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>mandatory</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>requisite</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>optional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='subsysType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pci</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='capsType'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='pciBackend'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hostdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <rng supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>random</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </rng>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <filesystem supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='driverType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>path</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>handle</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtiofs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </filesystem>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tpm supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-tis</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-crb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emulator</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>external</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendVersion'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>2.0</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </tpm>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <redirdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </redirdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <channel supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </channel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <crypto supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </crypto>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <interface supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>passt</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </interface>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <panic supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>isa</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>hyperv</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </panic>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <console supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>null</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dev</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pipe</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stdio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>udp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tcp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu-vdagent</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </console>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <gic supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <genid supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backup supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <async-teardown supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <s390-pv supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <ps2 supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tdx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sev supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sgx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hyperv supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='features'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>relaxed</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vapic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>spinlocks</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vpindex</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>runtime</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>synic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stimer</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reset</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vendor_id</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>frequencies</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reenlightenment</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tlbflush</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ipi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>avic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emsr_bitmap</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>xmm_input</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hyperv>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <launchSecurity supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: </domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.187 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 28 04:50:49 np0005634017 nova_compute[242046]: <domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <domain>kvm</domain>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <arch>i686</arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <vcpu max='240'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <iothreads supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <os supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='firmware'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <loader supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>rom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pflash</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='readonly'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>yes</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='secure'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </loader>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </os>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='maximumMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <vendor>AMD</vendor>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='succor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='custom' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <memoryBacking supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='sourceType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>anonymous</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>memfd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </memoryBacking>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <disk supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='diskDevice'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>disk</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cdrom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>floppy</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>lun</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ide</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>fdc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>sata</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </disk>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <graphics supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vnc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egl-headless</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </graphics>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <video supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='modelType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vga</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cirrus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>none</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>bochs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ramfb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </video>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hostdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='mode'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>subsystem</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='startupPolicy'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>mandatory</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>requisite</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>optional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='subsysType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pci</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='capsType'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='pciBackend'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hostdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <rng supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>random</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </rng>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <filesystem supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='driverType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>path</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>handle</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtiofs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </filesystem>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tpm supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-tis</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-crb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emulator</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>external</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendVersion'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>2.0</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </tpm>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <redirdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </redirdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <channel supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </channel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <crypto supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </crypto>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <interface supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>passt</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </interface>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <panic supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>isa</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>hyperv</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </panic>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <console supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>null</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dev</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pipe</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stdio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>udp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tcp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu-vdagent</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </console>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <gic supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <genid supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backup supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <async-teardown supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <s390-pv supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <ps2 supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tdx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sev supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sgx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hyperv supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='features'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>relaxed</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vapic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>spinlocks</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vpindex</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>runtime</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>synic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stimer</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reset</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vendor_id</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>frequencies</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reenlightenment</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tlbflush</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ipi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>avic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emsr_bitmap</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>xmm_input</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hyperv>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <launchSecurity supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: </domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.271 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.277 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 28 04:50:49 np0005634017 nova_compute[242046]: <domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <domain>kvm</domain>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <arch>x86_64</arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <vcpu max='4096'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <iothreads supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <os supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='firmware'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>efi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <loader supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>rom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pflash</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='readonly'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>yes</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='secure'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>yes</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </loader>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </os>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='maximumMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <vendor>AMD</vendor>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='succor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='custom' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <memoryBacking supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='sourceType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>anonymous</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>memfd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </memoryBacking>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <disk supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='diskDevice'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>disk</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cdrom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>floppy</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>lun</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>fdc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>sata</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </disk>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <graphics supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vnc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egl-headless</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </graphics>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <video supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='modelType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vga</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cirrus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>none</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>bochs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ramfb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </video>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hostdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='mode'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>subsystem</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='startupPolicy'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>mandatory</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>requisite</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>optional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='subsysType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pci</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='capsType'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='pciBackend'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hostdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <rng supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>random</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </rng>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <filesystem supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='driverType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>path</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>handle</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtiofs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </filesystem>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tpm supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-tis</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-crb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emulator</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>external</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendVersion'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>2.0</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </tpm>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <redirdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </redirdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <channel supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </channel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <crypto supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </crypto>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <interface supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>passt</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </interface>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <panic supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>isa</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>hyperv</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </panic>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <console supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>null</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dev</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pipe</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stdio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>udp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tcp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu-vdagent</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </console>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <gic supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <genid supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backup supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <async-teardown supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <s390-pv supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <ps2 supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tdx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sev supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sgx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hyperv supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='features'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>relaxed</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vapic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>spinlocks</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vpindex</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>runtime</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>synic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stimer</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reset</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vendor_id</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>frequencies</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reenlightenment</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tlbflush</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ipi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>avic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emsr_bitmap</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>xmm_input</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hyperv>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <launchSecurity supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: </domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.354 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 28 04:50:49 np0005634017 nova_compute[242046]: <domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <domain>kvm</domain>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <arch>x86_64</arch>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <vcpu max='240'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <iothreads supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <os supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='firmware'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <loader supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>rom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pflash</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='readonly'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>yes</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='secure'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>no</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </loader>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </os>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='maximumMigratable'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>on</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>off</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <vendor>AMD</vendor>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='succor'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <mode name='custom' supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ddpd-u'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sha512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm3'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sm4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Denverton-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amd-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='auto-ibrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='perfmon-v2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbpb'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='stibp-always-on'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='EPYC-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-128'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-256'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx10-512'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='prefetchiti'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Haswell-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512er'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512pf'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fma4'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tbm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xop'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='amx-tile'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-bf16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-fp16'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bitalg'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrc'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fzrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='la57'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='taa-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ifma'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cmpccxadd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fbsdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='fsrs'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ibrs-all'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='intel-psfd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='lam'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mcdt-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pbrsb-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='psdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='serialize'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vaes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='hle'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='rtm'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512bw'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512cd'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512dq'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512f'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='avx512vl'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='invpcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pcid'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='pku'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='mpx'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='core-capability'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='split-lock-detect'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='cldemote'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='erms'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='gfni'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdir64b'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='movdiri'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='xsaves'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='athlon-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='core2duo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='coreduo-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='n270-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='ss'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <blockers model='phenom-v1'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnow'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <feature name='3dnowext'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </blockers>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </mode>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </cpu>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <memoryBacking supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <enum name='sourceType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>anonymous</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <value>memfd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </memoryBacking>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <disk supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='diskDevice'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>disk</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cdrom</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>floppy</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>lun</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ide</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>fdc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>sata</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </disk>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <graphics supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vnc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egl-headless</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </graphics>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <video supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='modelType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vga</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>cirrus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>none</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>bochs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ramfb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </video>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hostdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='mode'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>subsystem</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='startupPolicy'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>mandatory</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>requisite</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>optional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='subsysType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pci</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>scsi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='capsType'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='pciBackend'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hostdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <rng supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtio-non-transitional</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>random</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>egd</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </rng>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <filesystem supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='driverType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>path</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>handle</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>virtiofs</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </filesystem>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tpm supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-tis</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tpm-crb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emulator</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>external</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendVersion'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>2.0</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </tpm>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <redirdev supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='bus'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>usb</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </redirdev>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <channel supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </channel>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <crypto supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendModel'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>builtin</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </crypto>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <interface supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='backendType'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>default</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>passt</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </interface>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <panic supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='model'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>isa</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>hyperv</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </panic>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <console supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='type'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>null</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vc</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pty</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dev</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>file</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>pipe</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stdio</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>udp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tcp</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>unix</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>qemu-vdagent</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>dbus</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </console>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </devices>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  <features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <gic supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <genid supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <backup supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <async-teardown supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <s390-pv supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <ps2 supported='yes'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <tdx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sev supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <sgx supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <hyperv supported='yes'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <enum name='features'>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>relaxed</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vapic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>spinlocks</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vpindex</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>runtime</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>synic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>stimer</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reset</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>vendor_id</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>frequencies</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>reenlightenment</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>tlbflush</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>ipi</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>avic</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>emsr_bitmap</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <value>xmm_input</value>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </enum>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      <defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:      </defaults>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    </hyperv>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:    <launchSecurity supported='no'/>
Feb 28 04:50:49 np0005634017 nova_compute[242046]:  </features>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: </domainCapabilities>
Feb 28 04:50:49 np0005634017 nova_compute[242046]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.418 242050 DEBUG nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.419 242050 INFO nova.virt.libvirt.host [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Secure Boot support detected#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.421 242050 INFO nova.virt.libvirt.driver [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.430 242050 DEBUG nova.virt.libvirt.driver [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.470 242050 INFO nova.virt.node [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Determined node identity 1ab5ec33-a817-4b85-91a2-974557eeabfb from /var/lib/nova/compute_id#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.490 242050 WARNING nova.compute.manager [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Compute nodes ['1ab5ec33-a817-4b85-91a2-974557eeabfb'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.525 242050 INFO nova.compute.manager [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.563 242050 WARNING nova.compute.manager [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.564 242050 DEBUG oslo_concurrency.lockutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.564 242050 DEBUG oslo_concurrency.lockutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.564 242050 DEBUG oslo_concurrency.lockutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.564 242050 DEBUG nova.compute.resource_tracker [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 04:50:49 np0005634017 nova_compute[242046]: 2026-02-28 09:50:49.564 242050 DEBUG oslo_concurrency.processutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:50:49 np0005634017 python3.9[243159]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 28 04:50:50 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 04:50:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:50:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506854552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.068 242050 DEBUG oslo_concurrency.processutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:50:50 np0005634017 systemd[1]: Starting libvirt nodedev daemon...
Feb 28 04:50:50 np0005634017 systemd[1]: Started libvirt nodedev daemon.
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.296 242050 WARNING nova.virt.libvirt.driver [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.297 242050 DEBUG nova.compute.resource_tracker [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5065MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.298 242050 DEBUG oslo_concurrency.lockutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.298 242050 DEBUG oslo_concurrency.lockutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.315 242050 WARNING nova.compute.resource_tracker [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] No compute node record for compute-0.ctlplane.example.com:1ab5ec33-a817-4b85-91a2-974557eeabfb: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1ab5ec33-a817-4b85-91a2-974557eeabfb could not be found.#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.336 242050 INFO nova.compute.resource_tracker [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1ab5ec33-a817-4b85-91a2-974557eeabfb#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.424 242050 DEBUG nova.compute.resource_tracker [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.424 242050 DEBUG nova.compute.resource_tracker [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 04:50:50 np0005634017 python3.9[243379]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 28 04:50:50 np0005634017 systemd[1]: Stopping nova_compute container...
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.784 242050 DEBUG oslo_concurrency.lockutils [None req-4c304e3b-93f6-44f0-aa4e-ca6db89aeb47 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.785 242050 DEBUG oslo_concurrency.lockutils [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.785 242050 DEBUG oslo_concurrency.lockutils [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:50:50 np0005634017 nova_compute[242046]: 2026-02-28 09:50:50.785 242050 DEBUG oslo_concurrency.lockutils [None req-00c9f28c-f11e-425f-9b7f-7f01b2a5ef69 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:50:51 np0005634017 virtqemud[242837]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 28 04:50:51 np0005634017 virtqemud[242837]: hostname: compute-0
Feb 28 04:50:51 np0005634017 virtqemud[242837]: End of file while reading data: Input/output error
Feb 28 04:50:51 np0005634017 systemd[1]: libpod-4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260.scope: Deactivated successfully.
Feb 28 04:50:51 np0005634017 systemd[1]: libpod-4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260.scope: Consumed 3.874s CPU time.
Feb 28 04:50:51 np0005634017 podman[243383]: 2026-02-28 09:50:51.309939423 +0000 UTC m=+0.584633620 container died 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:50:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260-userdata-shm.mount: Deactivated successfully.
Feb 28 04:50:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c-merged.mount: Deactivated successfully.
Feb 28 04:50:53 np0005634017 podman[243383]: 2026-02-28 09:50:53.191639597 +0000 UTC m=+2.466333764 container cleanup 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 04:50:53 np0005634017 podman[243383]: nova_compute
Feb 28 04:50:53 np0005634017 podman[243423]: nova_compute
Feb 28 04:50:53 np0005634017 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 28 04:50:53 np0005634017 systemd[1]: Stopped nova_compute container.
Feb 28 04:50:53 np0005634017 systemd[1]: Starting nova_compute container...
Feb 28 04:50:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4c0c5b790d9b6d2481904d9f521fdff3edd25d3103b9ea3f50542eaeaf1f6c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:53 np0005634017 podman[243436]: 2026-02-28 09:50:53.420006218 +0000 UTC m=+0.116280132 container init 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, config_id=nova_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 04:50:53 np0005634017 podman[243436]: 2026-02-28 09:50:53.428529205 +0000 UTC m=+0.124803119 container start 4ded7306dd36ad79370acb2fef14211e93074c932d53c87a6cc935e126b0e260 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-1c3178970be275583b5f85c32a5360a083e2e8e737ac61e0f2cba4ff1f6008ec-f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, org.label-schema.build-date=20260223)
Feb 28 04:50:53 np0005634017 podman[243436]: nova_compute
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + sudo -E kolla_set_configs
Feb 28 04:50:53 np0005634017 systemd[1]: Started nova_compute container.
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Validating config file
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying service configuration files
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /etc/ceph
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Creating directory /etc/ceph
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Writing out command to execute
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:53 np0005634017 nova_compute[243452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 28 04:50:53 np0005634017 nova_compute[243452]: ++ cat /run_command
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + CMD=nova-compute
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + ARGS=
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + sudo kolla_copy_cacerts
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + [[ ! -n '' ]]
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + . kolla_extend_start
Feb 28 04:50:53 np0005634017 nova_compute[243452]: Running command: 'nova-compute'
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + echo 'Running command: '\''nova-compute'\'''
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + umask 0022
Feb 28 04:50:53 np0005634017 nova_compute[243452]: + exec nova-compute
Feb 28 04:50:54 np0005634017 python3.9[243616]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 28 04:50:55 np0005634017 systemd[1]: Started libpod-conmon-aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e.scope.
Feb 28 04:50:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:50:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a47bcee4f67e6d6dbfd3f8229458e6ebaddcd714719c1e4bcdee178f83cfb/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a47bcee4f67e6d6dbfd3f8229458e6ebaddcd714719c1e4bcdee178f83cfb/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a47bcee4f67e6d6dbfd3f8229458e6ebaddcd714719c1e4bcdee178f83cfb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 28 04:50:55 np0005634017 podman[243641]: 2026-02-28 09:50:55.130381366 +0000 UTC m=+0.166130858 container init aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute_init, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 04:50:55 np0005634017 podman[243641]: 2026-02-28 09:50:55.139927792 +0000 UTC m=+0.175677234 container start aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=nova_compute_init, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:50:55 np0005634017 python3.9[243616]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Applying nova statedir ownership
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 28 04:50:55 np0005634017 nova_compute_init[243663]: INFO:nova_statedir:Nova statedir ownership complete
Feb 28 04:50:55 np0005634017 systemd[1]: libpod-aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e.scope: Deactivated successfully.
Feb 28 04:50:55 np0005634017 podman[243670]: 2026-02-28 09:50:55.227612164 +0000 UTC m=+0.026286143 container died aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=nova_compute_init, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 28 04:50:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e-userdata-shm.mount: Deactivated successfully.
Feb 28 04:50:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-704a47bcee4f67e6d6dbfd3f8229458e6ebaddcd714719c1e4bcdee178f83cfb-merged.mount: Deactivated successfully.
Feb 28 04:50:55 np0005634017 podman[243670]: 2026-02-28 09:50:55.380026623 +0000 UTC m=+0.178700592 container cleanup aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'f957d3fcb0d42f199799b901fad0259f9aa6c30fc287a89d12bbbfb92ec28d3e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=nova_compute_init, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init)
Feb 28 04:50:55 np0005634017 systemd[1]: libpod-conmon-aebaf2ae6514d5224078749da64ecaa6698340a88a55ba84f73ce9fa7178457e.scope: Deactivated successfully.
Feb 28 04:50:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:56 np0005634017 systemd-logind[815]: Session 49 logged out. Waiting for processes to exit.
Feb 28 04:50:56 np0005634017 systemd[1]: session-49.scope: Deactivated successfully.
Feb 28 04:50:56 np0005634017 systemd[1]: session-49.scope: Consumed 2min 1.677s CPU time.
Feb 28 04:50:56 np0005634017 systemd-logind[815]: Removed session 49.
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.146 243456 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.147 243456 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.147 243456 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.147 243456 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.285 243456 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.307 243456 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.307 243456 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb 28 04:50:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.863 243456 INFO nova.virt.driver [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 28 04:50:56 np0005634017 nova_compute[243452]: 2026-02-28 09:50:56.985 243456 INFO nova.compute.provider_config [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.000 243456 DEBUG oslo_concurrency.lockutils [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.000 243456 DEBUG oslo_concurrency.lockutils [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.001 243456 DEBUG oslo_concurrency.lockutils [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.001 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.002 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.002 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.002 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.002 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.002 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.003 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.003 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.003 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.003 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.003 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.003 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.004 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.004 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.004 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.004 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.004 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.005 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.005 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.005 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.005 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.005 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.006 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.006 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.006 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.006 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.006 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.007 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.007 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.007 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.007 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.008 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.008 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.008 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.008 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.008 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.009 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.009 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.009 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.009 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.010 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.010 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.010 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.011 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.011 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.011 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.011 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.012 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.012 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.012 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.012 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.013 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.013 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.013 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.013 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.014 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.014 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.014 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.014 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.015 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.015 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.015 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.015 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.015 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.016 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.016 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.016 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.016 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.017 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.017 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.017 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.017 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.018 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.018 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.018 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.019 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.019 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.019 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.019 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.020 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.020 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.020 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.021 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.021 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.021 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.022 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.022 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.022 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.022 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.022 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.023 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.023 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.023 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.023 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.023 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.024 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.024 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.024 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.024 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.024 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.025 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.025 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.025 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.025 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.026 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.027 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.028 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.029 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.029 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.029 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.029 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.029 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.029 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.030 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.031 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.032 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.032 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.032 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.032 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.032 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.032 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.033 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.033 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.033 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.033 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.033 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.033 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.034 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.034 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.034 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.034 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.034 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.034 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.035 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.036 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.036 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.036 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.036 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.036 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.036 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.037 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.037 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.037 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.037 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.037 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.037 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.038 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.038 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.038 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.038 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.038 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.039 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.040 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.041 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.042 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.043 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.044 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.045 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.045 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.045 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.045 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.045 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.046 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.047 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.048 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.049 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.050 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.051 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.052 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.052 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.052 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.052 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.052 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.052 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.053 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.054 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.055 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.056 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.057 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.057 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.057 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.057 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.057 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.057 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.058 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.059 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.059 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.059 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.059 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.059 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.059 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.060 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.061 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.062 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.063 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.063 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.063 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.063 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.064 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.064 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.064 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.064 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.064 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.065 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.066 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.067 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.068 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.069 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.069 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.069 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.069 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.069 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.069 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.070 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.070 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.070 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.070 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.070 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.070 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.071 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.072 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.072 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.072 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.072 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.073 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.073 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.073 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.073 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.073 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.074 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.074 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.074 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.074 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.075 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.075 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.075 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.075 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.075 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.076 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.076 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.076 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.076 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.076 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.077 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.077 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.077 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.077 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.077 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.078 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.078 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.078 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.078 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.078 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.079 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.079 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.079 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.079 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.079 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.080 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.080 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.080 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.080 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.080 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.081 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.081 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.081 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.081 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.081 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.081 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.082 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.082 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.082 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.082 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.082 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.082 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.083 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.083 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.083 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.083 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.083 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.083 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.084 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.085 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.086 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.087 243456 WARNING oslo_config.cfg [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 28 04:50:57 np0005634017 nova_compute[243452]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 28 04:50:57 np0005634017 nova_compute[243452]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 28 04:50:57 np0005634017 nova_compute[243452]: and ``live_migration_inbound_addr`` respectively.
Feb 28 04:50:57 np0005634017 nova_compute[243452]: ).  Its value may be silently ignored in the future.#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.087 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.087 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.088 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.088 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.088 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.088 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.088 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.089 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.089 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.089 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.089 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.089 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.090 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.090 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.090 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.090 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.091 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.091 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.091 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rbd_secret_uuid        = 8f528268-ea2d-5d7b-af45-49b405fed6de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.091 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.091 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.092 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.092 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.092 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.092 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.092 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.093 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.093 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.093 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.093 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.094 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.094 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.094 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.094 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.095 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.095 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.095 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.095 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.095 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.096 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.096 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.096 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.096 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.097 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.097 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.097 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.097 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.097 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.098 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.098 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.098 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.098 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.099 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.099 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.099 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.099 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.099 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.100 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.101 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.101 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.101 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.101 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.101 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.101 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.102 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.102 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.102 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.102 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.102 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.102 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.103 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.103 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.103 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.103 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.103 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.103 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.104 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.104 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.104 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.104 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.104 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.104 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.105 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.105 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.105 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.105 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.105 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.105 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.106 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.106 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.106 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.106 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.106 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.107 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.108 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.108 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.108 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.108 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.108 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.109 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.109 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.109 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.109 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.109 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.109 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.110 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.110 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.110 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.110 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.110 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.110 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.111 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.111 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.111 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.111 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.111 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.111 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.112 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.112 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.112 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.112 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.112 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.112 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.113 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.113 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.113 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.113 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.114 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.114 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.114 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.114 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.114 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.114 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.115 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.115 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.115 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.115 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.115 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.115 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.116 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.116 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.116 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.116 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.117 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.117 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.117 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.117 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.117 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.117 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.118 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.118 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.118 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.118 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.118 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.119 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.120 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.120 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.120 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.120 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.120 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.121 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.121 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.121 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.121 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.121 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.122 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.122 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.122 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.122 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.122 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.123 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.123 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.123 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.123 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.123 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.123 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.124 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.124 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.124 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.124 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.125 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.125 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.125 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.125 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.125 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.126 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.126 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.126 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.126 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.126 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.127 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.127 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.127 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.127 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.127 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.128 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.128 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.128 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.128 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.128 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.129 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.129 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.129 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.129 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.129 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.130 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.130 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.130 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.130 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.130 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.131 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.131 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.131 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.131 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.131 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.132 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.132 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.132 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.132 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.133 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.133 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.133 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.133 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.133 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.134 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.134 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.134 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.134 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.134 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.135 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.135 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.135 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.136 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.136 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.136 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.136 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.136 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.137 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.137 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.137 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.137 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.137 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.138 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.138 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.138 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.138 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.139 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.139 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.139 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.139 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.139 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.140 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.140 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.140 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.140 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.140 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.141 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.141 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.141 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.141 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.142 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.142 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.142 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.142 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.142 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.143 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.143 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.143 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.143 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.143 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.144 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.144 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.144 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.144 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.144 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.145 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.145 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.145 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.145 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.145 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.146 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.146 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.146 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.146 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.146 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.146 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.147 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.147 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.147 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.147 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.147 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.148 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.149 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.149 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.149 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.149 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.149 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.150 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.150 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.150 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.150 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.150 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.150 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.151 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.151 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.151 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.151 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.151 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.151 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.152 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.152 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.152 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.152 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.153 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.153 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.153 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.153 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.153 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.153 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.154 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.155 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.156 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.157 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.158 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.159 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.160 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.161 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.162 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.163 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.164 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.165 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.165 243456 DEBUG oslo_service.service [None req-92648c7f-40cd-4b7f-b53e-d1219f33ed7e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.166 243456 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.184 243456 INFO nova.virt.node [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Determined node identity 1ab5ec33-a817-4b85-91a2-974557eeabfb from /var/lib/nova/compute_id#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.185 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.186 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.186 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.186 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.199 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb2ff54f1c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.202 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb2ff54f1c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.203 243456 INFO nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.207 243456 INFO nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Libvirt host capabilities <capabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <host>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <uuid>df54d9c9-1824-4fc9-835d-7a3299a4aad4</uuid>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <arch>x86_64</arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model>EPYC-Rome-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <vendor>AMD</vendor>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <microcode version='16777317'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <signature family='23' model='49' stepping='0'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <maxphysaddr mode='emulate' bits='40'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='x2apic'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='tsc-deadline'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='osxsave'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='hypervisor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='tsc_adjust'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='spec-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='stibp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='arch-capabilities'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='cmp_legacy'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='topoext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='virt-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='lbrv'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='tsc-scale'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='vmcb-clean'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='pause-filter'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='pfthreshold'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='svme-addr-chk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='rdctl-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='skip-l1dfl-vmentry'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='mds-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature name='pschange-mc-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <pages unit='KiB' size='4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <pages unit='KiB' size='2048'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <pages unit='KiB' size='1048576'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <power_management>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <suspend_mem/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </power_management>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <iommu support='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <migration_features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <live/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <uri_transports>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <uri_transport>tcp</uri_transport>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <uri_transport>rdma</uri_transport>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </uri_transports>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </migration_features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <topology>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <cells num='1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <cell id='0'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          <memory unit='KiB'>7864280</memory>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          <pages unit='KiB' size='4'>1966070</pages>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          <pages unit='KiB' size='2048'>0</pages>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          <pages unit='KiB' size='1048576'>0</pages>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          <distances>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <sibling id='0' value='10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          </distances>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          <cpus num='8'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:          </cpus>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        </cell>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </cells>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </topology>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <cache>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </cache>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <secmodel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model>selinux</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <doi>0</doi>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </secmodel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <secmodel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model>dac</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <doi>0</doi>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </secmodel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </host>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <guest>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <os_type>hvm</os_type>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <arch name='i686'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <wordsize>32</wordsize>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <domain type='qemu'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <domain type='kvm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <pae/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <nonpae/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <acpi default='on' toggle='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <apic default='on' toggle='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <cpuselection/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <deviceboot/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <disksnapshot default='on' toggle='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <externalSnapshot/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </guest>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <guest>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <os_type>hvm</os_type>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <arch name='x86_64'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <wordsize>64</wordsize>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <domain type='qemu'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <domain type='kvm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <acpi default='on' toggle='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <apic default='on' toggle='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <cpuselection/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <deviceboot/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <disksnapshot default='on' toggle='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <externalSnapshot/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </guest>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 
Feb 28 04:50:57 np0005634017 nova_compute[243452]: </capabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: #033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.215 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.217 243456 DEBUG nova.virt.libvirt.volume.mount [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.219 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 28 04:50:57 np0005634017 nova_compute[243452]: <domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <domain>kvm</domain>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <arch>i686</arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <vcpu max='4096'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <iothreads supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <os supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='firmware'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <loader supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>rom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pflash</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='readonly'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>yes</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='secure'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </loader>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='maximumMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <vendor>AMD</vendor>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='succor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='custom' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <memoryBacking supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='sourceType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>anonymous</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>memfd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </memoryBacking>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <disk supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='diskDevice'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>disk</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cdrom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>floppy</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>lun</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>fdc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>sata</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <graphics supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vnc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egl-headless</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <video supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='modelType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vga</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cirrus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>none</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>bochs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ramfb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hostdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='mode'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>subsystem</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='startupPolicy'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>mandatory</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>requisite</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>optional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='subsysType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pci</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='capsType'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='pciBackend'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hostdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <rng supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>random</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <filesystem supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='driverType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>path</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>handle</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtiofs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </filesystem>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tpm supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-tis</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-crb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emulator</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>external</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendVersion'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>2.0</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </tpm>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <redirdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </redirdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <channel supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </channel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <crypto supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </crypto>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <interface supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>passt</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </interface>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <panic supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>isa</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>hyperv</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </panic>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <console supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>null</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dev</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pipe</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stdio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>udp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tcp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu-vdagent</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </console>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <gic supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <genid supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backup supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <async-teardown supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <s390-pv supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <ps2 supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tdx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sev supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sgx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hyperv supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='features'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>relaxed</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vapic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>spinlocks</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vpindex</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>runtime</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>synic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stimer</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reset</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vendor_id</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>frequencies</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reenlightenment</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tlbflush</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ipi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>avic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emsr_bitmap</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>xmm_input</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hyperv>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <launchSecurity supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: </domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.226 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 28 04:50:57 np0005634017 nova_compute[243452]: <domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <domain>kvm</domain>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <arch>i686</arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <vcpu max='240'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <iothreads supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <os supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='firmware'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <loader supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>rom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pflash</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='readonly'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>yes</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='secure'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </loader>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='maximumMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <vendor>AMD</vendor>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='succor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='custom' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <memoryBacking supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='sourceType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>anonymous</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>memfd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </memoryBacking>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <disk supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='diskDevice'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>disk</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cdrom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>floppy</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>lun</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ide</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>fdc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>sata</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <graphics supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vnc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egl-headless</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <video supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='modelType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vga</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cirrus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>none</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>bochs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ramfb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hostdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='mode'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>subsystem</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='startupPolicy'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>mandatory</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>requisite</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>optional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='subsysType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pci</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='capsType'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='pciBackend'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hostdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <rng supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>random</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <filesystem supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='driverType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>path</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>handle</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtiofs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </filesystem>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tpm supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-tis</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-crb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emulator</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>external</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendVersion'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>2.0</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </tpm>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <redirdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </redirdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <channel supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </channel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <crypto supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </crypto>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <interface supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>passt</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </interface>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <panic supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>isa</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>hyperv</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </panic>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <console supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>null</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dev</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pipe</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stdio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>udp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tcp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu-vdagent</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </console>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <gic supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <genid supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backup supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <async-teardown supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <s390-pv supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <ps2 supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tdx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sev supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sgx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hyperv supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='features'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>relaxed</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vapic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>spinlocks</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vpindex</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>runtime</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>synic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stimer</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reset</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vendor_id</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>frequencies</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reenlightenment</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tlbflush</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ipi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>avic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emsr_bitmap</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>xmm_input</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hyperv>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <launchSecurity supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: </domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.278 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.283 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 28 04:50:57 np0005634017 nova_compute[243452]: <domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <domain>kvm</domain>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <arch>x86_64</arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <vcpu max='4096'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <iothreads supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <os supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='firmware'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>efi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <loader supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>rom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pflash</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='readonly'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>yes</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='secure'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>yes</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </loader>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='maximumMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <vendor>AMD</vendor>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='succor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='custom' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <memoryBacking supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='sourceType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>anonymous</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>memfd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </memoryBacking>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <disk supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='diskDevice'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>disk</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cdrom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>floppy</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>lun</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>fdc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>sata</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <graphics supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vnc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egl-headless</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <video supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='modelType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vga</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cirrus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>none</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>bochs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ramfb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hostdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='mode'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>subsystem</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='startupPolicy'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>mandatory</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>requisite</value>
Feb 28 04:50:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>optional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='subsysType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pci</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='capsType'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='pciBackend'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hostdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <rng supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>random</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <filesystem supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='driverType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>path</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>handle</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtiofs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </filesystem>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tpm supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-tis</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-crb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emulator</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>external</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendVersion'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>2.0</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </tpm>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <redirdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </redirdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <channel supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </channel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <crypto supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </crypto>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <interface supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>passt</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </interface>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <panic supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>isa</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>hyperv</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </panic>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <console supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>null</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dev</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pipe</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stdio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>udp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tcp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu-vdagent</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </console>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <gic supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <genid supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backup supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <async-teardown supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <s390-pv supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <ps2 supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tdx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sev supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sgx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hyperv supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='features'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>relaxed</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vapic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>spinlocks</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vpindex</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>runtime</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>synic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stimer</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reset</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vendor_id</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>frequencies</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reenlightenment</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tlbflush</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ipi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>avic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emsr_bitmap</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>xmm_input</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hyperv>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <launchSecurity supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: </domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.363 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 28 04:50:57 np0005634017 nova_compute[243452]: <domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <path>/usr/libexec/qemu-kvm</path>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <domain>kvm</domain>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <arch>x86_64</arch>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <vcpu max='240'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <iothreads supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <os supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='firmware'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <loader supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>rom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pflash</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='readonly'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>yes</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='secure'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>no</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </loader>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-passthrough' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='hostPassthroughMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='maximum' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='maximumMigratable'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>on</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>off</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='host-model' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <vendor>AMD</vendor>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='x2apic'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-deadline'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='hypervisor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc_adjust'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='spec-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='stibp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='cmp_legacy'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='overflow-recov'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='succor'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='amd-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='virt-ssbd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lbrv'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='tsc-scale'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='vmcb-clean'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='flushbyasid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pause-filter'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='pfthreshold'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='svme-addr-chk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <feature policy='disable' name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <mode name='custom' supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Broadwell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cascadelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='ClearwaterForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ddpd-u'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sha512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm3'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sm4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Cooperlake-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Denverton-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Dhyana-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Genoa-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Milan-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Rome-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-Turin-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amd-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='auto-ibrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vp2intersect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fs-gs-base-ns'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibpb-brtype'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='no-nested-data-bp'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='null-sel-clr-base'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='perfmon-v2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbpb'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='srso-user-kernel-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='stibp-always-on'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='EPYC-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='GraniteRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-128'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-256'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx10-512'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='prefetchiti'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Haswell-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-noTSX'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v6'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Icelake-Server-v7'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='IvyBridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='KnightsMill-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4fmaps'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-4vnniw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512er'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512pf'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G4-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Opteron_G5-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fma4'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tbm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xop'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SapphireRapids-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='amx-tile'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-bf16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-fp16'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512-vpopcntdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bitalg'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vbmi2'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrc'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fzrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='la57'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='taa-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='tsx-ldtrk'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='SierraForest-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ifma'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-ne-convert'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx-vnni-int8'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bhi-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='bus-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cmpccxadd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fbsdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='fsrs'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ibrs-all'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='intel-psfd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ipred-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='lam'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mcdt-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pbrsb-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='psdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rrsba-ctrl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='sbdr-ssdp-no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='serialize'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vaes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='vpclmulqdq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Client-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='hle'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='rtm'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Skylake-Server-v5'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512bw'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512cd'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512dq'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512f'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='avx512vl'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='invpcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pcid'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='pku'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='mpx'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v2'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v3'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='core-capability'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='split-lock-detect'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='Snowridge-v4'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='cldemote'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='erms'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='gfni'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdir64b'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='movdiri'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='xsaves'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='athlon-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='core2duo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='coreduo-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='n270-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='ss'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <blockers model='phenom-v1'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnow'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <feature name='3dnowext'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </blockers>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </mode>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <memoryBacking supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <enum name='sourceType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>anonymous</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <value>memfd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </memoryBacking>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <disk supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='diskDevice'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>disk</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cdrom</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>floppy</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>lun</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ide</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>fdc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>sata</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <graphics supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vnc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egl-headless</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <video supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='modelType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vga</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>cirrus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>none</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>bochs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ramfb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hostdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='mode'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>subsystem</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='startupPolicy'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>mandatory</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>requisite</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>optional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='subsysType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pci</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>scsi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='capsType'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='pciBackend'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hostdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <rng supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtio-non-transitional</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>random</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>egd</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <filesystem supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='driverType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>path</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>handle</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>virtiofs</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </filesystem>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tpm supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-tis</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tpm-crb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emulator</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>external</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendVersion'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>2.0</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </tpm>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <redirdev supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='bus'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>usb</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </redirdev>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <channel supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </channel>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <crypto supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendModel'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>builtin</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </crypto>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <interface supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='backendType'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>default</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>passt</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </interface>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <panic supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='model'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>isa</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>hyperv</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </panic>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <console supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='type'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>null</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vc</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pty</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dev</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>file</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>pipe</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stdio</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>udp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tcp</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>unix</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>qemu-vdagent</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>dbus</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </console>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <gic supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <vmcoreinfo supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <genid supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backingStoreInput supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <backup supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <async-teardown supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <s390-pv supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <ps2 supported='yes'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <tdx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sev supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <sgx supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <hyperv supported='yes'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <enum name='features'>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>relaxed</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vapic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>spinlocks</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vpindex</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>runtime</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>synic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>stimer</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reset</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>vendor_id</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>frequencies</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>reenlightenment</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>tlbflush</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>ipi</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>avic</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>emsr_bitmap</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <value>xmm_input</value>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </enum>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      <defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <spinlocks>4095</spinlocks>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <stimer_direct>on</stimer_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_direct>on</tlbflush_direct>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <tlbflush_extended>on</tlbflush_extended>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:      </defaults>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    </hyperv>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:    <launchSecurity supported='no'/>
Feb 28 04:50:57 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: </domainCapabilities>
Feb 28 04:50:57 np0005634017 nova_compute[243452]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.431 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.432 243456 INFO nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Secure Boot support detected#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.434 243456 INFO nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.434 243456 INFO nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.445 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.469 243456 INFO nova.virt.node [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Determined node identity 1ab5ec33-a817-4b85-91a2-974557eeabfb from /var/lib/nova/compute_id#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.497 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Verified node 1ab5ec33-a817-4b85-91a2-974557eeabfb matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.541 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Feb 28 04:50:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:50:57.829 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:50:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:50:57.830 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:50:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:50:57.830 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.938 243456 ERROR nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Could not retrieve compute node resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '1ab5ec33-a817-4b85-91a2-974557eeabfb' not found: No resource provider with uuid 1ab5ec33-a817-4b85-91a2-974557eeabfb found  ", "request_id": "req-0a09b30b-eaba-4efc-90c5-01312b31bdac"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '1ab5ec33-a817-4b85-91a2-974557eeabfb' not found: No resource provider with uuid 1ab5ec33-a817-4b85-91a2-974557eeabfb found  ", "request_id": "req-0a09b30b-eaba-4efc-90c5-01312b31bdac"}]}#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.963 243456 DEBUG oslo_concurrency.lockutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.963 243456 DEBUG oslo_concurrency.lockutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.964 243456 DEBUG oslo_concurrency.lockutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.964 243456 DEBUG nova.compute.resource_tracker [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 04:50:57 np0005634017 nova_compute[243452]: 2026-02-28 09:50:57.964 243456 DEBUG oslo_concurrency.processutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:50:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:50:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419094949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.528 243456 DEBUG oslo_concurrency.processutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.697 243456 WARNING nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.698 243456 DEBUG nova.compute.resource_tracker [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5083MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.699 243456 DEBUG oslo_concurrency.lockutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.699 243456 DEBUG oslo_concurrency.lockutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.858 243456 ERROR nova.compute.resource_tracker [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '1ab5ec33-a817-4b85-91a2-974557eeabfb' not found: No resource provider with uuid 1ab5ec33-a817-4b85-91a2-974557eeabfb found  ", "request_id": "req-8b1d2caa-f9ad-48ff-980b-ed4532612d7f"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '1ab5ec33-a817-4b85-91a2-974557eeabfb' not found: No resource provider with uuid 1ab5ec33-a817-4b85-91a2-974557eeabfb found  ", "request_id": "req-8b1d2caa-f9ad-48ff-980b-ed4532612d7f"}]}#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.859 243456 DEBUG nova.compute.resource_tracker [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 04:50:58 np0005634017 nova_compute[243452]: 2026-02-28 09:50:58.859 243456 DEBUG nova.compute.resource_tracker [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.301 243456 INFO nova.scheduler.client.report [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [req-5d4d554a-671c-4ec4-80b0-bb9db2b5afa5] Created resource provider record via placement API for resource provider with UUID 1ab5ec33-a817-4b85-91a2-974557eeabfb and name compute-0.ctlplane.example.com.#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.325 243456 DEBUG oslo_concurrency.processutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:50:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:50:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:50:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/896974902' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.930 243456 DEBUG oslo_concurrency.processutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.935 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 28 04:50:59 np0005634017 nova_compute[243452]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.935 243456 INFO nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] kernel doesn't support AMD SEV#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.936 243456 DEBUG nova.compute.provider_tree [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.936 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.990 243456 DEBUG nova.scheduler.client.report [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Updated inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.990 243456 DEBUG nova.compute.provider_tree [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Updating resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb 28 04:50:59 np0005634017 nova_compute[243452]: 2026-02-28 09:50:59.990 243456 DEBUG nova.compute.provider_tree [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 04:51:00 np0005634017 nova_compute[243452]: 2026-02-28 09:51:00.077 243456 DEBUG nova.compute.provider_tree [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Updating resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb 28 04:51:00 np0005634017 nova_compute[243452]: 2026-02-28 09:51:00.105 243456 DEBUG nova.compute.resource_tracker [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 04:51:00 np0005634017 nova_compute[243452]: 2026-02-28 09:51:00.105 243456 DEBUG oslo_concurrency.lockutils [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:51:00 np0005634017 nova_compute[243452]: 2026-02-28 09:51:00.106 243456 DEBUG nova.service [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Feb 28 04:51:00 np0005634017 nova_compute[243452]: 2026-02-28 09:51:00.246 243456 DEBUG nova.service [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Feb 28 04:51:00 np0005634017 nova_compute[243452]: 2026-02-28 09:51:00.247 243456 DEBUG nova.servicegroup.drivers.db [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Feb 28 04:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:51:01 np0005634017 podman[243797]: 2026-02-28 09:51:01.136765681 +0000 UTC m=+0.071180675 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:51:01 np0005634017 podman[243796]: 2026-02-28 09:51:01.171216259 +0000 UTC m=+0.105420107 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 04:51:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:19 np0005634017 nova_compute[243452]: 2026-02-28 09:51:19.249 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:51:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:19 np0005634017 nova_compute[243452]: 2026-02-28 09:51:19.757 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948564881' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948564881' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2762285147' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 04:51:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2762285147' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 04:51:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 04:51:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3988511342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 04:51:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 04:51:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3988511342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 04:51:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:51:28
Feb 28 04:51:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:51:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:51:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data']
Feb 28 04:51:28 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:51:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:51:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:32 np0005634017 podman[243844]: 2026-02-28 09:51:32.138047619 +0000 UTC m=+0.068880508 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 28 04:51:32 np0005634017 podman[243843]: 2026-02-28 09:51:32.148168442 +0000 UTC m=+0.088627480 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 04:51:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:51:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.091 243456 INFO nova.virt.libvirt.driver [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Deleting instance files /var/lib/nova/instances/a2067bb3-321d-4043-a424-a26d3fb8fcb3_del#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.092 243456 INFO nova.virt.libvirt.driver [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Deletion of /var/lib/nova/instances/a2067bb3-321d-4043-a424-a26d3fb8fcb3_del complete#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.176 243456 DEBUG nova.virt.libvirt.host [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.176 243456 INFO nova.virt.libvirt.host [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] UEFI support detected#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.178 243456 INFO nova.compute.manager [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.179 243456 DEBUG oslo.service.loopingcall [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.179 243456 DEBUG nova.compute.manager [-] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.179 243456 DEBUG nova.network.neutron [-] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.337 243456 DEBUG nova.network.neutron [-] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.351 243456 DEBUG nova.network.neutron [-] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:58:55 np0005634017 rsyslogd[1017]: imjournal: 3547 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.367 243456 INFO nova.compute.manager [-] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Took 0.19 seconds to deallocate network for instance.#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.420 243456 DEBUG oslo_concurrency.lockutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.421 243456 DEBUG oslo_concurrency.lockutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:58:55 np0005634017 nova_compute[243452]: 2026-02-28 09:58:55.481 243456 DEBUG oslo_concurrency.processutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:58:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 92 op/s
Feb 28 04:58:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:58:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783723940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.028 243456 DEBUG oslo_concurrency.processutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.037 243456 DEBUG nova.compute.provider_tree [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.081 243456 ERROR nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] [req-51e513dd-8eb1-4ae9-b915-4bb4f3fe2df3] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 1ab5ec33-a817-4b85-91a2-974557eeabfb.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-51e513dd-8eb1-4ae9-b915-4bb4f3fe2df3"}]}#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.113 243456 DEBUG nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.137 243456 DEBUG nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.138 243456 DEBUG nova.compute.provider_tree [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.155 243456 DEBUG nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.194 243456 DEBUG nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.240 243456 DEBUG oslo_concurrency.processutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:58:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:58:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235364672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.767 243456 DEBUG oslo_concurrency.processutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.774 243456 DEBUG nova.compute.provider_tree [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.809 243456 DEBUG nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updated inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.809 243456 DEBUG nova.compute.provider_tree [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updating resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.810 243456 DEBUG nova.compute.provider_tree [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.834 243456 DEBUG oslo_concurrency.lockutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.859 243456 INFO nova.scheduler.client.report [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Deleted allocations for instance a2067bb3-321d-4043-a424-a26d3fb8fcb3#033[00m
Feb 28 04:58:56 np0005634017 nova_compute[243452]: 2026-02-28 09:58:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-e519582d-8934-490f-94a8-ac38121cad66 9391362a054c4974bb439e4a05ed9fcb 915f20b5394f4d19aba77e35edffa5b5 - - default default] Lock "a2067bb3-321d-4043-a424-a26d3fb8fcb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:58:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:58:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 165 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Feb 28 04:58:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:58:57.838 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:58:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:58:57.839 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:58:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:58:57.839 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:58:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 165 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Feb 28 04:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.018 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.019 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.045 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.109 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.109 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.115 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.116 243456 INFO nova.compute.claims [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Claim successful on node compute-0.ctlplane.example.com
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.216 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 04:59:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 119 op/s
Feb 28 04:59:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437406110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.752 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.758 243456 DEBUG nova.compute.provider_tree [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.777 243456 DEBUG nova.scheduler.client.report [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.802 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.803 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.856 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.857 243456 DEBUG nova.network.neutron [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.887 243456 INFO nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 04:59:01 np0005634017 nova_compute[243452]: 2026-02-28 09:59:01.906 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.016 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.018 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.019 243456 INFO nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Creating image(s)
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.057 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.093 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.129 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.135 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.214 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.216 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.217 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.217 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.244 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.247 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.389 243456 WARNING oslo_policy.policy [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.389 243456 WARNING oslo_policy.policy [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.392 243456 DEBUG nova.policy [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1fe17532ba42f08284df768e59361b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8d11cb18c7a4f0bb17a76974faf1f67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.594703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272742594791, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 525, "num_deletes": 250, "total_data_size": 496507, "memory_usage": 505920, "flush_reason": "Manual Compaction"}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272742693006, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 371823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18431, "largest_seqno": 18955, "table_properties": {"data_size": 369128, "index_size": 731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7113, "raw_average_key_size": 19, "raw_value_size": 363463, "raw_average_value_size": 1015, "num_data_blocks": 33, "num_entries": 358, "num_filter_entries": 358, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272712, "oldest_key_time": 1772272712, "file_creation_time": 1772272742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 98486 microseconds, and 2413 cpu microseconds.
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.693204) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 371823 bytes OK
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.693237) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.717763) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.717819) EVENT_LOG_v1 {"time_micros": 1772272742717811, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.717849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 493482, prev total WAL file size 493482, number of live WAL files 2.
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.718428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(363KB)], [41(9374KB)]
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272742718513, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9971245, "oldest_snapshot_seqno": -1}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4302 keys, 6759744 bytes, temperature: kUnknown
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272742809994, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6759744, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6731351, "index_size": 16507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 104968, "raw_average_key_size": 24, "raw_value_size": 6653922, "raw_average_value_size": 1546, "num_data_blocks": 694, "num_entries": 4302, "num_filter_entries": 4302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772272742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.810455) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6759744 bytes
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.813413) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.8 rd, 73.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(45.0) write-amplify(18.2) OK, records in: 4805, records dropped: 503 output_compression: NoCompression
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.813447) EVENT_LOG_v1 {"time_micros": 1772272742813431, "job": 20, "event": "compaction_finished", "compaction_time_micros": 91640, "compaction_time_cpu_micros": 28115, "output_level": 6, "num_output_files": 1, "total_output_size": 6759744, "num_input_records": 4805, "num_output_records": 4302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272742813658, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272742814976, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.718281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.815044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.815051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.815054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.815057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:02 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:02.815061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.848 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:02 np0005634017 nova_compute[243452]: 2026-02-28 09:59:02.929 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] resizing rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.022 243456 DEBUG nova.objects.instance [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lazy-loading 'migration_context' on Instance uuid 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.036 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.037 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Ensure instance console log exists: /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.038 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.038 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.039 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 04:59:03 np0005634017 nova_compute[243452]: 2026-02-28 09:59:03.638 243456 DEBUG nova.network.neutron [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Successfully created port: 281248dd-cb66-437d-8420-a90b5b5a5b86 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 04:59:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 110 op/s
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.292 243456 DEBUG nova.network.neutron [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Successfully updated port: 281248dd-cb66-437d-8420-a90b5b5a5b86 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.309 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.310 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquired lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.310 243456 DEBUG nova.network.neutron [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.359 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.360 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.360 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.361 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.361 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.383 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.384 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.705 243456 DEBUG nova.network.neutron [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 04:59:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110491509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.895 243456 DEBUG nova.compute.manager [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-changed-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.895 243456 DEBUG nova.compute.manager [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Refreshing instance network info cache due to event network-changed-281248dd-cb66-437d-8420-a90b5b5a5b86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.896 243456 DEBUG oslo_concurrency.lockutils [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:59:04 np0005634017 nova_compute[243452]: 2026-02-28 09:59:04.908 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.077 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.078 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4998MB free_disk=59.987968852743506GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.078 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.079 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.221 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.221 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.222 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.277 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 188 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 100 op/s
Feb 28 04:59:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385411218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.815 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.820 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.836 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.863 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.864 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.895 243456 DEBUG nova.network.neutron [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Updating instance_info_cache with network_info: [{"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.922 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Releasing lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.922 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Instance network_info: |[{"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.922 243456 DEBUG oslo_concurrency.lockutils [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.923 243456 DEBUG nova.network.neutron [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Refreshing network info cache for port 281248dd-cb66-437d-8420-a90b5b5a5b86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.926 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Start _get_guest_xml network_info=[{"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.931 243456 WARNING nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.936 243456 DEBUG nova.virt.libvirt.host [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.937 243456 DEBUG nova.virt.libvirt.host [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.946 243456 DEBUG nova.virt.libvirt.host [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.946 243456 DEBUG nova.virt.libvirt.host [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.947 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.947 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='294066627',id=18,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-112727790',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.948 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.948 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.949 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.949 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.950 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.950 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.950 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.951 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.951 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.951 243456 DEBUG nova.virt.hardware [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 04:59:05 np0005634017 nova_compute[243452]: 2026-02-28 09:59:05.958 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.045 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "690358a2-fced-4bc3-8a76-d85e5bfdc727" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.046 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "690358a2-fced-4bc3-8a76-d85e5bfdc727" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.070 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.139 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.140 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.150 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.151 243456 INFO nova.compute.claims [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.289 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790752973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.560 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.594 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.600 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.819 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.820 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 04:59:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126052834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.882 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.887 243456 DEBUG nova.compute.provider_tree [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.903 243456 DEBUG nova.scheduler.client.report [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.922 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.923 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.969 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.969 243456 DEBUG nova.network.neutron [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 04:59:06 np0005634017 nova_compute[243452]: 2026-02-28 09:59:06.989 243456 INFO nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.007 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.088 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.089 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.089 243456 INFO nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Creating image(s)#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.111 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.135 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.158 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.161 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857282618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.185 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.187 243456 DEBUG nova.virt.libvirt.vif [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T09:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-170965901',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-170965901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-170965901',id=2,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHI7c4keuyySp3JCKwGcWG3wi5bkL/HjgXw+tKRvGg2QtCSRW0FIRsi5MSLftPso6IldV2AEMqsGSDx0hY586rAhWIJtbGmY0OMNEXz4pQjFyQmOBvdLhFcG1jQGXlgziQ==',key_name='tempest-keypair-380335806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d11cb18c7a4f0bb17a76974faf1f67',ramdisk_id='',reservation_id='r-0xni0k0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-390490490',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T09:59:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1fe17532ba42f08284df768e59361b',uuid=345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.188 243456 DEBUG nova.network.os_vif_util [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converting VIF {"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.189 243456 DEBUG nova.network.os_vif_util [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.192 243456 DEBUG nova.objects.instance [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.207 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <uuid>345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2</uuid>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <name>instance-00000002</name>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-170965901</nova:name>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 09:59:05</nova:creationTime>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-112727790">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:user uuid="1e1fe17532ba42f08284df768e59361b">tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member</nova:user>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:project uuid="d8d11cb18c7a4f0bb17a76974faf1f67">tempest-ServersWithSpecificFlavorTestJSON-390490490</nova:project>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <nova:port uuid="281248dd-cb66-437d-8420-a90b5b5a5b86">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <system>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <entry name="serial">345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2</entry>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <entry name="uuid">345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2</entry>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </system>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <os>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </clock>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk.config">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:94:f1:ca"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <target dev="tap281248dd-cb"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </interface>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/console.log" append="off"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </serial>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <video>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 04:59:07 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 04:59:07 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:59:07 np0005634017 nova_compute[243452]: </domain>
Feb 28 04:59:07 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.207 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Preparing to wait for external event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.208 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.208 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.208 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.209 243456 DEBUG nova.virt.libvirt.vif [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T09:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-170965901',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-170965901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-170965901',id=2,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHI7c4keuyySp3JCKwGcWG3wi5bkL/HjgXw+tKRvGg2QtCSRW0FIRsi5MSLftPso6IldV2AEMqsGSDx0hY586rAhWIJtbGmY0OMNEXz4pQjFyQmOBvdLhFcG1jQGXlgziQ==',key_name='tempest-keypair-380335806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d11cb18c7a4f0bb17a76974faf1f67',ramdisk_id='',reservation_id='r-0xni0k0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-390490490',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T09:59:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1fe17532ba42f08284df768e59361b',uuid=345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.209 243456 DEBUG nova.network.os_vif_util [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converting VIF {"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.210 243456 DEBUG nova.network.os_vif_util [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.210 243456 DEBUG os_vif [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.272 243456 DEBUG ovsdbapp.backend.ovs_idl [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.273 243456 DEBUG ovsdbapp.backend.ovs_idl [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.273 243456 DEBUG ovsdbapp.backend.ovs_idl [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.273 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.274 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.275 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.275 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.297 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.300 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.330 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.330 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.332 243456 INFO oslo.privsep.daemon [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpjjdj13mf/privsep.sock']#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.481 243456 DEBUG nova.network.neutron [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.482 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.549 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.613 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] resizing rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 04:59:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 224 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 95 op/s
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.699 243456 DEBUG nova.objects.instance [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lazy-loading 'migration_context' on Instance uuid 690358a2-fced-4bc3-8a76-d85e5bfdc727 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.720 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.720 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Ensure instance console log exists: /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.721 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.721 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.721 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.723 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.727 243456 WARNING nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.732 243456 DEBUG nova.virt.libvirt.host [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.733 243456 DEBUG nova.virt.libvirt.host [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.735 243456 DEBUG nova.virt.libvirt.host [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.735 243456 DEBUG nova.virt.libvirt.host [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.736 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.736 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.737 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.737 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.737 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.737 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.737 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.738 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.738 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.738 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.738 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.739 243456 DEBUG nova.virt.hardware [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 04:59:07 np0005634017 nova_compute[243452]: 2026-02-28 09:59:07.742 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352920651' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.269 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.304 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.312 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.338 243456 DEBUG nova.network.neutron [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Updated VIF entry in instance network info cache for port 281248dd-cb66-437d-8420-a90b5b5a5b86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.339 243456 DEBUG nova.network.neutron [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Updating instance_info_cache with network_info: [{"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.358 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.365 243456 DEBUG oslo_concurrency.lockutils [req-a723c219-ca4b-441c-8458-a0aad29ff8a6 req-8d29f547-9ca5-448f-8bda-3752dfca1651 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.550 243456 INFO oslo.privsep.daemon [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.393 250558 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.403 250558 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.407 250558 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.407 250558 INFO oslo.privsep.daemon [-] privsep daemon running as pid 250558#033[00m
Feb 28 04:59:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633364071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.913 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.914 243456 DEBUG nova.objects.instance [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690358a2-fced-4bc3-8a76-d85e5bfdc727 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.933 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] End _get_guest_xml xml=<domain type="kvm">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <uuid>690358a2-fced-4bc3-8a76-d85e5bfdc727</uuid>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <name>instance-00000003</name>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1317112794</nova:name>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 09:59:07</nova:creationTime>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:user uuid="f285b05675cf49d8812463f1f0d35eb3">tempest-DeleteServersAdminTestJSON-1976390746-project-member</nova:user>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <nova:project uuid="056bbee826f042bc922f04eebd9ccf55">tempest-DeleteServersAdminTestJSON-1976390746</nova:project>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <system>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <entry name="serial">690358a2-fced-4bc3-8a76-d85e5bfdc727</entry>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <entry name="uuid">690358a2-fced-4bc3-8a76-d85e5bfdc727</entry>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </system>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <os>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </clock>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/690358a2-fced-4bc3-8a76-d85e5bfdc727_disk">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/690358a2-fced-4bc3-8a76-d85e5bfdc727_disk.config">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/console.log" append="off"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </serial>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <video>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 04:59:08 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 04:59:08 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:59:08 np0005634017 nova_compute[243452]: </domain>
Feb 28 04:59:08 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.976 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.977 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.977 243456 INFO nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Using config drive#033[00m
Feb 28 04:59:08 np0005634017 nova_compute[243452]: 2026-02-28 09:59:08.996 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.040 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap281248dd-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.040 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap281248dd-cb, col_values=(('external_ids', {'iface-id': '281248dd-cb66-437d-8420-a90b5b5a5b86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:f1:ca', 'vm-uuid': '345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:09 np0005634017 NetworkManager[49805]: <info>  [1772272749.0433] manager: (tap281248dd-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.048 243456 INFO os_vif [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb')#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.092 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.094 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.095 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No VIF found with MAC fa:16:3e:94:f1:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.096 243456 INFO nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Using config drive#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.117 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.142 243456 INFO nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Creating config drive at /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/disk.config#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.145 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6f0rrh1j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.261 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6f0rrh1j" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.294 243456 DEBUG nova.storage.rbd_utils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.299 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/disk.config 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.413 243456 DEBUG oslo_concurrency.processutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/disk.config 690358a2-fced-4bc3-8a76-d85e5bfdc727_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.414 243456 INFO nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Deleting local config drive /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727/disk.config because it was imported into RBD.#033[00m
Feb 28 04:59:09 np0005634017 systemd-machined[209480]: New machine qemu-2-instance-00000003.
Feb 28 04:59:09 np0005634017 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.623 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272734.621894, a2067bb3-321d-4043-a424-a26d3fb8fcb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.624 243456 INFO nova.compute.manager [-] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] VM Stopped (Lifecycle Event)#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.641 243456 DEBUG nova.compute.manager [None req-26294c54-062b-48f4-9bd8-bf9fd1f708b3 - - - - - -] [instance: a2067bb3-321d-4043-a424-a26d3fb8fcb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 224 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 MiB/s wr, 40 op/s
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.721 243456 INFO nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Creating config drive at /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/disk.config#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.728 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp07hwusq0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.853 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp07hwusq0" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.887 243456 DEBUG nova.storage.rbd_utils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.892 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/disk.config 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.926 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272749.9256434, 690358a2-fced-4bc3-8a76-d85e5bfdc727 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.927 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] VM Resumed (Lifecycle Event)
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.931 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.931 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.935 243456 INFO nova.virt.libvirt.driver [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Instance spawned successfully.
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.936 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.953 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.967 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.967 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.968 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.969 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.971 243456 DEBUG nova.virt.libvirt.driver [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:09 np0005634017 nova_compute[243452]: 2026-02-28 09:59:09.976 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.002 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.003 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272749.9272716, 690358a2-fced-4bc3-8a76-d85e5bfdc727 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.003 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] VM Started (Lifecycle Event)
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.025 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.029 243456 INFO nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Took 2.94 seconds to spawn the instance on the hypervisor.
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.030 243456 DEBUG nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.031 243456 DEBUG oslo_concurrency.processutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/disk.config 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.031 243456 INFO nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Deleting local config drive /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2/disk.config because it was imported into RBD.
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.035 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.067 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 04:59:10 np0005634017 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 28 04:59:10 np0005634017 kernel: tap281248dd-cb: entered promiscuous mode
Feb 28 04:59:10 np0005634017 systemd-udevd[250719]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:59:10 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:10Z|00027|binding|INFO|Claiming lport 281248dd-cb66-437d-8420-a90b5b5a5b86 for this chassis.
Feb 28 04:59:10 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:10Z|00028|binding|INFO|281248dd-cb66-437d-8420-a90b5b5a5b86: Claiming fa:16:3e:94:f1:ca 10.100.0.3
Feb 28 04:59:10 np0005634017 NetworkManager[49805]: <info>  [1772272750.1043] manager: (tap281248dd-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.114 243456 INFO nova.compute.manager [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Took 4.00 seconds to build instance.
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.118 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:f1:ca 10.100.0.3'], port_security=['fa:16:3e:94:f1:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d11cb18c7a4f0bb17a76974faf1f67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c29529b-835f-4326-814a-8256c4b702f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f040037-a4e0-4f34-9b2b-1599fa7f19e6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=281248dd-cb66-437d-8420-a90b5b5a5b86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.120 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 281248dd-cb66-437d-8420-a90b5b5a5b86 in datapath 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 bound to our chassis
Feb 28 04:59:10 np0005634017 NetworkManager[49805]: <info>  [1772272750.1217] device (tap281248dd-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 04:59:10 np0005634017 NetworkManager[49805]: <info>  [1772272750.1227] device (tap281248dd-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.124 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.125 156681 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpt25544uf/privsep.sock']
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.133 243456 DEBUG oslo_concurrency.lockutils [None req-5c462aa2-1e65-4938-8681-2c97a3a8b3fe f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "690358a2-fced-4bc3-8a76-d85e5bfdc727" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:10 np0005634017 systemd-machined[209480]: New machine qemu-3-instance-00000002.
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:10 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:10Z|00029|binding|INFO|Setting lport 281248dd-cb66-437d-8420-a90b5b5a5b86 ovn-installed in OVS
Feb 28 04:59:10 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:10Z|00030|binding|INFO|Setting lport 281248dd-cb66-437d-8420-a90b5b5a5b86 up in Southbound
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:10 np0005634017 systemd[1]: Started Virtual Machine qemu-3-instance-00000002.
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Acquiring lock "690358a2-fced-4bc3-8a76-d85e5bfdc727" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.810 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lock "690358a2-fced-4bc3-8a76-d85e5bfdc727" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.810 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Acquiring lock "690358a2-fced-4bc3-8a76-d85e5bfdc727-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.810 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lock "690358a2-fced-4bc3-8a76-d85e5bfdc727-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.811 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lock "690358a2-fced-4bc3-8a76-d85e5bfdc727-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.812 243456 INFO nova.compute.manager [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Terminating instance
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.813 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Acquiring lock "refresh_cache-690358a2-fced-4bc3-8a76-d85e5bfdc727" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.813 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Acquired lock "refresh_cache-690358a2-fced-4bc3-8a76-d85e5bfdc727" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 04:59:10 np0005634017 nova_compute[243452]: 2026-02-28 09:59:10.814 243456 DEBUG nova.network.neutron [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.878 156681 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.878 156681 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpt25544uf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.741 250787 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.747 250787 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.751 250787 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.751 250787 INFO oslo.privsep.daemon [-] privsep daemon running as pid 250787
Feb 28 04:59:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:10.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0163665a-3b49-43eb-a531-05577c97f398]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.272 243456 DEBUG nova.compute.manager [req-4ef6c32b-2dc4-455b-b52e-dd3c4d71223c req-2121ee36-5b39-4a4d-a2fc-be5e42556ac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.273 243456 DEBUG oslo_concurrency.lockutils [req-4ef6c32b-2dc4-455b-b52e-dd3c4d71223c req-2121ee36-5b39-4a4d-a2fc-be5e42556ac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.273 243456 DEBUG oslo_concurrency.lockutils [req-4ef6c32b-2dc4-455b-b52e-dd3c4d71223c req-2121ee36-5b39-4a4d-a2fc-be5e42556ac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.273 243456 DEBUG oslo_concurrency.lockutils [req-4ef6c32b-2dc4-455b-b52e-dd3c4d71223c req-2121ee36-5b39-4a4d-a2fc-be5e42556ac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.274 243456 DEBUG nova.compute.manager [req-4ef6c32b-2dc4-455b-b52e-dd3c4d71223c req-2121ee36-5b39-4a4d-a2fc-be5e42556ac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Processing event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.318 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272751.3179245, 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.319 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] VM Started (Lifecycle Event)
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.326 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.331 243456 DEBUG nova.network.neutron [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.339 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.346 243456 INFO nova.virt.libvirt.driver [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Instance spawned successfully.
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.347 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.372 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.391 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.399 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.400 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.402 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.403 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.404 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.405 243456 DEBUG nova.virt.libvirt.driver [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.419 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272751.3181176, 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] VM Paused (Lifecycle Event)
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.441 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.447 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272751.3319101, 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.447 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] VM Resumed (Lifecycle Event)
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.575 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.580 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.586 243456 INFO nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Took 9.57 seconds to spawn the instance on the hypervisor.
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.587 243456 DEBUG nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.598 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.644 243456 INFO nova.compute.manager [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Took 10.56 seconds to build instance.
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.662 243456 DEBUG oslo_concurrency.lockutils [None req-ae49a3a0-be17-4b4d-ab13-19751be5ca36 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 246 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.715 243456 DEBUG nova.network.neutron [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.730 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Releasing lock "refresh_cache-690358a2-fced-4bc3-8a76-d85e5bfdc727" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.731 243456 DEBUG nova.compute.manager [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 04:59:11 np0005634017 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 28 04:59:11 np0005634017 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 2.269s CPU time.
Feb 28 04:59:11 np0005634017 systemd-machined[209480]: Machine qemu-2-instance-00000003 terminated.
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.955 243456 INFO nova.virt.libvirt.driver [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Instance destroyed successfully.#033[00m
Feb 28 04:59:11 np0005634017 nova_compute[243452]: 2026-02-28 09:59:11.956 243456 DEBUG nova.objects.instance [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lazy-loading 'resources' on Instance uuid 690358a2-fced-4bc3-8a76-d85e5bfdc727 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:12.309 250787 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:12.310 250787 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:12.310 250787 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.406 243456 INFO nova.virt.libvirt.driver [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Deleting instance files /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727_del#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.407 243456 INFO nova.virt.libvirt.driver [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Deletion of /var/lib/nova/instances/690358a2-fced-4bc3-8a76-d85e5bfdc727_del complete#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.483 243456 INFO nova.compute.manager [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.484 243456 DEBUG oslo.service.loopingcall [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.485 243456 DEBUG nova.compute.manager [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.485 243456 DEBUG nova.network.neutron [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 04:59:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.708 243456 DEBUG nova.network.neutron [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.723 243456 DEBUG nova.network.neutron [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.735 243456 INFO nova.compute.manager [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Took 0.25 seconds to deallocate network for instance.#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.781 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.782 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:12 np0005634017 nova_compute[243452]: 2026-02-28 09:59:12.850 243456 DEBUG oslo_concurrency.processutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692044834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.409 243456 DEBUG oslo_concurrency.processutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.415 243456 DEBUG nova.compute.provider_tree [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.439 243456 DEBUG nova.scheduler.client.report [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.462 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73cc0ee2-1f46-4f4b-bc15-f1688a2d3f22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.476 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ff5ef4b-e1 in ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.478 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ff5ef4b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.478 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7225c1d3-40d1-4400-bcdc-7c373031f0c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.484 243456 INFO nova.scheduler.client.report [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Deleted allocations for instance 690358a2-fced-4bc3-8a76-d85e5bfdc727#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.488 243456 DEBUG nova.compute.manager [req-ca913056-0472-4d93-8316-7356a0bc02ac req-69d81874-ebd2-4842-b45f-be81c93a4f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.488 243456 DEBUG oslo_concurrency.lockutils [req-ca913056-0472-4d93-8316-7356a0bc02ac req-69d81874-ebd2-4842-b45f-be81c93a4f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.488 243456 DEBUG oslo_concurrency.lockutils [req-ca913056-0472-4d93-8316-7356a0bc02ac req-69d81874-ebd2-4842-b45f-be81c93a4f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.489 243456 DEBUG oslo_concurrency.lockutils [req-ca913056-0472-4d93-8316-7356a0bc02ac req-69d81874-ebd2-4842-b45f-be81c93a4f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.489 243456 DEBUG nova.compute.manager [req-ca913056-0472-4d93-8316-7356a0bc02ac req-69d81874-ebd2-4842-b45f-be81c93a4f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] No waiting events found dispatching network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.489 243456 WARNING nova.compute.manager [req-ca913056-0472-4d93-8316-7356a0bc02ac req-69d81874-ebd2-4842-b45f-be81c93a4f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received unexpected event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 for instance with vm_state active and task_state None.#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.523 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0813e5-42a9-4cd3-b3e8-2ae3ef0e5019]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.541 243456 DEBUG oslo_concurrency.lockutils [None req-78bdf230-d865-440e-b20b-3473afef5447 59d0f54f19814637812288345cb14d39 4e458b86a66849d59e5e6ae46a2356d4 - - default default] Lock "690358a2-fced-4bc3-8a76-d85e5bfdc727" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.551 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[545b3404-766c-4cab-b41b-798bb40cd301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5908] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5916] device (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <warn>  [1772272753.5917] device (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5924] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5927] device (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <warn>  [1772272753.5927] device (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5936] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5942] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5946] device (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 28 04:59:13 np0005634017 NetworkManager[49805]: <info>  [1772272753.5951] device (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.619 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3656a621-9c92-4e33-bc29-b2db714cccb8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:13.622 156681 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpbx_xkc3h/privsep.sock']#033[00m
Feb 28 04:59:13 np0005634017 nova_compute[243452]: 2026-02-28 09:59:13.686 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 222 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 28 04:59:14 np0005634017 nova_compute[243452]: 2026-02-28 09:59:14.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:14 np0005634017 nova_compute[243452]: 2026-02-28 09:59:14.204 243456 DEBUG nova.compute.manager [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-changed-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 04:59:14 np0005634017 nova_compute[243452]: 2026-02-28 09:59:14.205 243456 DEBUG nova.compute.manager [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Refreshing instance network info cache due to event network-changed-281248dd-cb66-437d-8420-a90b5b5a5b86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 04:59:14 np0005634017 nova_compute[243452]: 2026-02-28 09:59:14.205 243456 DEBUG oslo_concurrency.lockutils [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:59:14 np0005634017 nova_compute[243452]: 2026-02-28 09:59:14.206 243456 DEBUG oslo_concurrency.lockutils [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:59:14 np0005634017 nova_compute[243452]: 2026-02-28 09:59:14.206 243456 DEBUG nova.network.neutron [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Refreshing network info cache for port 281248dd-cb66-437d-8420-a90b5b5a5b86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.252 156681 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.253 156681 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpbx_xkc3h/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.135 250887 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.140 250887 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.144 250887 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.145 250887 INFO oslo.privsep.daemon [-] privsep daemon running as pid 250887#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.259 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[80eacc36-97d6-43a2-ac2b-5a5e4c6fa6ad]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.805 250887 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.805 250887 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:14.805 250887 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:15 np0005634017 podman[250893]: 2026-02-28 09:59:15.128942856 +0000 UTC m=+0.058766530 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 04:59:15 np0005634017 podman[250892]: 2026-02-28 09:59:15.15496537 +0000 UTC m=+0.084952769 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.322 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4e017b-64ed-4ba0-93d7-7bd838d1c011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c23d0e7-c221-4986-bc1f-b044f0ba6a27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 NetworkManager[49805]: <info>  [1772272755.3430] manager: (tap0ff5ef4b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.364 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[efde0138-3b53-4e8e-8771-3e607af5612d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 systemd-udevd[250939]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.370 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[50983448-f22b-4360-a9b6-deb86a34304b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 NetworkManager[49805]: <info>  [1772272755.3921] device (tap0ff5ef4b-e0): carrier: link connected
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.396 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6473ec-e7f3-48d3-888c-75b3acb6ef2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.409 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b79eb551-e21e-4676-ad34-fba02fbd79f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ff5ef4b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:f6:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426267, 'reachable_time': 24581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250957, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.421 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37ed4f00-404d-4de5-a591-39fa0171545c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:f61d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426267, 'tstamp': 426267}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250958, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.433 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[101a7df3-1f99-4dbb-8ba3-41fe7f448a31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ff5ef4b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:f6:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426267, 'reachable_time': 24581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250959, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.456 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc602ccb-d186-4a64-ab34-4e8d2dcc0521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.521 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5882b8-cb4c-4c70-978a-75280db7cf9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.523 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff5ef4b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.524 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.524 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ff5ef4b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:15 np0005634017 nova_compute[243452]: 2026-02-28 09:59:15.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:15 np0005634017 kernel: tap0ff5ef4b-e0: entered promiscuous mode
Feb 28 04:59:15 np0005634017 NetworkManager[49805]: <info>  [1772272755.5265] manager: (tap0ff5ef4b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.530 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ff5ef4b-e0, col_values=(('external_ids', {'iface-id': 'cac40e1f-3378-46e9-a01a-1794bf29ef6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:15 np0005634017 nova_compute[243452]: 2026-02-28 09:59:15.531 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:15 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:15Z|00031|binding|INFO|Releasing lport cac40e1f-3378-46e9-a01a-1794bf29ef6c from this chassis (sb_readonly=0)
Feb 28 04:59:15 np0005634017 nova_compute[243452]: 2026-02-28 09:59:15.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.534 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 04:59:15 np0005634017 nova_compute[243452]: 2026-02-28 09:59:15.535 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.535 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b0463ada-1f60-44be-abff-001223f00988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.536 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.pid.haproxy
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 04:59:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:15.538 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'env', 'PROCESS_TAG=haproxy-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 04:59:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 225 op/s
Feb 28 04:59:15 np0005634017 podman[250993]: 2026-02-28 09:59:15.896980635 +0000 UTC m=+0.026188070 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 04:59:16 np0005634017 podman[250993]: 2026-02-28 09:59:16.103017351 +0000 UTC m=+0.232224766 container create 07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:59:16 np0005634017 systemd[1]: Started libpod-conmon-07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651.scope.
Feb 28 04:59:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba7d2d03f737867a40efd78da35d700803076aa386aabc23bc069a79b8dd0cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:16 np0005634017 podman[250993]: 2026-02-28 09:59:16.217918025 +0000 UTC m=+0.347125460 container init 07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 04:59:16 np0005634017 podman[250993]: 2026-02-28 09:59:16.223993286 +0000 UTC m=+0.353200701 container start 07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:59:16 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [NOTICE]   (251012) : New worker (251014) forked
Feb 28 04:59:16 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [NOTICE]   (251012) : Loading success.
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.301 243456 DEBUG nova.network.neutron [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Updated VIF entry in instance network info cache for port 281248dd-cb66-437d-8420-a90b5b5a5b86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.302 243456 DEBUG nova.network.neutron [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Updating instance_info_cache with network_info: [{"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.323 243456 DEBUG oslo_concurrency.lockutils [req-4508f1c2-7e5c-4874-9521-68e46ad8ccd4 req-eefc8760-d1cc-4e62-bd24-41f8148cf02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.436 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "65ac3a64-52da-4513-a78c-8af927a40e61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.436 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "65ac3a64-52da-4513-a78c-8af927a40e61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.451 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.516 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.517 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.525 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.526 243456 INFO nova.compute.claims [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 04:59:16 np0005634017 nova_compute[243452]: 2026-02-28 09:59:16.656 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422799550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.213 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.220 243456 DEBUG nova.compute.provider_tree [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.235 243456 DEBUG nova.scheduler.client.report [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.257 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.257 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.305 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.306 243456 DEBUG nova.network.neutron [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.327 243456 INFO nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.355 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.439 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.441 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.441 243456 INFO nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Creating image(s)#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.464 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.492 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.520 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.524 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.602 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.603 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.604 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.604 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.637 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.645 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 65ac3a64-52da-4513-a78c-8af927a40e61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.682 243456 DEBUG nova.network.neutron [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.682 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 04:59:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 210 op/s
Feb 28 04:59:17 np0005634017 nova_compute[243452]: 2026-02-28 09:59:17.984 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 65ac3a64-52da-4513-a78c-8af927a40e61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.074 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] resizing rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.167 243456 DEBUG nova.objects.instance [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lazy-loading 'migration_context' on Instance uuid 65ac3a64-52da-4513-a78c-8af927a40e61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.182 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.182 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Ensure instance console log exists: /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.182 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.183 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.183 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.185 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.189 243456 WARNING nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.194 243456 DEBUG nova.virt.libvirt.host [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.194 243456 DEBUG nova.virt.libvirt.host [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.197 243456 DEBUG nova.virt.libvirt.host [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.197 243456 DEBUG nova.virt.libvirt.host [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.198 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.198 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.198 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.198 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.199 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.199 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.199 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.199 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.199 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.200 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.200 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.200 243456 DEBUG nova.virt.hardware [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.203 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564855126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.810 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.842 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:18 np0005634017 nova_compute[243452]: 2026-02-28 09:59:18.849 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198594036' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.363 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.366 243456 DEBUG nova.objects.instance [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65ac3a64-52da-4513-a78c-8af927a40e61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.390 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] End _get_guest_xml xml=<domain type="kvm">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <uuid>65ac3a64-52da-4513-a78c-8af927a40e61</uuid>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <name>instance-00000004</name>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1504036920</nova:name>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 09:59:18</nova:creationTime>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:user uuid="f285b05675cf49d8812463f1f0d35eb3">tempest-DeleteServersAdminTestJSON-1976390746-project-member</nova:user>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <nova:project uuid="056bbee826f042bc922f04eebd9ccf55">tempest-DeleteServersAdminTestJSON-1976390746</nova:project>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <entry name="serial">65ac3a64-52da-4513-a78c-8af927a40e61</entry>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <entry name="uuid">65ac3a64-52da-4513-a78c-8af927a40e61</entry>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/65ac3a64-52da-4513-a78c-8af927a40e61_disk">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/65ac3a64-52da-4513-a78c-8af927a40e61_disk.config">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/console.log" append="off"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 04:59:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 04:59:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:59:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 04:59:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.442 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.442 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.443 243456 INFO nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Using config drive#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.469 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.646 243456 INFO nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Creating config drive at /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/disk.config#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.660 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp23uvp07i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 656 KiB/s wr, 188 op/s
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.774 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp23uvp07i" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.794 243456 DEBUG nova.storage.rbd_utils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] rbd image 65ac3a64-52da-4513-a78c-8af927a40e61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.798 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/disk.config 65ac3a64-52da-4513-a78c-8af927a40e61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.928 243456 DEBUG oslo_concurrency.processutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/disk.config 65ac3a64-52da-4513-a78c-8af927a40e61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:19 np0005634017 nova_compute[243452]: 2026-02-28 09:59:19.929 243456 INFO nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Deleting local config drive /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61/disk.config because it was imported into RBD.#033[00m
Feb 28 04:59:20 np0005634017 systemd-machined[209480]: New machine qemu-4-instance-00000004.
Feb 28 04:59:20 np0005634017 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.313 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272760.3126597, 65ac3a64-52da-4513-a78c-8af927a40e61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] VM Resumed (Lifecycle Event)#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.317 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.318 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.322 243456 INFO nova.virt.libvirt.driver [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Instance spawned successfully.#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.323 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.339 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.346 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.350 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.351 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.351 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.352 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.352 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.353 243456 DEBUG nova.virt.libvirt.driver [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.378 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.379 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272760.3138154, 65ac3a64-52da-4513-a78c-8af927a40e61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.379 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] VM Started (Lifecycle Event)#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.400 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.404 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.407 243456 INFO nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Took 2.97 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.407 243456 DEBUG nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.427 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.456 243456 INFO nova.compute.manager [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Took 3.96 seconds to build instance.#033[00m
Feb 28 04:59:20 np0005634017 nova_compute[243452]: 2026-02-28 09:59:20.472 243456 DEBUG oslo_concurrency.lockutils [None req-28b7d5ed-9b90-4ea6-b13e-cd663d14ecae f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "65ac3a64-52da-4513-a78c-8af927a40e61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:21 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 28 04:59:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 225 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 214 op/s
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:22 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:22Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:f1:ca 10.100.0.3
Feb 28 04:59:22 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:22Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:f1:ca 10.100.0.3
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.645981058 +0000 UTC m=+0.052073831 container create f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 04:59:22 np0005634017 systemd[1]: Started libpod-conmon-f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9.scope.
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.617589307 +0000 UTC m=+0.023682170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.712 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "65ac3a64-52da-4513-a78c-8af927a40e61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.713 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "65ac3a64-52da-4513-a78c-8af927a40e61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.714 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "65ac3a64-52da-4513-a78c-8af927a40e61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.714 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "65ac3a64-52da-4513-a78c-8af927a40e61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.714 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "65ac3a64-52da-4513-a78c-8af927a40e61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.716 243456 INFO nova.compute.manager [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Terminating instance#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.717 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "refresh_cache-65ac3a64-52da-4513-a78c-8af927a40e61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.717 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquired lock "refresh_cache-65ac3a64-52da-4513-a78c-8af927a40e61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:59:22 np0005634017 nova_compute[243452]: 2026-02-28 09:59:22.717 243456 DEBUG nova.network.neutron [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 04:59:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:22 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:59:22 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.764462292 +0000 UTC m=+0.170555105 container init f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.771713357 +0000 UTC m=+0.177806140 container start f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.776187283 +0000 UTC m=+0.182280106 container attach f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bose, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 28 04:59:22 np0005634017 systemd[1]: libpod-f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9.scope: Deactivated successfully.
Feb 28 04:59:22 np0005634017 condescending_bose[251548]: 167 167
Feb 28 04:59:22 np0005634017 conmon[251548]: conmon f12cdd8f92415a2736c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9.scope/container/memory.events
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.778836088 +0000 UTC m=+0.184928861 container died f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 04:59:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-76d01d0e19e97b07e5051a8451364086edeb952f731b0b44f52f6693d4a8c78c-merged.mount: Deactivated successfully.
Feb 28 04:59:22 np0005634017 podman[251532]: 2026-02-28 09:59:22.813598149 +0000 UTC m=+0.219690922 container remove f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bose, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:59:22 np0005634017 systemd[1]: libpod-conmon-f12cdd8f92415a2736c1a3c21065015ccaf5cf5f5400b5c99158866a342d85e9.scope: Deactivated successfully.
Feb 28 04:59:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 28 04:59:22 np0005634017 podman[251571]: 2026-02-28 09:59:22.976908979 +0000 UTC m=+0.046912405 container create 25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hopper, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:59:23 np0005634017 systemd[1]: Started libpod-conmon-25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d.scope.
Feb 28 04:59:23 np0005634017 podman[251571]: 2026-02-28 09:59:22.958972843 +0000 UTC m=+0.028976289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:59:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520286b16e57b25aad0cd29341092958ab5dfb85f7408a090628082680bd620d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520286b16e57b25aad0cd29341092958ab5dfb85f7408a090628082680bd620d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520286b16e57b25aad0cd29341092958ab5dfb85f7408a090628082680bd620d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520286b16e57b25aad0cd29341092958ab5dfb85f7408a090628082680bd620d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520286b16e57b25aad0cd29341092958ab5dfb85f7408a090628082680bd620d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:23 np0005634017 podman[251571]: 2026-02-28 09:59:23.087976254 +0000 UTC m=+0.157979680 container init 25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:59:23 np0005634017 podman[251571]: 2026-02-28 09:59:23.100674813 +0000 UTC m=+0.170678249 container start 25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:59:23 np0005634017 podman[251571]: 2026-02-28 09:59:23.107794344 +0000 UTC m=+0.177797810 container attach 25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hopper, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 04:59:23 np0005634017 nova_compute[243452]: 2026-02-28 09:59:23.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:23 np0005634017 funny_hopper[251588]: --> passed data devices: 0 physical, 3 LVM
Feb 28 04:59:23 np0005634017 funny_hopper[251588]: --> All data devices are unavailable
Feb 28 04:59:23 np0005634017 systemd[1]: libpod-25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d.scope: Deactivated successfully.
Feb 28 04:59:23 np0005634017 podman[251571]: 2026-02-28 09:59:23.573789908 +0000 UTC m=+0.643793324 container died 25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hopper, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:59:23 np0005634017 nova_compute[243452]: 2026-02-28 09:59:23.575 243456 DEBUG nova.network.neutron [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 04:59:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-520286b16e57b25aad0cd29341092958ab5dfb85f7408a090628082680bd620d-merged.mount: Deactivated successfully.
Feb 28 04:59:23 np0005634017 podman[251571]: 2026-02-28 09:59:23.618226332 +0000 UTC m=+0.688229748 container remove 25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 04:59:23 np0005634017 systemd[1]: libpod-conmon-25a6c8ddfaed8ed2c8ebd1e6e3555954185a3d01cda42eb9d07512de33bc592d.scope: Deactivated successfully.
Feb 28 04:59:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 254 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.030274972 +0000 UTC m=+0.045880896 container create ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 04:59:24 np0005634017 nova_compute[243452]: 2026-02-28 09:59:24.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:24 np0005634017 systemd[1]: Started libpod-conmon-ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a.scope.
Feb 28 04:59:24 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.008035384 +0000 UTC m=+0.023641348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:59:24 np0005634017 nova_compute[243452]: 2026-02-28 09:59:24.116 243456 DEBUG nova.network.neutron [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.119552152 +0000 UTC m=+0.135158106 container init ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mcnulty, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.130363677 +0000 UTC m=+0.145969611 container start ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.135510823 +0000 UTC m=+0.151116777 container attach ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mcnulty, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:59:24 np0005634017 funny_mcnulty[251699]: 167 167
Feb 28 04:59:24 np0005634017 systemd[1]: libpod-ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a.scope: Deactivated successfully.
Feb 28 04:59:24 np0005634017 conmon[251699]: conmon ee93da9edc38cfa0260c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a.scope/container/memory.events
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.138881638 +0000 UTC m=+0.154487592 container died ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:59:24 np0005634017 nova_compute[243452]: 2026-02-28 09:59:24.138 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Releasing lock "refresh_cache-65ac3a64-52da-4513-a78c-8af927a40e61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 04:59:24 np0005634017 nova_compute[243452]: 2026-02-28 09:59:24.139 243456 DEBUG nova.compute.manager [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 04:59:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8d495999a0083f2cdae6b51d0c197f5449c996e6da8a34af7e7a3b7aeda2b854-merged.mount: Deactivated successfully.
Feb 28 04:59:24 np0005634017 podman[251683]: 2026-02-28 09:59:24.195261909 +0000 UTC m=+0.210867833 container remove ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 04:59:24 np0005634017 systemd[1]: libpod-conmon-ee93da9edc38cfa0260ce3fea38ff73a2204317a281896e9f85dd4c5ac07985a.scope: Deactivated successfully.
Feb 28 04:59:24 np0005634017 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 28 04:59:24 np0005634017 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 4.118s CPU time.
Feb 28 04:59:24 np0005634017 systemd-machined[209480]: Machine qemu-4-instance-00000004 terminated.
Feb 28 04:59:24 np0005634017 nova_compute[243452]: 2026-02-28 09:59:24.365 243456 INFO nova.virt.libvirt.driver [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Instance destroyed successfully.
Feb 28 04:59:24 np0005634017 nova_compute[243452]: 2026-02-28 09:59:24.368 243456 DEBUG nova.objects.instance [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lazy-loading 'resources' on Instance uuid 65ac3a64-52da-4513-a78c-8af927a40e61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 04:59:24 np0005634017 podman[251724]: 2026-02-28 09:59:24.380536919 +0000 UTC m=+0.044737034 container create e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 04:59:24 np0005634017 systemd[1]: Started libpod-conmon-e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0.scope.
Feb 28 04:59:24 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23e63fd2e3c9382961e536e0c436a920b8e39f0e2ca92e9fffa579a4119c795/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23e63fd2e3c9382961e536e0c436a920b8e39f0e2ca92e9fffa579a4119c795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23e63fd2e3c9382961e536e0c436a920b8e39f0e2ca92e9fffa579a4119c795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23e63fd2e3c9382961e536e0c436a920b8e39f0e2ca92e9fffa579a4119c795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:24 np0005634017 podman[251724]: 2026-02-28 09:59:24.363045115 +0000 UTC m=+0.027245250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:59:24 np0005634017 podman[251724]: 2026-02-28 09:59:24.560304163 +0000 UTC m=+0.224504328 container init e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:59:24 np0005634017 podman[251724]: 2026-02-28 09:59:24.569127052 +0000 UTC m=+0.233327147 container start e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:59:24 np0005634017 podman[251724]: 2026-02-28 09:59:24.674987351 +0000 UTC m=+0.339187556 container attach e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 04:59:24 np0005634017 pensive_morse[251757]: {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:    "0": [
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:        {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "devices": [
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "/dev/loop3"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            ],
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_name": "ceph_lv0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_size": "21470642176",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "name": "ceph_lv0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "tags": {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cluster_name": "ceph",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.crush_device_class": "",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.encrypted": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.objectstore": "bluestore",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osd_id": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.type": "block",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.vdo": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.with_tpm": "0"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            },
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "type": "block",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "vg_name": "ceph_vg0"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:        }
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:    ],
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:    "1": [
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:        {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "devices": [
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "/dev/loop4"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            ],
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_name": "ceph_lv1",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_size": "21470642176",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "name": "ceph_lv1",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "tags": {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cluster_name": "ceph",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.crush_device_class": "",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.encrypted": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.objectstore": "bluestore",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osd_id": "1",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.type": "block",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.vdo": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.with_tpm": "0"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            },
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "type": "block",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "vg_name": "ceph_vg1"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:        }
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:    ],
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:    "2": [
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:        {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "devices": [
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "/dev/loop5"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            ],
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_name": "ceph_lv2",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_size": "21470642176",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "name": "ceph_lv2",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "tags": {
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cephx_lockbox_secret": "",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.cluster_name": "ceph",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.crush_device_class": "",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.encrypted": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.objectstore": "bluestore",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osd_id": "2",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.type": "block",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.vdo": "0",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:                "ceph.with_tpm": "0"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            },
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "type": "block",
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:            "vg_name": "ceph_vg2"
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:        }
Feb 28 04:59:24 np0005634017 pensive_morse[251757]:    ]
Feb 28 04:59:24 np0005634017 pensive_morse[251757]: }
Feb 28 04:59:24 np0005634017 systemd[1]: libpod-e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0.scope: Deactivated successfully.
Feb 28 04:59:24 np0005634017 podman[251724]: 2026-02-28 09:59:24.900718882 +0000 UTC m=+0.564919017 container died e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:59:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c23e63fd2e3c9382961e536e0c436a920b8e39f0e2ca92e9fffa579a4119c795-merged.mount: Deactivated successfully.
Feb 28 04:59:25 np0005634017 podman[251724]: 2026-02-28 09:59:25.038422369 +0000 UTC m=+0.702622494 container remove e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:59:25 np0005634017 systemd[1]: libpod-conmon-e819200dc3079f583f02bf18c91b75aa56f68c3aefb2a65a5024aa8011ba4bb0.scope: Deactivated successfully.
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.152 243456 INFO nova.virt.libvirt.driver [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Deleting instance files /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61_del#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.154 243456 INFO nova.virt.libvirt.driver [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Deletion of /var/lib/nova/instances/65ac3a64-52da-4513-a78c-8af927a40e61_del complete#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.225 243456 INFO nova.compute.manager [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.226 243456 DEBUG oslo.service.loopingcall [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.226 243456 DEBUG nova.compute.manager [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.227 243456 DEBUG nova.network.neutron [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.465 243456 DEBUG nova.network.neutron [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.481 243456 DEBUG nova.network.neutron [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.513 243456 INFO nova.compute.manager [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Took 0.29 seconds to deallocate network for instance.#033[00m
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.540609395 +0000 UTC m=+0.061971540 container create e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 04:59:25 np0005634017 systemd[1]: Started libpod-conmon-e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7.scope.
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.588 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.589 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.513984333 +0000 UTC m=+0.035346528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.622801165 +0000 UTC m=+0.144163290 container init e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.632476878 +0000 UTC m=+0.153839023 container start e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.636967645 +0000 UTC m=+0.158329800 container attach e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 04:59:25 np0005634017 competent_bhaskara[251864]: 167 167
Feb 28 04:59:25 np0005634017 systemd[1]: libpod-e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7.scope: Deactivated successfully.
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.642226373 +0000 UTC m=+0.163588478 container died e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bhaskara, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 04:59:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-cc827e37f14f70e96a7f2c8acb031cd3e7eb5f100ece3d009a1f5ae5d499afc1-merged.mount: Deactivated successfully.
Feb 28 04:59:25 np0005634017 podman[251848]: 2026-02-28 09:59:25.675906244 +0000 UTC m=+0.197268359 container remove e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 28 04:59:25 np0005634017 nova_compute[243452]: 2026-02-28 09:59:25.697 243456 DEBUG oslo_concurrency.processutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 264 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.9 MiB/s wr, 270 op/s
Feb 28 04:59:25 np0005634017 systemd[1]: libpod-conmon-e701dff86d07b7ab87075247dd7ab45f70606a78f55b9a2a5ff160a84ed72eb7.scope: Deactivated successfully.
Feb 28 04:59:25 np0005634017 podman[251890]: 2026-02-28 09:59:25.820151756 +0000 UTC m=+0.046964117 container create ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lalande, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 04:59:25 np0005634017 systemd[1]: Started libpod-conmon-ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7.scope.
Feb 28 04:59:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b8e014896f8a28c2ceb56d66f562f3a1902e1c74ac2ef010931dee9d15d450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b8e014896f8a28c2ceb56d66f562f3a1902e1c74ac2ef010931dee9d15d450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b8e014896f8a28c2ceb56d66f562f3a1902e1c74ac2ef010931dee9d15d450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b8e014896f8a28c2ceb56d66f562f3a1902e1c74ac2ef010931dee9d15d450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:25 np0005634017 podman[251890]: 2026-02-28 09:59:25.795293604 +0000 UTC m=+0.022106055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 04:59:25 np0005634017 podman[251890]: 2026-02-28 09:59:25.898223059 +0000 UTC m=+0.125035440 container init ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lalande, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:59:25 np0005634017 podman[251890]: 2026-02-28 09:59:25.905619858 +0000 UTC m=+0.132432219 container start ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:59:25 np0005634017 podman[251890]: 2026-02-28 09:59:25.909345023 +0000 UTC m=+0.136157404 container attach ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 04:59:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/88300875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.214 243456 DEBUG oslo_concurrency.processutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.225 243456 DEBUG nova.compute.provider_tree [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.244 243456 DEBUG nova.scheduler.client.report [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.265 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.297 243456 INFO nova.scheduler.client.report [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Deleted allocations for instance 65ac3a64-52da-4513-a78c-8af927a40e61#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.378 243456 DEBUG oslo_concurrency.lockutils [None req-8450d463-4655-4b83-85f7-8f232d239205 f285b05675cf49d8812463f1f0d35eb3 056bbee826f042bc922f04eebd9ccf55 - - default default] Lock "65ac3a64-52da-4513-a78c-8af927a40e61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:26 np0005634017 lvm[252002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 04:59:26 np0005634017 lvm[252002]: VG ceph_vg0 finished
Feb 28 04:59:26 np0005634017 lvm[252005]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 04:59:26 np0005634017 lvm[252005]: VG ceph_vg1 finished
Feb 28 04:59:26 np0005634017 lvm[252006]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 04:59:26 np0005634017 lvm[252006]: VG ceph_vg2 finished
Feb 28 04:59:26 np0005634017 infallible_lalande[251925]: {}
Feb 28 04:59:26 np0005634017 systemd[1]: libpod-ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7.scope: Deactivated successfully.
Feb 28 04:59:26 np0005634017 systemd[1]: libpod-ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7.scope: Consumed 1.045s CPU time.
Feb 28 04:59:26 np0005634017 podman[251890]: 2026-02-28 09:59:26.723492016 +0000 UTC m=+0.950304417 container died ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 04:59:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-38b8e014896f8a28c2ceb56d66f562f3a1902e1c74ac2ef010931dee9d15d450-merged.mount: Deactivated successfully.
Feb 28 04:59:26 np0005634017 podman[251890]: 2026-02-28 09:59:26.844565106 +0000 UTC m=+1.071377467 container remove ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 04:59:26 np0005634017 systemd[1]: libpod-conmon-ee0172752bef723815899188850c8468d1844700c94b40c15db2913876284df7.scope: Deactivated successfully.
Feb 28 04:59:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 04:59:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:59:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 04:59:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.953 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272751.9516804, 690358a2-fced-4bc3-8a76-d85e5bfdc727 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.954 243456 INFO nova.compute.manager [-] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] VM Stopped (Lifecycle Event)#033[00m
Feb 28 04:59:26 np0005634017 nova_compute[243452]: 2026-02-28 09:59:26.994 243456 DEBUG nova.compute.manager [None req-8e20b8c7-8404-49f1-9862-03dd6ac29455 - - - - - -] [instance: 690358a2-fced-4bc3-8a76-d85e5bfdc727] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 247 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Feb 28 04:59:27 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:59:27 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 04:59:28 np0005634017 nova_compute[243452]: 2026-02-28 09:59:28.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_09:59:29
Feb 28 04:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 04:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 04:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Feb 28 04:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 04:59:29 np0005634017 nova_compute[243452]: 2026-02-28 09:59:29.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 247 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:59:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.659 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.660 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.660 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.661 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.661 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.663 243456 INFO nova.compute.manager [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Terminating instance#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.665 243456 DEBUG nova.compute.manager [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 04:59:30 np0005634017 kernel: tap281248dd-cb (unregistering): left promiscuous mode
Feb 28 04:59:30 np0005634017 NetworkManager[49805]: <info>  [1772272770.7158] device (tap281248dd-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:30 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:30Z|00032|binding|INFO|Releasing lport 281248dd-cb66-437d-8420-a90b5b5a5b86 from this chassis (sb_readonly=0)
Feb 28 04:59:30 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:30Z|00033|binding|INFO|Setting lport 281248dd-cb66-437d-8420-a90b5b5a5b86 down in Southbound
Feb 28 04:59:30 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:30Z|00034|binding|INFO|Removing iface tap281248dd-cb ovn-installed in OVS
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:30.744 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:f1:ca 10.100.0.3'], port_security=['fa:16:3e:94:f1:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d11cb18c7a4f0bb17a76974faf1f67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c29529b-835f-4326-814a-8256c4b702f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f040037-a4e0-4f34-9b2b-1599fa7f19e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=281248dd-cb66-437d-8420-a90b5b5a5b86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 04:59:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:30.745 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 281248dd-cb66-437d-8420-a90b5b5a5b86 in datapath 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 unbound from our chassis
Feb 28 04:59:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:30.746 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 04:59:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:30.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c67f224f-2329-4e2f-adbf-50a32ec91f1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:30.748 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 namespace which is not needed anymore
Feb 28 04:59:30 np0005634017 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 28 04:59:30 np0005634017 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000002.scope: Consumed 12.529s CPU time.
Feb 28 04:59:30 np0005634017 systemd-machined[209480]: Machine qemu-3-instance-00000002 terminated.
Feb 28 04:59:30 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [NOTICE]   (251012) : haproxy version is 2.8.14-c23fe91
Feb 28 04:59:30 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [NOTICE]   (251012) : path to executable is /usr/sbin/haproxy
Feb 28 04:59:30 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [WARNING]  (251012) : Exiting Master process...
Feb 28 04:59:30 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [WARNING]  (251012) : Exiting Master process...
Feb 28 04:59:30 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [ALERT]    (251012) : Current worker (251014) exited with code 143 (Terminated)
Feb 28 04:59:30 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[251008]: [WARNING]  (251012) : All workers exited. Exiting... (0)
Feb 28 04:59:30 np0005634017 systemd[1]: libpod-07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651.scope: Deactivated successfully.
Feb 28 04:59:30 np0005634017 podman[252073]: 2026-02-28 09:59:30.902277194 +0000 UTC m=+0.057168781 container died 07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.907 243456 INFO nova.virt.libvirt.driver [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Instance destroyed successfully.
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.908 243456 DEBUG nova.objects.instance [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lazy-loading 'resources' on Instance uuid 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 04:59:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651-userdata-shm.mount: Deactivated successfully.
Feb 28 04:59:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9ba7d2d03f737867a40efd78da35d700803076aa386aabc23bc069a79b8dd0cc-merged.mount: Deactivated successfully.
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.941 243456 DEBUG nova.virt.libvirt.vif [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T09:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-170965901',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-170965901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-170965901',id=2,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHI7c4keuyySp3JCKwGcWG3wi5bkL/HjgXw+tKRvGg2QtCSRW0FIRsi5MSLftPso6IldV2AEMqsGSDx0hY586rAhWIJtbGmY0OMNEXz4pQjFyQmOBvdLhFcG1jQGXlgziQ==',key_name='tempest-keypair-380335806',keypairs=<?>,launch_index=0,launched_at=2026-02-28T09:59:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8d11cb18c7a4f0bb17a76974faf1f67',ramdisk_id='',reservation_id='r-0xni0k0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-390490490',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T09:59:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1fe17532ba42f08284df768e59361b',uuid=345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.944 243456 DEBUG nova.network.os_vif_util [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converting VIF {"id": "281248dd-cb66-437d-8420-a90b5b5a5b86", "address": "fa:16:3e:94:f1:ca", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap281248dd-cb", "ovs_interfaceid": "281248dd-cb66-437d-8420-a90b5b5a5b86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.945 243456 DEBUG nova.network.os_vif_util [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.946 243456 DEBUG os_vif [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 04:59:30 np0005634017 podman[252073]: 2026-02-28 09:59:30.947258391 +0000 UTC m=+0.102149978 container cleanup 07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.948 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281248dd-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.953 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 04:59:30 np0005634017 nova_compute[243452]: 2026-02-28 09:59:30.955 243456 INFO os_vif [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:f1:ca,bridge_name='br-int',has_traffic_filtering=True,id=281248dd-cb66-437d-8420-a90b5b5a5b86,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap281248dd-cb')
Feb 28 04:59:30 np0005634017 systemd[1]: libpod-conmon-07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651.scope: Deactivated successfully.
Feb 28 04:59:31 np0005634017 podman[252112]: 2026-02-28 09:59:31.02036833 +0000 UTC m=+0.049976038 container remove 07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[70bf5789-143b-4353-b2ee-9b522fa891d2]: (4, ('Sat Feb 28 09:59:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 (07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651)\n07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651\nSat Feb 28 09:59:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 (07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651)\n07185706513cb192493e4fa9bd5b8b7891eca185e25068386412d64169526651\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.026 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c08227-b889-4b66-b0e7-8eaf4dcec668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.027 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff5ef4b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.027 243456 DEBUG nova.compute.manager [req-4c14d988-5fe4-4288-8794-07a3515385a3 req-b437d337-00ec-45b0-8b81-754da6de2b75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-vif-unplugged-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.028 243456 DEBUG oslo_concurrency.lockutils [req-4c14d988-5fe4-4288-8794-07a3515385a3 req-b437d337-00ec-45b0-8b81-754da6de2b75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.029 243456 DEBUG oslo_concurrency.lockutils [req-4c14d988-5fe4-4288-8794-07a3515385a3 req-b437d337-00ec-45b0-8b81-754da6de2b75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.029 243456 DEBUG oslo_concurrency.lockutils [req-4c14d988-5fe4-4288-8794-07a3515385a3 req-b437d337-00ec-45b0-8b81-754da6de2b75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.029 243456 DEBUG nova.compute.manager [req-4c14d988-5fe4-4288-8794-07a3515385a3 req-b437d337-00ec-45b0-8b81-754da6de2b75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] No waiting events found dispatching network-vif-unplugged-281248dd-cb66-437d-8420-a90b5b5a5b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.030 243456 DEBUG nova.compute.manager [req-4c14d988-5fe4-4288-8794-07a3515385a3 req-b437d337-00ec-45b0-8b81-754da6de2b75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-vif-unplugged-281248dd-cb66-437d-8420-a90b5b5a5b86 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.030 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:31 np0005634017 kernel: tap0ff5ef4b-e0: left promiscuous mode
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.039 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa64716-4128-4281-bbfc-77e48f2cb9c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31e0bbbe-b3a4-45b4-9c68-05c69c725cd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.060 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[544ee642-2fd8-4b32-b427-d5c69955e078]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.074 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[67b46806-4bfe-4d33-84db-3e772c3003c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426260, 'reachable_time': 19679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252145, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 systemd[1]: run-netns-ovnmeta\x2d0ff5ef4b\x2de6cb\x2d48f9\x2d904e\x2dc0a38484b0f4.mount: Deactivated successfully.
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.093 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 04:59:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:31.093 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb9cc96-c5fd-460f-a04d-d2f08eece1b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.269 243456 INFO nova.virt.libvirt.driver [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Deleting instance files /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_del
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.270 243456 INFO nova.virt.libvirt.driver [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Deletion of /var/lib/nova/instances/345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2_del complete
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.323 243456 INFO nova.compute.manager [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.324 243456 DEBUG oslo.service.loopingcall [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.324 243456 DEBUG nova.compute.manager [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 04:59:31 np0005634017 nova_compute[243452]: 2026-02-28 09:59:31.325 243456 DEBUG nova.network.neutron [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 04:59:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 233 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Feb 28 04:59:32 np0005634017 nova_compute[243452]: 2026-02-28 09:59:32.505 243456 DEBUG nova.network.neutron [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 04:59:32 np0005634017 nova_compute[243452]: 2026-02-28 09:59:32.519 243456 INFO nova.compute.manager [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Took 1.19 seconds to deallocate network for instance.
Feb 28 04:59:32 np0005634017 nova_compute[243452]: 2026-02-28 09:59:32.561 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:32 np0005634017 nova_compute[243452]: 2026-02-28 09:59:32.561 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:32 np0005634017 nova_compute[243452]: 2026-02-28 09:59:32.614 243456 DEBUG oslo_concurrency.processutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728599001' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.127 243456 DEBUG oslo_concurrency.processutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.134 243456 DEBUG nova.compute.provider_tree [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.153 243456 DEBUG nova.scheduler.client.report [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.173 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.196 243456 DEBUG nova.compute.manager [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.196 243456 DEBUG oslo_concurrency.lockutils [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.196 243456 DEBUG oslo_concurrency.lockutils [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.197 243456 DEBUG oslo_concurrency.lockutils [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.197 243456 DEBUG nova.compute.manager [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] No waiting events found dispatching network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.197 243456 WARNING nova.compute.manager [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received unexpected event network-vif-plugged-281248dd-cb66-437d-8420-a90b5b5a5b86 for instance with vm_state deleted and task_state None.
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.197 243456 DEBUG nova.compute.manager [req-067a3b48-d211-4ad8-8120-eaf1e83c831c req-53e2df9e-68fa-480e-bdf4-0fda93e0f704 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Received event network-vif-deleted-281248dd-cb66-437d-8420-a90b5b5a5b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.200 243456 INFO nova.scheduler.client.report [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Deleted allocations for instance 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.268 243456 DEBUG oslo_concurrency.lockutils [None req-d3130f15-3fa1-4017-afdb-fee0a253023b 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:33 np0005634017 nova_compute[243452]: 2026-02-28 09:59:33.369 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 209 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 176 op/s
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.197 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.198 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.214 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.285 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.285 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.292 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.293 243456 INFO nova.compute.claims [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Claim successful on node compute-0.ctlplane.example.com
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.411 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877859442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:34 np0005634017 nova_compute[243452]: 2026-02-28 09:59:34.997 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.004 243456 DEBUG nova.compute.provider_tree [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.020 243456 DEBUG nova.scheduler.client.report [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.043 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.045 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.099 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.100 243456 DEBUG nova.network.neutron [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.121 243456 INFO nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.160 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.241 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.243 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.243 243456 INFO nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Creating image(s)
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.303 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.324 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.344 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.347 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.405 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.406 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.406 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.407 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.425 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.428 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cbfb5d00-33da-4fdc-a9b7-a16865020102_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.456 243456 DEBUG nova.policy [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1fe17532ba42f08284df768e59361b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8d11cb18c7a4f0bb17a76974faf1f67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 04:59:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 169 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 176 op/s
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.726 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cbfb5d00-33da-4fdc-a9b7-a16865020102_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.800 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] resizing rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.878 243456 DEBUG nova.objects.instance [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lazy-loading 'migration_context' on Instance uuid cbfb5d00-33da-4fdc-a9b7-a16865020102 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.932 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.961 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.966 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.967 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.967 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.996 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:35 np0005634017 nova_compute[243452]: 2026-02-28 09:59:35.997 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.033 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.034 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.060 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.065 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.588 243456 DEBUG nova.network.neutron [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Successfully created port: 72d99337-c438-4e0b-b5a9-54214143c1d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.893 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.828s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.981 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.982 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Ensure instance console log exists: /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.982 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.983 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:36 np0005634017 nova_compute[243452]: 2026-02-28 09:59:36.983 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.296 243456 DEBUG nova.network.neutron [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Successfully updated port: 72d99337-c438-4e0b-b5a9-54214143c1d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.316 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.317 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquired lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.317 243456 DEBUG nova.network.neutron [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.502 243456 DEBUG nova.compute.manager [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-changed-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.502 243456 DEBUG nova.compute.manager [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Refreshing instance network info cache due to event network-changed-72d99337-c438-4e0b-b5a9-54214143c1d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.503 243456 DEBUG oslo_concurrency.lockutils [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:59:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.612 243456 DEBUG nova.network.neutron [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 04:59:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Feb 28 04:59:37 np0005634017 nova_compute[243452]: 2026-02-28 09:59:37.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:37.923 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 04:59:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:37.924 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.370 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.491 243456 DEBUG nova.network.neutron [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Updating instance_info_cache with network_info: [{"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.516 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Releasing lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.516 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Instance network_info: |[{"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.517 243456 DEBUG oslo_concurrency.lockutils [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.518 243456 DEBUG nova.network.neutron [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Refreshing network info cache for port 72d99337-c438-4e0b-b5a9-54214143c1d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.524 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Start _get_guest_xml network_info=[{"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.532 243456 WARNING nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.545 243456 DEBUG nova.virt.libvirt.host [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.545 243456 DEBUG nova.virt.libvirt.host [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.550 243456 DEBUG nova.virt.libvirt.host [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.551 243456 DEBUG nova.virt.libvirt.host [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.551 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.551 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='877180600',id=17,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-503788857',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.552 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.552 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.552 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.553 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.553 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.553 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.553 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.554 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.554 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.554 243456 DEBUG nova.virt.hardware [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 04:59:38 np0005634017 nova_compute[243452]: 2026-02-28 09:59:38.557 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1373997486' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.122 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.123 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.364 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272764.3620167, 65ac3a64-52da-4513-a78c-8af927a40e61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.364 243456 INFO nova.compute.manager [-] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] VM Stopped (Lifecycle Event)#033[00m
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.381 243456 DEBUG nova.compute.manager [None req-c4c9acae-5e25-442d-a852-90618792ea5b - - - - - -] [instance: 65ac3a64-52da-4513-a78c-8af927a40e61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434500799' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.621 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.652 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:39 np0005634017 nova_compute[243452]: 2026-02-28 09:59:39.658 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Feb 28 04:59:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986252134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.207 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.209 243456 DEBUG nova.virt.libvirt.vif [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T09:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-565971534',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-565971534',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(17),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-565971534',id=5,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=17,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHI7c4keuyySp3JCKwGcWG3wi5bkL/HjgXw+tKRvGg2QtCSRW0FIRsi5MSLftPso6IldV2AEMqsGSDx0hY586rAhWIJtbGmY0OMNEXz4pQjFyQmOBvdLhFcG1jQGXlgziQ==',key_name='tempest-keypair-380335806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d11cb18c7a4f0bb17a76974faf1f67',ramdisk_id='',reservation_id='r-y6eiyu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-390490490',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T09:59:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1fe17532ba42f08284df768e59361b',uuid=cbfb5d00-33da-4fdc-a9b7-a16865020102,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.209 243456 DEBUG nova.network.os_vif_util [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converting VIF {"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.210 243456 DEBUG nova.network.os_vif_util [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.211 243456 DEBUG nova.objects.instance [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lazy-loading 'pci_devices' on Instance uuid cbfb5d00-33da-4fdc-a9b7-a16865020102 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.265 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] End _get_guest_xml xml=<domain type="kvm">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <uuid>cbfb5d00-33da-4fdc-a9b7-a16865020102</uuid>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <name>instance-00000005</name>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-565971534</nova:name>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 09:59:38</nova:creationTime>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-503788857">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:ephemeral>1</nova:ephemeral>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:user uuid="1e1fe17532ba42f08284df768e59361b">tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member</nova:user>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:project uuid="d8d11cb18c7a4f0bb17a76974faf1f67">tempest-ServersWithSpecificFlavorTestJSON-390490490</nova:project>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <nova:port uuid="72d99337-c438-4e0b-b5a9-54214143c1d8">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <system>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <entry name="serial">cbfb5d00-33da-4fdc-a9b7-a16865020102</entry>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <entry name="uuid">cbfb5d00-33da-4fdc-a9b7-a16865020102</entry>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </system>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <os>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </clock>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cbfb5d00-33da-4fdc-a9b7-a16865020102_disk">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.eph0">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <target dev="vdb" bus="virtio"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.config">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:16:df:fc"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <target dev="tap72d99337-c4"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </interface>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/console.log" append="off"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </serial>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <video>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 04:59:40 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 04:59:40 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:59:40 np0005634017 nova_compute[243452]: </domain>
Feb 28 04:59:40 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.266 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Preparing to wait for external event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.267 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.267 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.268 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.268 243456 DEBUG nova.virt.libvirt.vif [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T09:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-565971534',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-565971534',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(17),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-565971534',id=5,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=17,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHI7c4keuyySp3JCKwGcWG3wi5bkL/HjgXw+tKRvGg2QtCSRW0FIRsi5MSLftPso6IldV2AEMqsGSDx0hY586rAhWIJtbGmY0OMNEXz4pQjFyQmOBvdLhFcG1jQGXlgziQ==',key_name='tempest-keypair-380335806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d11cb18c7a4f0bb17a76974faf1f67',ramdisk_id='',reservation_id='r-y6eiyu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-390490490',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T09:59:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1fe17532ba42f08284df768e59361b',uuid=cbfb5d00-33da-4fdc-a9b7-a16865020102,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.269 243456 DEBUG nova.network.os_vif_util [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converting VIF {"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.270 243456 DEBUG nova.network.os_vif_util [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.270 243456 DEBUG os_vif [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.272 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.272 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.277 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72d99337-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.278 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72d99337-c4, col_values=(('external_ids', {'iface-id': '72d99337-c438-4e0b-b5a9-54214143c1d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:df:fc', 'vm-uuid': 'cbfb5d00-33da-4fdc-a9b7-a16865020102'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:40 np0005634017 NetworkManager[49805]: <info>  [1772272780.2803] manager: (tap72d99337-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.281 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.286 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.287 243456 INFO os_vif [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4')#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.292 243456 DEBUG nova.network.neutron [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Updated VIF entry in instance network info cache for port 72d99337-c438-4e0b-b5a9-54214143c1d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.292 243456 DEBUG nova.network.neutron [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Updating instance_info_cache with network_info: [{"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.309 243456 DEBUG oslo_concurrency.lockutils [req-76008f77-83d6-4e81-9240-087fe88aec56 req-7ad9cd12-acf9-463f-b62d-391f1a1e67a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.338 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.338 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.339 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.339 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] No VIF found with MAC fa:16:3e:16:df:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.339 243456 INFO nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Using config drive#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.362 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003289519059919648 of space, bias 1.0, pg target 0.09868557179758944 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024910049217399493 of space, bias 1.0, pg target 0.7473014765219848 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1571925265672439e-06 of space, bias 4.0, pg target 0.0013886310318806927 quantized to 16 (current 16)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 04:59:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.719 243456 INFO nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Creating config drive at /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/disk.config#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.727 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0sfixm1u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.851 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0sfixm1u" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.879 243456 DEBUG nova.storage.rbd_utils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] rbd image cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.882 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/disk.config cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.994 243456 DEBUG oslo_concurrency.processutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/disk.config cbfb5d00-33da-4fdc-a9b7-a16865020102_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:40 np0005634017 nova_compute[243452]: 2026-02-28 09:59:40.995 243456 INFO nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Deleting local config drive /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102/disk.config because it was imported into RBD.#033[00m
Feb 28 04:59:41 np0005634017 kernel: tap72d99337-c4: entered promiscuous mode
Feb 28 04:59:41 np0005634017 NetworkManager[49805]: <info>  [1772272781.0467] manager: (tap72d99337-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 28 04:59:41 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:41Z|00035|binding|INFO|Claiming lport 72d99337-c438-4e0b-b5a9-54214143c1d8 for this chassis.
Feb 28 04:59:41 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:41Z|00036|binding|INFO|72d99337-c438-4e0b-b5a9-54214143c1d8: Claiming fa:16:3e:16:df:fc 10.100.0.6
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.055 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:df:fc 10.100.0.6'], port_security=['fa:16:3e:16:df:fc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cbfb5d00-33da-4fdc-a9b7-a16865020102', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d11cb18c7a4f0bb17a76974faf1f67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c29529b-835f-4326-814a-8256c4b702f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f040037-a4e0-4f34-9b2b-1599fa7f19e6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=72d99337-c438-4e0b-b5a9-54214143c1d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.056 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 72d99337-c438-4e0b-b5a9-54214143c1d8 in datapath 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 bound to our chassis#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.058 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4#033[00m
Feb 28 04:59:41 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:41Z|00037|binding|INFO|Setting lport 72d99337-c438-4e0b-b5a9-54214143c1d8 up in Southbound
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:41 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:41Z|00038|binding|INFO|Setting lport 72d99337-c438-4e0b-b5a9-54214143c1d8 ovn-installed in OVS
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[736293bf-94b5-4d01-8971-ad2fe78a6e1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.070 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ff5ef4b-e1 in ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.072 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ff5ef4b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.072 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[56d11f3a-530c-4f52-b5c1-b3f30b106cd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 systemd-machined[209480]: New machine qemu-5-instance-00000005.
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.073 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[025ef529-e54e-4365-a9de-f4edd569e08b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.085 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[798a8191-aaac-43aa-8ab9-913033b2c656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.094 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[442ad03f-45e9-41c0-81ff-f3b02152508b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 systemd-udevd[252653]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.120 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac19fe1-4c2e-4a9a-99d3-dd13714db38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8e0e4e-1898-4cf7-903f-eab8fca0ad0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 NetworkManager[49805]: <info>  [1772272781.1253] manager: (tap0ff5ef4b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Feb 28 04:59:41 np0005634017 systemd-udevd[252657]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 04:59:41 np0005634017 NetworkManager[49805]: <info>  [1772272781.1289] device (tap72d99337-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 04:59:41 np0005634017 NetworkManager[49805]: <info>  [1772272781.1297] device (tap72d99337-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.153 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[56a01c34-0eea-42f8-a575-d018ea376ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.156 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc4dc37-1c42-4d76-9002-d0f83f778810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 NetworkManager[49805]: <info>  [1772272781.1713] device (tap0ff5ef4b-e0): carrier: link connected
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.174 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[41ab1dfc-c266-4306-9560-7c12356f5b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e2eecc21-9a44-4a1b-81d3-5cf57880580d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ff5ef4b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:f6:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428845, 'reachable_time': 29411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252680, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.199 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a33e08d-c01f-47d3-bebd-dbacfe42f423]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:f61d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428845, 'tstamp': 428845}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252681, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.216 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[da6f9788-dcbf-4d1a-852f-ee0adf39d68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ff5ef4b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:f6:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428845, 'reachable_time': 29411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252683, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.245 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c20afa-611a-47c8-805d-27f958c16818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.302 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2da318f6-159a-4ea4-bdd9-06ef4741845f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff5ef4b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.305 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ff5ef4b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:41 np0005634017 kernel: tap0ff5ef4b-e0: entered promiscuous mode
Feb 28 04:59:41 np0005634017 NetworkManager[49805]: <info>  [1772272781.3075] manager: (tap0ff5ef4b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.311 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ff5ef4b-e0, col_values=(('external_ids', {'iface-id': 'cac40e1f-3378-46e9-a01a-1794bf29ef6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:41 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:41Z|00039|binding|INFO|Releasing lport cac40e1f-3378-46e9-a01a-1794bf29ef6c from this chassis (sb_readonly=0)
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.322 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.323 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e8f047-3d81-4924-a179-9b1f211c574b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.324 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.pid.haproxy
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 04:59:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:41.325 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'env', 'PROCESS_TAG=haproxy-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.449 243456 DEBUG nova.compute.manager [req-818bcefb-bd44-4615-be0b-1b51253a7151 req-d4b9ca6a-58f7-4921-9715-6771a4203a4e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.451 243456 DEBUG oslo_concurrency.lockutils [req-818bcefb-bd44-4615-be0b-1b51253a7151 req-d4b9ca6a-58f7-4921-9715-6771a4203a4e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.452 243456 DEBUG oslo_concurrency.lockutils [req-818bcefb-bd44-4615-be0b-1b51253a7151 req-d4b9ca6a-58f7-4921-9715-6771a4203a4e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.452 243456 DEBUG oslo_concurrency.lockutils [req-818bcefb-bd44-4615-be0b-1b51253a7151 req-d4b9ca6a-58f7-4921-9715-6771a4203a4e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.453 243456 DEBUG nova.compute.manager [req-818bcefb-bd44-4615-be0b-1b51253a7151 req-d4b9ca6a-58f7-4921-9715-6771a4203a4e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Processing event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.603 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272781.6030931, cbfb5d00-33da-4fdc-a9b7-a16865020102 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.604 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] VM Started (Lifecycle Event)#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.607 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.611 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.615 243456 INFO nova.virt.libvirt.driver [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Instance spawned successfully.#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.616 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.620 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.623 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.640 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.641 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272781.6042473, cbfb5d00-33da-4fdc-a9b7-a16865020102 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.641 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] VM Paused (Lifecycle Event)#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.651 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.652 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.653 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.653 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.654 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.655 243456 DEBUG nova.virt.libvirt.driver [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.662 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.665 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272781.6103704, cbfb5d00-33da-4fdc-a9b7-a16865020102 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.665 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] VM Resumed (Lifecycle Event)#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.690 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.694 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 04:59:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.712 243456 INFO nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Took 6.47 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.713 243456 DEBUG nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.717 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 04:59:41 np0005634017 podman[252774]: 2026-02-28 09:59:41.730777529 +0000 UTC m=+0.058413446 container create 58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 04:59:41 np0005634017 systemd[1]: Started libpod-conmon-58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a.scope.
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.784 243456 INFO nova.compute.manager [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Took 7.52 seconds to build instance.#033[00m
Feb 28 04:59:41 np0005634017 podman[252774]: 2026-02-28 09:59:41.694493027 +0000 UTC m=+0.022129004 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 04:59:41 np0005634017 nova_compute[243452]: 2026-02-28 09:59:41.802 243456 DEBUG oslo_concurrency.lockutils [None req-1cf5fe20-0a86-4213-8434-a66f62add0af 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 04:59:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f88c436b33e3fa5bca86ad46b1acb46dc201f1fc502830b3b062edebefa89bbd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 04:59:41 np0005634017 podman[252774]: 2026-02-28 09:59:41.829707345 +0000 UTC m=+0.157343282 container init 58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 04:59:41 np0005634017 podman[252774]: 2026-02-28 09:59:41.83767673 +0000 UTC m=+0.165312657 container start 58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 04:59:41 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[252789]: [NOTICE]   (252793) : New worker (252795) forked
Feb 28 04:59:41 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[252789]: [NOTICE]   (252793) : Loading success.
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.574882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272782574926, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 633, "num_deletes": 255, "total_data_size": 650091, "memory_usage": 663240, "flush_reason": "Manual Compaction"}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272782580564, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 643842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18956, "largest_seqno": 19588, "table_properties": {"data_size": 640524, "index_size": 1162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7506, "raw_average_key_size": 18, "raw_value_size": 633780, "raw_average_value_size": 1534, "num_data_blocks": 52, "num_entries": 413, "num_filter_entries": 413, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272743, "oldest_key_time": 1772272743, "file_creation_time": 1772272782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 5720 microseconds, and 1928 cpu microseconds.
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.580604) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 643842 bytes OK
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.580625) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.582393) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.582409) EVENT_LOG_v1 {"time_micros": 1772272782582404, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.582427) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 646653, prev total WAL file size 646653, number of live WAL files 2.
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.582791) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(628KB)], [44(6601KB)]
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272782582882, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7403586, "oldest_snapshot_seqno": -1}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4194 keys, 7276573 bytes, temperature: kUnknown
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272782622711, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7276573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7247904, "index_size": 17103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103898, "raw_average_key_size": 24, "raw_value_size": 7171305, "raw_average_value_size": 1709, "num_data_blocks": 717, "num_entries": 4194, "num_filter_entries": 4194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772272782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.622999) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7276573 bytes
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.626389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.4 rd, 182.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.4 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(22.8) write-amplify(11.3) OK, records in: 4715, records dropped: 521 output_compression: NoCompression
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.626422) EVENT_LOG_v1 {"time_micros": 1772272782626409, "job": 22, "event": "compaction_finished", "compaction_time_micros": 39926, "compaction_time_cpu_micros": 15380, "output_level": 6, "num_output_files": 1, "total_output_size": 7276573, "num_input_records": 4715, "num_output_records": 4194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272782626642, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272782627285, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.582674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.627384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.627391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.627393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.627395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:42 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-09:59:42.627396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 04:59:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:42.927 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.594 243456 DEBUG nova.compute.manager [req-19bee367-4c25-4dc3-b6f9-85fadb29dc3d req-51711a57-2b18-4f35-a261-c10cc6b48d2d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.595 243456 DEBUG oslo_concurrency.lockutils [req-19bee367-4c25-4dc3-b6f9-85fadb29dc3d req-51711a57-2b18-4f35-a261-c10cc6b48d2d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.595 243456 DEBUG oslo_concurrency.lockutils [req-19bee367-4c25-4dc3-b6f9-85fadb29dc3d req-51711a57-2b18-4f35-a261-c10cc6b48d2d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.596 243456 DEBUG oslo_concurrency.lockutils [req-19bee367-4c25-4dc3-b6f9-85fadb29dc3d req-51711a57-2b18-4f35-a261-c10cc6b48d2d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.596 243456 DEBUG nova.compute.manager [req-19bee367-4c25-4dc3-b6f9-85fadb29dc3d req-51711a57-2b18-4f35-a261-c10cc6b48d2d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] No waiting events found dispatching network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 04:59:43 np0005634017 nova_compute[243452]: 2026-02-28 09:59:43.596 243456 WARNING nova.compute.manager [req-19bee367-4c25-4dc3-b6f9-85fadb29dc3d req-51711a57-2b18-4f35-a261-c10cc6b48d2d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received unexpected event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 for instance with vm_state active and task_state None.#033[00m
Feb 28 04:59:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 439 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.081 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "271317bd-57ab-43e2-a767-935ba54d7a8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.081 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "271317bd-57ab-43e2-a767-935ba54d7a8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.094 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.174 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.177 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.188 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.188 243456 INFO nova.compute.claims [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.309 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649369914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.825 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.830 243456 DEBUG nova.compute.provider_tree [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.844 243456 DEBUG nova.scheduler.client.report [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.861 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.862 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.904 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.904 243456 DEBUG nova.network.neutron [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.921 243456 INFO nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 04:59:44 np0005634017 nova_compute[243452]: 2026-02-28 09:59:44.939 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.027 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.029 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.029 243456 INFO nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Creating image(s)#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.057 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.084 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.108 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.111 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.165 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.167 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.168 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.169 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.199 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.202 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 271317bd-57ab-43e2-a767-935ba54d7a8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.281 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.374 243456 DEBUG nova.network.neutron [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.375 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 04:59:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 04:59:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2252229060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 04:59:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 04:59:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2252229060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.534 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 271317bd-57ab-43e2-a767-935ba54d7a8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.604 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] resizing rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.688 243456 DEBUG nova.objects.instance [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lazy-loading 'migration_context' on Instance uuid 271317bd-57ab-43e2-a767-935ba54d7a8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 213 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.724 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.724 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Ensure instance console log exists: /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.725 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.725 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.726 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.727 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.731 243456 WARNING nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.736 243456 DEBUG nova.virt.libvirt.host [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.736 243456 DEBUG nova.virt.libvirt.host [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.742 243456 DEBUG nova.virt.libvirt.host [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.742 243456 DEBUG nova.virt.libvirt.host [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.743 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.743 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.744 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.744 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.744 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.744 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.745 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.745 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.745 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.745 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.746 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.746 243456 DEBUG nova.virt.hardware [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.748 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.906 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272770.9054184, 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.908 243456 INFO nova.compute.manager [-] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] VM Stopped (Lifecycle Event)#033[00m
Feb 28 04:59:45 np0005634017 nova_compute[243452]: 2026-02-28 09:59:45.948 243456 DEBUG nova.compute.manager [None req-66cfdf8a-be82-43bd-b522-d6781421046d - - - - - -] [instance: 345ddbf5-f1fc-4d1f-92d1-ec8dc53fecd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:46 np0005634017 podman[253013]: 2026-02-28 09:59:46.150282177 +0000 UTC m=+0.067515352 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.215 243456 DEBUG nova.compute.manager [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-changed-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.215 243456 DEBUG nova.compute.manager [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Refreshing instance network info cache due to event network-changed-72d99337-c438-4e0b-b5a9-54214143c1d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 04:59:46 np0005634017 podman[253012]: 2026-02-28 09:59:46.217546812 +0000 UTC m=+0.145027406 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.217 243456 DEBUG oslo_concurrency.lockutils [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.217 243456 DEBUG oslo_concurrency.lockutils [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.218 243456 DEBUG nova.network.neutron [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Refreshing network info cache for port 72d99337-c438-4e0b-b5a9-54214143c1d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 04:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892333413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.296 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.332 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.337 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 04:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022367496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.876 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.880 243456 DEBUG nova.objects.instance [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 271317bd-57ab-43e2-a767-935ba54d7a8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.898 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] End _get_guest_xml xml=<domain type="kvm">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <uuid>271317bd-57ab-43e2-a767-935ba54d7a8f</uuid>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <name>instance-00000006</name>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiagnosticsTest-server-1971722722</nova:name>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 09:59:45</nova:creationTime>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:user uuid="8a74a42fa9f941aaade83d63aded149a">tempest-ServerDiagnosticsTest-797393049-project-member</nova:user>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <nova:project uuid="62d5e04523a64c6da1016d4fc1f8c1ec">tempest-ServerDiagnosticsTest-797393049</nova:project>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <system>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <entry name="serial">271317bd-57ab-43e2-a767-935ba54d7a8f</entry>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <entry name="uuid">271317bd-57ab-43e2-a767-935ba54d7a8f</entry>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </system>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <os>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </os>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <features>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </features>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </clock>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  <devices>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/271317bd-57ab-43e2-a767-935ba54d7a8f_disk">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/271317bd-57ab-43e2-a767-935ba54d7a8f_disk.config">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      </source>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      </auth>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </disk>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/console.log" append="off"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </serial>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <video>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </video>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </rng>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 04:59:46 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 04:59:46 np0005634017 nova_compute[243452]:  </devices>
Feb 28 04:59:46 np0005634017 nova_compute[243452]: </domain>
Feb 28 04:59:46 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.963 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.964 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.965 243456 INFO nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Using config drive#033[00m
Feb 28 04:59:46 np0005634017 nova_compute[243452]: 2026-02-28 09:59:46.989 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.152 243456 INFO nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Creating config drive at /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/disk.config#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.173 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkwtftheo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.302 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkwtftheo" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.326 243456 DEBUG nova.storage.rbd_utils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] rbd image 271317bd-57ab-43e2-a767-935ba54d7a8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.331 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/disk.config 271317bd-57ab-43e2-a767-935ba54d7a8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.504 243456 DEBUG oslo_concurrency.processutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/disk.config 271317bd-57ab-43e2-a767-935ba54d7a8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.506 243456 INFO nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Deleting local config drive /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f/disk.config because it was imported into RBD.#033[00m
Feb 28 04:59:47 np0005634017 systemd-machined[209480]: New machine qemu-6-instance-00000006.
Feb 28 04:59:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:47 np0005634017 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Feb 28 04:59:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 225 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 131 op/s
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.855 243456 DEBUG nova.network.neutron [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Updated VIF entry in instance network info cache for port 72d99337-c438-4e0b-b5a9-54214143c1d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.856 243456 DEBUG nova.network.neutron [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Updating instance_info_cache with network_info: [{"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.874 243456 DEBUG oslo_concurrency.lockutils [req-c84f489f-73bb-4f29-ab04-83d419b9a1fb req-65599840-95c3-49c9-8abe-2ec8f5f88fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-cbfb5d00-33da-4fdc-a9b7-a16865020102" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.952 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272787.951626, 271317bd-57ab-43e2-a767-935ba54d7a8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.954 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] VM Resumed (Lifecycle Event)#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.957 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.957 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.961 243456 INFO nova.virt.libvirt.driver [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Instance spawned successfully.#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.961 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.977 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.985 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.990 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.991 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.992 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.993 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.993 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:47 np0005634017 nova_compute[243452]: 2026-02-28 09:59:47.994 243456 DEBUG nova.virt.libvirt.driver [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.004 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.005 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272787.952845, 271317bd-57ab-43e2-a767-935ba54d7a8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.005 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] VM Started (Lifecycle Event)#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.028 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.032 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.051 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.057 243456 INFO nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Took 3.03 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.058 243456 DEBUG nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.119 243456 INFO nova.compute.manager [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Took 3.98 seconds to build instance.#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.134 243456 DEBUG oslo_concurrency.lockutils [None req-908f0215-a197-4a4f-89c9-63a2de498a3a 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "271317bd-57ab-43e2-a767-935ba54d7a8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 04:59:48 np0005634017 nova_compute[243452]: 2026-02-28 09:59:48.375 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.556 243456 DEBUG nova.compute.manager [None req-8bf2ad14-74a1-473c-b496-35323081d85a 7a0b86e74e7642048cf10d7d38941a52 373e1d203c794b7289ebe110bf090a75 - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.561 243456 INFO nova.compute.manager [None req-8bf2ad14-74a1-473c-b496-35323081d85a 7a0b86e74e7642048cf10d7d38941a52 373e1d203c794b7289ebe110bf090a75 - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Retrieving diagnostics#033[00m
Feb 28 04:59:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 225 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 895 KiB/s wr, 128 op/s
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.752 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "271317bd-57ab-43e2-a767-935ba54d7a8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.753 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "271317bd-57ab-43e2-a767-935ba54d7a8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.753 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "271317bd-57ab-43e2-a767-935ba54d7a8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.753 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "271317bd-57ab-43e2-a767-935ba54d7a8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.754 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "271317bd-57ab-43e2-a767-935ba54d7a8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.755 243456 INFO nova.compute.manager [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Terminating instance
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.755 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "refresh_cache-271317bd-57ab-43e2-a767-935ba54d7a8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.756 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquired lock "refresh_cache-271317bd-57ab-43e2-a767-935ba54d7a8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.756 243456 DEBUG nova.network.neutron [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 04:59:49 np0005634017 nova_compute[243452]: 2026-02-28 09:59:49.949 243456 DEBUG nova.network.neutron [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.360 243456 DEBUG nova.network.neutron [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.378 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Releasing lock "refresh_cache-271317bd-57ab-43e2-a767-935ba54d7a8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.379 243456 DEBUG nova.compute.manager [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 04:59:50 np0005634017 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 28 04:59:50 np0005634017 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 2.870s CPU time.
Feb 28 04:59:50 np0005634017 systemd-machined[209480]: Machine qemu-6-instance-00000006 terminated.
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.602 243456 INFO nova.virt.libvirt.driver [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Instance destroyed successfully.
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.604 243456 DEBUG nova.objects.instance [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lazy-loading 'resources' on Instance uuid 271317bd-57ab-43e2-a767-935ba54d7a8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.963 243456 INFO nova.virt.libvirt.driver [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Deleting instance files /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f_del
Feb 28 04:59:50 np0005634017 nova_compute[243452]: 2026-02-28 09:59:50.965 243456 INFO nova.virt.libvirt.driver [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Deletion of /var/lib/nova/instances/271317bd-57ab-43e2-a767-935ba54d7a8f_del complete
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.044 243456 INFO nova.compute.manager [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.045 243456 DEBUG oslo.service.loopingcall [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.045 243456 DEBUG nova.compute.manager [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.046 243456 DEBUG nova.network.neutron [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.300 243456 DEBUG nova.network.neutron [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.315 243456 DEBUG nova.network.neutron [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.326 243456 INFO nova.compute.manager [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Took 0.28 seconds to deallocate network for instance.
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.375 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.376 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:51 np0005634017 nova_compute[243452]: 2026-02-28 09:59:51.481 243456 DEBUG oslo_concurrency.processutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 04:59:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.0 MiB/s wr, 187 op/s
Feb 28 04:59:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 04:59:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292624691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 04:59:52 np0005634017 nova_compute[243452]: 2026-02-28 09:59:52.018 243456 DEBUG oslo_concurrency.processutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 04:59:52 np0005634017 nova_compute[243452]: 2026-02-28 09:59:52.027 243456 DEBUG nova.compute.provider_tree [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 04:59:52 np0005634017 nova_compute[243452]: 2026-02-28 09:59:52.050 243456 DEBUG nova.scheduler.client.report [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 04:59:52 np0005634017 nova_compute[243452]: 2026-02-28 09:59:52.090 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:52 np0005634017 nova_compute[243452]: 2026-02-28 09:59:52.132 243456 INFO nova.scheduler.client.report [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Deleted allocations for instance 271317bd-57ab-43e2-a767-935ba54d7a8f
Feb 28 04:59:52 np0005634017 nova_compute[243452]: 2026-02-28 09:59:52.239 243456 DEBUG oslo_concurrency.lockutils [None req-78d6a30f-51b8-4a0b-8e55-0d3753dc84fd 8a74a42fa9f941aaade83d63aded149a 62d5e04523a64c6da1016d4fc1f8c1ec - - default default] Lock "271317bd-57ab-43e2-a767-935ba54d7a8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:53 np0005634017 nova_compute[243452]: 2026-02-28 09:59:53.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 241 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.5 MiB/s wr, 208 op/s
Feb 28 04:59:54 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:54Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:df:fc 10.100.0.6
Feb 28 04:59:54 np0005634017 ovn_controller[146846]: 2026-02-28T09:59:54Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:df:fc 10.100.0.6
Feb 28 04:59:55 np0005634017 nova_compute[243452]: 2026-02-28 09:59:55.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 233 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 249 op/s
Feb 28 04:59:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 04:59:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 235 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Feb 28 04:59:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:57.839 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 04:59:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:57.840 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 04:59:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 09:59:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 04:59:58 np0005634017 nova_compute[243452]: 2026-02-28 09:59:58.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 04:59:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 235 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Feb 28 05:00:00 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:00:00 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.885 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "f63bf263-5801-463b-84c3-90fc3e438863" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:00 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.885 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "f63bf263-5801-463b-84c3-90fc3e438863" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:00 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.901 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:00:00 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.993 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:00 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.993 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:00.999 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.000 243456 INFO nova.compute.claims [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.128 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311304572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.637 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.645 243456 DEBUG nova.compute.provider_tree [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.664 243456 DEBUG nova.scheduler.client.report [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.690 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.691 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:00:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 235 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.736 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.737 243456 DEBUG nova.network.neutron [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.762 243456 INFO nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.786 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.886 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.887 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.887 243456 INFO nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Creating image(s)
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.919 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.952 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.982 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:01 np0005634017 nova_compute[243452]: 2026-02-28 10:00:01.987 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.069 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.070 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.071 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.071 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.096 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.100 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f63bf263-5801-463b-84c3-90fc3e438863_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.322 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.327 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.327 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.328 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.328 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.328 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.330 243456 INFO nova.compute.manager [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Terminating instance#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.331 243456 DEBUG nova.compute.manager [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.380 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f63bf263-5801-463b-84c3-90fc3e438863_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:02 np0005634017 kernel: tap72d99337-c4 (unregistering): left promiscuous mode
Feb 28 05:00:02 np0005634017 NetworkManager[49805]: <info>  [1772272802.3994] device (tap72d99337-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:00:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:02Z|00040|binding|INFO|Releasing lport 72d99337-c438-4e0b-b5a9-54214143c1d8 from this chassis (sb_readonly=0)
Feb 28 05:00:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:02Z|00041|binding|INFO|Setting lport 72d99337-c438-4e0b-b5a9-54214143c1d8 down in Southbound
Feb 28 05:00:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:02Z|00042|binding|INFO|Removing iface tap72d99337-c4 ovn-installed in OVS
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.414 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:df:fc 10.100.0.6'], port_security=['fa:16:3e:16:df:fc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cbfb5d00-33da-4fdc-a9b7-a16865020102', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d11cb18c7a4f0bb17a76974faf1f67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c29529b-835f-4326-814a-8256c4b702f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f040037-a4e0-4f34-9b2b-1599fa7f19e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=72d99337-c438-4e0b-b5a9-54214143c1d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.416 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 72d99337-c438-4e0b-b5a9-54214143c1d8 in datapath 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 unbound from our chassis#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.418 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.419 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f88b6a8-eca2-4225-972a-849040791194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.420 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 namespace which is not needed anymore#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:02 np0005634017 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 28 05:00:02 np0005634017 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 12.579s CPU time.
Feb 28 05:00:02 np0005634017 systemd-machined[209480]: Machine qemu-5-instance-00000005 terminated.
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.482 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] resizing rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.521 243456 DEBUG nova.network.neutron [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.522 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:00:02 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[252789]: [NOTICE]   (252793) : haproxy version is 2.8.14-c23fe91
Feb 28 05:00:02 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[252789]: [NOTICE]   (252793) : path to executable is /usr/sbin/haproxy
Feb 28 05:00:02 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[252789]: [ALERT]    (252793) : Current worker (252795) exited with code 143 (Terminated)
Feb 28 05:00:02 np0005634017 neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4[252789]: [WARNING]  (252793) : All workers exited. Exiting... (0)
Feb 28 05:00:02 np0005634017 systemd[1]: libpod-58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a.scope: Deactivated successfully.
Feb 28 05:00:02 np0005634017 podman[253451]: 2026-02-28 10:00:02.564459809 +0000 UTC m=+0.054862806 container died 58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:00:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a-userdata-shm.mount: Deactivated successfully.
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.594 243456 DEBUG nova.objects.instance [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lazy-loading 'migration_context' on Instance uuid f63bf263-5801-463b-84c3-90fc3e438863 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f88c436b33e3fa5bca86ad46b1acb46dc201f1fc502830b3b062edebefa89bbd-merged.mount: Deactivated successfully.
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.604 243456 INFO nova.virt.libvirt.driver [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Instance destroyed successfully.#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.604 243456 DEBUG nova.objects.instance [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lazy-loading 'resources' on Instance uuid cbfb5d00-33da-4fdc-a9b7-a16865020102 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:02 np0005634017 podman[253451]: 2026-02-28 10:00:02.609085916 +0000 UTC m=+0.099488863 container cleanup 58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:00:02 np0005634017 systemd[1]: libpod-conmon-58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a.scope: Deactivated successfully.
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.619 243456 DEBUG nova.virt.libvirt.vif [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T09:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-565971534',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-565971534',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(17),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-565971534',id=5,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=17,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHI7c4keuyySp3JCKwGcWG3wi5bkL/HjgXw+tKRvGg2QtCSRW0FIRsi5MSLftPso6IldV2AEMqsGSDx0hY586rAhWIJtbGmY0OMNEXz4pQjFyQmOBvdLhFcG1jQGXlgziQ==',key_name='tempest-keypair-380335806',keypairs=<?>,launch_index=0,launched_at=2026-02-28T09:59:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8d11cb18c7a4f0bb17a76974faf1f67',ramdisk_id='',reservation_id='r-y6eiyu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-390490490',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-390490490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T09:59:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1fe17532ba42f08284df768e59361b',uuid=cbfb5d00-33da-4fdc-a9b7-a16865020102,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.619 243456 DEBUG nova.network.os_vif_util [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converting VIF {"id": "72d99337-c438-4e0b-b5a9-54214143c1d8", "address": "fa:16:3e:16:df:fc", "network": {"id": "0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1690820459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d11cb18c7a4f0bb17a76974faf1f67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d99337-c4", "ovs_interfaceid": "72d99337-c438-4e0b-b5a9-54214143c1d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.620 243456 DEBUG nova.network.os_vif_util [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.620 243456 DEBUG os_vif [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.623 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.623 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72d99337-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.624 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.624 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Ensure instance console log exists: /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.625 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.626 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.626 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.627 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.632 243456 INFO os_vif [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:df:fc,bridge_name='br-int',has_traffic_filtering=True,id=72d99337-c438-4e0b-b5a9-54214143c1d8,network=Network(0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d99337-c4')#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.650 243456 DEBUG nova.compute.manager [req-d089fe5a-4019-4fa7-95ce-e52b637f2495 req-a9a3b116-11f4-4c00-bffb-7795dfdc1ed6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-vif-unplugged-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.650 243456 DEBUG oslo_concurrency.lockutils [req-d089fe5a-4019-4fa7-95ce-e52b637f2495 req-a9a3b116-11f4-4c00-bffb-7795dfdc1ed6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.651 243456 DEBUG oslo_concurrency.lockutils [req-d089fe5a-4019-4fa7-95ce-e52b637f2495 req-a9a3b116-11f4-4c00-bffb-7795dfdc1ed6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.651 243456 DEBUG oslo_concurrency.lockutils [req-d089fe5a-4019-4fa7-95ce-e52b637f2495 req-a9a3b116-11f4-4c00-bffb-7795dfdc1ed6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.651 243456 DEBUG nova.compute.manager [req-d089fe5a-4019-4fa7-95ce-e52b637f2495 req-a9a3b116-11f4-4c00-bffb-7795dfdc1ed6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] No waiting events found dispatching network-vif-unplugged-72d99337-c438-4e0b-b5a9-54214143c1d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.652 243456 DEBUG nova.compute.manager [req-d089fe5a-4019-4fa7-95ce-e52b637f2495 req-a9a3b116-11f4-4c00-bffb-7795dfdc1ed6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-vif-unplugged-72d99337-c438-4e0b-b5a9-54214143c1d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.654 243456 WARNING nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.662 243456 DEBUG nova.virt.libvirt.host [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.663 243456 DEBUG nova.virt.libvirt.host [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:00:02 np0005634017 podman[253510]: 2026-02-28 10:00:02.666807092 +0000 UTC m=+0.037730924 container remove 58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.667 243456 DEBUG nova.virt.libvirt.host [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.668 243456 DEBUG nova.virt.libvirt.host [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.668 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.669 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.669 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.669 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.670 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.670 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.670 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.670 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.671 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.671 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[96a423c2-892d-44a4-a883-7c41a543ce87]: (4, ('Sat Feb 28 10:00:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 (58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a)\n58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a\nSat Feb 28 10:00:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 (58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a)\n58edd6b23a5647d75836994cb91eb227679a720da4c88e28781c6741afcf104a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.671 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.672 243456 DEBUG nova.virt.hardware [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4362a00c-ad8b-4b5a-8138-b1f796e693f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ff5ef4b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.674 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:02 np0005634017 kernel: tap0ff5ef4b-e0: left promiscuous mode
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.685 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e96e3ae6-50d7-4884-9459-22fdd069cf7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 nova_compute[243452]: 2026-02-28 10:00:02.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.699 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9166af2-4b82-4ee3-9868-9d2c87a6e5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.701 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ecc902-bd1d-48f5-ac98-4828c40b7d2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[954434d6-d4d6-4c87-8541-5b272d1d6f48]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428840, 'reachable_time': 35360, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253542, 'error': None, 'target': 'ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.718 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ff5ef4b-e6cb-48f9-904e-c0a38484b0f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:00:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:02.718 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5122bc69-0b8f-48f4-b84b-a56180843a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:02 np0005634017 systemd[1]: run-netns-ovnmeta\x2d0ff5ef4b\x2de6cb\x2d48f9\x2d904e\x2dc0a38484b0f4.mount: Deactivated successfully.
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.097 243456 INFO nova.virt.libvirt.driver [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Deleting instance files /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102_del#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.098 243456 INFO nova.virt.libvirt.driver [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Deletion of /var/lib/nova/instances/cbfb5d00-33da-4fdc-a9b7-a16865020102_del complete#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.154 243456 INFO nova.compute.manager [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.156 243456 DEBUG oslo.service.loopingcall [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.156 243456 DEBUG nova.compute.manager [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.157 243456 DEBUG nova.network.neutron [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:00:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147461632' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.268 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.300 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.305 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 251 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 786 KiB/s rd, 3.0 MiB/s wr, 124 op/s
Feb 28 05:00:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805929751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.834 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.837 243456 DEBUG nova.objects.instance [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid f63bf263-5801-463b-84c3-90fc3e438863 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.856 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <uuid>f63bf263-5801-463b-84c3-90fc3e438863</uuid>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <name>instance-00000007</name>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-869282410</nova:name>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:00:02</nova:creationTime>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:user uuid="330f60740eda47c589662819f21d61bf">tempest-ServerDiagnosticsNegativeTest-702084415-project-member</nova:user>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <nova:project uuid="9cd849430cdc473391344d9e0c297bb7">tempest-ServerDiagnosticsNegativeTest-702084415</nova:project>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <entry name="serial">f63bf263-5801-463b-84c3-90fc3e438863</entry>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <entry name="uuid">f63bf263-5801-463b-84c3-90fc3e438863</entry>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f63bf263-5801-463b-84c3-90fc3e438863_disk">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f63bf263-5801-463b-84c3-90fc3e438863_disk.config">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/console.log" append="off"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:00:03 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:00:03 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:00:03 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:00:03 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.936 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.937 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.938 243456 INFO nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Using config drive#033[00m
Feb 28 05:00:03 np0005634017 nova_compute[243452]: 2026-02-28 10:00:03.972 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.354 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.569 243456 INFO nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Creating config drive at /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/disk.config#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.578 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqm_4pwgh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.638 243456 DEBUG nova.network.neutron [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.664 243456 INFO nova.compute.manager [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Took 1.51 seconds to deallocate network for instance.#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.706 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqm_4pwgh" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.739 243456 DEBUG nova.storage.rbd_utils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] rbd image f63bf263-5801-463b-84c3-90fc3e438863_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.745 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/disk.config f63bf263-5801-463b-84c3-90fc3e438863_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.769 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.770 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.859 243456 DEBUG oslo_concurrency.processutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.891 243456 DEBUG nova.compute.manager [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.892 243456 DEBUG oslo_concurrency.lockutils [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.892 243456 DEBUG oslo_concurrency.lockutils [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.892 243456 DEBUG oslo_concurrency.lockutils [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.893 243456 DEBUG nova.compute.manager [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] No waiting events found dispatching network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.893 243456 WARNING nova.compute.manager [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received unexpected event network-vif-plugged-72d99337-c438-4e0b-b5a9-54214143c1d8 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.893 243456 DEBUG nova.compute.manager [req-2eff6faa-5487-428f-ba87-e6b8b736aac7 req-6b3f619f-e78a-467b-891c-2aeb70e1029e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Received event network-vif-deleted-72d99337-c438-4e0b-b5a9-54214143c1d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.913 243456 DEBUG oslo_concurrency.processutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/disk.config f63bf263-5801-463b-84c3-90fc3e438863_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:04 np0005634017 nova_compute[243452]: 2026-02-28 10:00:04.914 243456 INFO nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Deleting local config drive /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863/disk.config because it was imported into RBD.#033[00m
Feb 28 05:00:04 np0005634017 systemd-machined[209480]: New machine qemu-7-instance-00000007.
Feb 28 05:00:04 np0005634017 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.252 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272805.2517595, f63bf263-5801-463b-84c3-90fc3e438863 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.253 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.257 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.258 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.263 243456 INFO nova.virt.libvirt.driver [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Instance spawned successfully.#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.263 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.303 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.310 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.311 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.312 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.313 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.314 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.315 243456 DEBUG nova.virt.libvirt.driver [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.320 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.321 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.360 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.360 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272805.2565267, f63bf263-5801-463b-84c3-90fc3e438863 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.360 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] VM Started (Lifecycle Event)#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.415 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.417 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.420 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.424 243456 INFO nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Took 3.54 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.424 243456 DEBUG nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.437 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874942149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.483 243456 INFO nova.compute.manager [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Took 4.52 seconds to build instance.#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.490 243456 DEBUG oslo_concurrency.processutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.495 243456 DEBUG nova.compute.provider_tree [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.500 243456 DEBUG oslo_concurrency.lockutils [None req-3eb4b915-4cb9-4433-8697-b4d9c58acf33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "f63bf263-5801-463b-84c3-90fc3e438863" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.507 243456 DEBUG nova.scheduler.client.report [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.529 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.531 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.532 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.532 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.532 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.574 243456 INFO nova.scheduler.client.report [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Deleted allocations for instance cbfb5d00-33da-4fdc-a9b7-a16865020102#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.600 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272790.5983315, 271317bd-57ab-43e2-a767-935ba54d7a8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.600 243456 INFO nova.compute.manager [-] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.622 243456 DEBUG nova.compute.manager [None req-35f92b15-6322-4e3c-8235-a009b30cd040 - - - - - -] [instance: 271317bd-57ab-43e2-a767-935ba54d7a8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:05 np0005634017 nova_compute[243452]: 2026-02-28 10:00:05.637 243456 DEBUG oslo_concurrency.lockutils [None req-bedb4f5f-ba1c-4c1c-8079-65e4ebef3f5e 1e1fe17532ba42f08284df768e59361b d8d11cb18c7a4f0bb17a76974faf1f67 - - default default] Lock "cbfb5d00-33da-4fdc-a9b7-a16865020102" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 224 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Feb 28 05:00:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126617003' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.109 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.204 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.204 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.343 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.345 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4643MB free_disk=59.953485606238246GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.438 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f63bf263-5801-463b-84c3-90fc3e438863 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.438 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.439 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.477 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875323552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:06 np0005634017 nova_compute[243452]: 2026-02-28 10:00:06.994 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:07 np0005634017 nova_compute[243452]: 2026-02-28 10:00:07.000 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:07 np0005634017 nova_compute[243452]: 2026-02-28 10:00:07.020 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:07 np0005634017 nova_compute[243452]: 2026-02-28 10:00:07.051 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:00:07 np0005634017 nova_compute[243452]: 2026-02-28 10:00:07.052 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:07 np0005634017 nova_compute[243452]: 2026-02-28 10:00:07.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.265 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "f63bf263-5801-463b-84c3-90fc3e438863" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.267 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "f63bf263-5801-463b-84c3-90fc3e438863" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.268 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "f63bf263-5801-463b-84c3-90fc3e438863-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.269 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "f63bf263-5801-463b-84c3-90fc3e438863-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.270 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "f63bf263-5801-463b-84c3-90fc3e438863-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.272 243456 INFO nova.compute.manager [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Terminating instance#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.274 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "refresh_cache-f63bf263-5801-463b-84c3-90fc3e438863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.275 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquired lock "refresh_cache-f63bf263-5801-463b-84c3-90fc3e438863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.276 243456 DEBUG nova.network.neutron [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.384 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:08 np0005634017 nova_compute[243452]: 2026-02-28 10:00:08.803 243456 DEBUG nova.network.neutron [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.047 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.048 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.062 243456 DEBUG nova.network.neutron [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.081 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Releasing lock "refresh_cache-f63bf263-5801-463b-84c3-90fc3e438863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.083 243456 DEBUG nova.compute.manager [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:00:09 np0005634017 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 28 05:00:09 np0005634017 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 4.273s CPU time.
Feb 28 05:00:09 np0005634017 systemd-machined[209480]: Machine qemu-7-instance-00000007 terminated.
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.308 243456 INFO nova.virt.libvirt.driver [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Instance destroyed successfully.#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.308 243456 DEBUG nova.objects.instance [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lazy-loading 'resources' on Instance uuid f63bf263-5801-463b-84c3-90fc3e438863 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.694 243456 INFO nova.virt.libvirt.driver [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Deleting instance files /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863_del#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.694 243456 INFO nova.virt.libvirt.driver [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Deletion of /var/lib/nova/instances/f63bf263-5801-463b-84c3-90fc3e438863_del complete#033[00m
Feb 28 05:00:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.782 243456 INFO nova.compute.manager [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.784 243456 DEBUG oslo.service.loopingcall [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.784 243456 DEBUG nova.compute.manager [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.784 243456 DEBUG nova.network.neutron [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.980 243456 DEBUG nova.network.neutron [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:09 np0005634017 nova_compute[243452]: 2026-02-28 10:00:09.997 243456 DEBUG nova.network.neutron [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.013 243456 INFO nova.compute.manager [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Took 0.23 seconds to deallocate network for instance.#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.025 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "5cac75f5-aeef-427d-b484-7d40a33679cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.026 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.060 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.146 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.146 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.215 243456 DEBUG oslo_concurrency.processutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.263 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.672 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "e7cedb7c-31a4-4578-82e8-f93b29898300" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.673 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.690 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.750 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135146981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.808 243456 DEBUG oslo_concurrency.processutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.815 243456 DEBUG nova.compute.provider_tree [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:10 np0005634017 nova_compute[243452]: 2026-02-28 10:00:10.876 243456 DEBUG nova.scheduler.client.report [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.098 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.102 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.113 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.114 243456 INFO nova.compute.claims [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.129 243456 INFO nova.scheduler.client.report [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Deleted allocations for instance f63bf263-5801-463b-84c3-90fc3e438863#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.228 243456 DEBUG oslo_concurrency.lockutils [None req-17086446-5448-4fc9-b4e7-bad233677b33 330f60740eda47c589662819f21d61bf 9cd849430cdc473391344d9e0c297bb7 - - default default] Lock "f63bf263-5801-463b-84c3-90fc3e438863" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.301 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 169 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Feb 28 05:00:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648374953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.874 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.882 243456 DEBUG nova.compute.provider_tree [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.900 243456 DEBUG nova.scheduler.client.report [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.925 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.926 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.931 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.939 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:00:11 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.939 243456 INFO nova.compute.claims [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:11.999 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.034 243456 INFO nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.062 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.114 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.195 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.198 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.199 243456 INFO nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Creating image(s)#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.232 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.265 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.304 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.311 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.400 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.402 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.402 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.403 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.429 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.433 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5cac75f5-aeef-427d-b484-7d40a33679cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.685 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5cac75f5-aeef-427d-b484-7d40a33679cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2033015449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.759 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] resizing rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.794 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.801 243456 DEBUG nova.compute.provider_tree [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.821 243456 DEBUG nova.scheduler.client.report [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.845 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.846 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.891 243456 DEBUG nova.objects.instance [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lazy-loading 'migration_context' on Instance uuid 5cac75f5-aeef-427d-b484-7d40a33679cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.896 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.897 243456 DEBUG nova.network.neutron [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.906 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.906 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Ensure instance console log exists: /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.907 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.907 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.908 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.909 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.914 243456 WARNING nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.919 243456 DEBUG nova.virt.libvirt.host [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.919 243456 DEBUG nova.virt.libvirt.host [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.920 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.927 243456 DEBUG nova.virt.libvirt.host [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.927 243456 DEBUG nova.virt.libvirt.host [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.928 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.928 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.929 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.929 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.929 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.929 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.930 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.930 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.930 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.930 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.930 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.931 243456 DEBUG nova.virt.hardware [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.933 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:12 np0005634017 nova_compute[243452]: 2026-02-28 10:00:12.953 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.030 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.031 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.032 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Creating image(s)#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.057 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.085 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.110 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.113 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.173 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.174 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.174 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.175 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.201 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.205 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e7cedb7c-31a4-4578-82e8-f93b29898300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.477 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e7cedb7c-31a4-4578-82e8-f93b29898300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677768186' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.550 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.577 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.582 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.602 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] resizing rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.703 243456 DEBUG nova.objects.instance [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'migration_context' on Instance uuid e7cedb7c-31a4-4578-82e8-f93b29898300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 166 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 180 op/s
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.915 243456 DEBUG nova.network.neutron [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:00:13 np0005634017 nova_compute[243452]: 2026-02-28 10:00:13.916 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:00:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2444866762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.103 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.105 243456 DEBUG nova.objects.instance [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cac75f5-aeef-427d-b484-7d40a33679cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.842 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.844 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Ensure instance console log exists: /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.846 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.846 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.847 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.849 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.855 243456 WARNING nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.860 243456 DEBUG nova.virt.libvirt.host [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.860 243456 DEBUG nova.virt.libvirt.host [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.863 243456 DEBUG nova.virt.libvirt.host [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.864 243456 DEBUG nova.virt.libvirt.host [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.864 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.865 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.865 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.866 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.866 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.866 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.866 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.867 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.867 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.867 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.867 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.868 243456 DEBUG nova.virt.hardware [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:00:14 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.871 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:14.999 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <uuid>5cac75f5-aeef-427d-b484-7d40a33679cf</uuid>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <name>instance-00000008</name>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-851731741</nova:name>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:00:12</nova:creationTime>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:user uuid="137c57a081ef448e87e6ee3762d09a10">tempest-ServerDiagnosticsV248Test-216069417-project-member</nova:user>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <nova:project uuid="28fea2ec5f724190b6d7966ed36747db">tempest-ServerDiagnosticsV248Test-216069417</nova:project>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <entry name="serial">5cac75f5-aeef-427d-b484-7d40a33679cf</entry>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <entry name="uuid">5cac75f5-aeef-427d-b484-7d40a33679cf</entry>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/5cac75f5-aeef-427d-b484-7d40a33679cf_disk">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/5cac75f5-aeef-427d-b484-7d40a33679cf_disk.config">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/console.log" append="off"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:00:15 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:00:15 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:00:15 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:00:15 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.167 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.168 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.169 243456 INFO nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Using config drive#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.194 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2666279813' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.493 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.516 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.521 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.610 243456 INFO nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Creating config drive at /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/disk.config#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.618 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc0ydyeqv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 226 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.744 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc0ydyeqv" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.771 243456 DEBUG nova.storage.rbd_utils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] rbd image 5cac75f5-aeef-427d-b484-7d40a33679cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.775 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/disk.config 5cac75f5-aeef-427d-b484-7d40a33679cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.888 243456 DEBUG oslo_concurrency.processutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/disk.config 5cac75f5-aeef-427d-b484-7d40a33679cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:15 np0005634017 nova_compute[243452]: 2026-02-28 10:00:15.889 243456 INFO nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deleting local config drive /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf/disk.config because it was imported into RBD.#033[00m
Feb 28 05:00:15 np0005634017 systemd-machined[209480]: New machine qemu-8-instance-00000008.
Feb 28 05:00:15 np0005634017 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb 28 05:00:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257427459' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.121 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.125 243456 DEBUG nova.objects.instance [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'pci_devices' on Instance uuid e7cedb7c-31a4-4578-82e8-f93b29898300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.143 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <uuid>e7cedb7c-31a4-4578-82e8-f93b29898300</uuid>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <name>instance-00000009</name>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:name>tempest-LiveMigrationNegativeTest-server-468219823</nova:name>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:00:14</nova:creationTime>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:user uuid="cb71641f461242f9afa154410c27a4c5">tempest-LiveMigrationNegativeTest-1894925611-project-member</nova:user>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <nova:project uuid="e4ed979bda47466ebd87517c73a12e9d">tempest-LiveMigrationNegativeTest-1894925611</nova:project>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <entry name="serial">e7cedb7c-31a4-4578-82e8-f93b29898300</entry>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <entry name="uuid">e7cedb7c-31a4-4578-82e8-f93b29898300</entry>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/e7cedb7c-31a4-4578-82e8-f93b29898300_disk">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/console.log" append="off"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:00:16 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:00:16 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:00:16 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:00:16 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:00:16 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:00:16 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.189 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.190 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.191 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Using config drive#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.220 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.458 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Creating config drive at /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.463 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpw2kd9r4m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.493 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272816.4926243, 5cac75f5-aeef-427d-b484-7d40a33679cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.493 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.498 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.498 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.512 243456 INFO nova.virt.libvirt.driver [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance spawned successfully.#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.516 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:00:16 np0005634017 podman[254466]: 2026-02-28 10:00:16.519855195 +0000 UTC m=+0.079418438 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:00:16 np0005634017 podman[254465]: 2026-02-28 10:00:16.548939454 +0000 UTC m=+0.109703331 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.560 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.566 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.569 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.570 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.570 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.571 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.572 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.572 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.581 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpw2kd9r4m" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.603 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.607 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.622 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.623 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272816.4996681, 5cac75f5-aeef-427d-b484-7d40a33679cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.623 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] VM Started (Lifecycle Event)#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.644 243456 INFO nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 4.45 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.645 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.646 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.658 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.702 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.721 243456 INFO nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 6.48 seconds to build instance.#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.738 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.739 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deleting local config drive /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config because it was imported into RBD.#033[00m
Feb 28 05:00:16 np0005634017 nova_compute[243452]: 2026-02-28 10:00:16.743 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:16 np0005634017 systemd-machined[209480]: New machine qemu-9-instance-00000009.
Feb 28 05:00:16 np0005634017 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.472 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272817.4718213, e7cedb7c-31a4-4578-82e8-f93b29898300 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.474 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] VM Resumed (Lifecycle Event)
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.479 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.481 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.487 243456 INFO nova.virt.libvirt.driver [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance spawned successfully.
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.488 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.518 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.525 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.529 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.530 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.531 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.532 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.532 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.533 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.576 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272817.472855, e7cedb7c-31a4-4578-82e8-f93b29898300 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] VM Started (Lifecycle Event)
Feb 28 05:00:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.588 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272802.5598826, cbfb5d00-33da-4fdc-a9b7-a16865020102 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.589 243456 INFO nova.compute.manager [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] VM Stopped (Lifecycle Event)
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.611 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.612 243456 DEBUG nova.compute.manager [None req-ce714cbe-946d-46e9-a47e-713867866c2e - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.615 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.621 243456 INFO nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 4.59 seconds to spawn the instance on the hypervisor.
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.621 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.686 243456 INFO nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 6.95 seconds to build instance.
Feb 28 05:00:17 np0005634017 nova_compute[243452]: 2026-02-28 10:00:17.721 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Feb 28 05:00:18 np0005634017 nova_compute[243452]: 2026-02-28 10:00:18.136 243456 DEBUG nova.compute.manager [None req-126044c0-a853-44de-8144-0a919c2669b6 fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:18 np0005634017 nova_compute[243452]: 2026-02-28 10:00:18.140 243456 INFO nova.compute.manager [None req-126044c0-a853-44de-8144-0a919c2669b6 fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Retrieving diagnostics
Feb 28 05:00:18 np0005634017 nova_compute[243452]: 2026-02-28 10:00:18.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:00:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.615 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "5706ada3-074b-4ac3-8540-425edba37cbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.617 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.646 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.719 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.719 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.725 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.726 243456 INFO nova.compute.claims [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:00:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Feb 28 05:00:21 np0005634017 nova_compute[243452]: 2026-02-28 10:00:21.871 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188189077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.385 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.393 243456 DEBUG nova.compute.provider_tree [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.413 243456 DEBUG nova.scheduler.client.report [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.439 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.441 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.491 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.492 243456 DEBUG nova.network.neutron [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.515 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.552 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:00:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.638 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.657 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.659 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.660 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Creating image(s)
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.693 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.726 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.754 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.758 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.784 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "a2e72370-536c-417e-8667-678b824b849c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.784 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.802 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.822 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.823 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.823 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.824 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.846 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.850 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5706ada3-074b-4ac3-8540-425edba37cbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.892 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.893 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.905 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:00:22 np0005634017 nova_compute[243452]: 2026-02-28 10:00:22.906 243456 INFO nova.compute.claims [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.008 243456 DEBUG nova.network.neutron [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.008 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.089 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.112 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5706ada3-074b-4ac3-8540-425edba37cbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.185 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] resizing rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.270 243456 DEBUG nova.objects.instance [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lazy-loading 'migration_context' on Instance uuid 5706ada3-074b-4ac3-8540-425edba37cbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.294 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.294 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Ensure instance console log exists: /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.295 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.295 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.295 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.297 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.302 243456 WARNING nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.307 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.307 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.311 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.311 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.311 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.312 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.313 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.314 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.314 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.316 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.316 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.316 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.319 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603828765' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.630 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.640 243456 DEBUG nova.compute.provider_tree [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.673 243456 DEBUG nova.scheduler.client.report [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.719 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.721 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:00:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.792 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.793 243456 DEBUG nova.network.neutron [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.817 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.845 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:00:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548531986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.864 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.893 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.898 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.983 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.986 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:00:23 np0005634017 nova_compute[243452]: 2026-02-28 10:00:23.987 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Creating image(s)#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.008 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.031 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.056 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.059 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.110 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.111 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.112 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.113 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.139 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.142 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a2e72370-536c-417e-8667-678b824b849c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.306 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272809.3039806, f63bf263-5801-463b-84c3-90fc3e438863 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.307 243456 INFO nova.compute.manager [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.326 243456 DEBUG nova.compute.manager [None req-310d40aa-1ae5-4bfe-ae90-9c554acab7bc - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.356 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a2e72370-536c-417e-8667-678b824b849c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.402 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] resizing rbd image a2e72370-536c-417e-8667-678b824b849c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:00:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132892288' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.463 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.464 243456 DEBUG nova.objects.instance [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5706ada3-074b-4ac3-8540-425edba37cbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.470 243456 DEBUG nova.objects.instance [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'migration_context' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.492 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <uuid>5706ada3-074b-4ac3-8540-425edba37cbe</uuid>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <name>instance-0000000a</name>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:name>tempest-TenantUsagesTestJSON-server-900182279</nova:name>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:00:23</nova:creationTime>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:user uuid="fe16ef4b58de443ba0abd815064150e4">tempest-TenantUsagesTestJSON-2127500945-project-member</nova:user>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <nova:project uuid="7d9c93e3ee774e3ea2b19d16b9ceab1b">tempest-TenantUsagesTestJSON-2127500945</nova:project>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <entry name="serial">5706ada3-074b-4ac3-8540-425edba37cbe</entry>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <entry name="uuid">5706ada3-074b-4ac3-8540-425edba37cbe</entry>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/5706ada3-074b-4ac3-8540-425edba37cbe_disk">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/5706ada3-074b-4ac3-8540-425edba37cbe_disk.config">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/console.log" append="off"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:00:24 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:00:24 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:00:24 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:00:24 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.497 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.498 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Ensure instance console log exists: /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.498 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.498 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.499 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.538 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.539 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.539 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Using config drive
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.558 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.738 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Creating config drive at /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.742 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvmjv6gx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.761 243456 DEBUG nova.network.neutron [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.762 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.763 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.768 243456 WARNING nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.772 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.773 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.777 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.777 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.778 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.778 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.778 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.781 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.783 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.864 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvmjv6gx7" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.901 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:24 np0005634017 nova_compute[243452]: 2026-02-28 10:00:24.907 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.056 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.058 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deleting local config drive /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config because it was imported into RBD.
Feb 28 05:00:25 np0005634017 systemd-machined[209480]: New machine qemu-10-instance-0000000a.
Feb 28 05:00:25 np0005634017 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb 28 05:00:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660596840' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.333 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.362 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.368 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.650 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272825.6498132, 5706ada3-074b-4ac3-8540-425edba37cbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.651 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] VM Resumed (Lifecycle Event)
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.653 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.654 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.658 243456 INFO nova.virt.libvirt.driver [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance spawned successfully.
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.659 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.680 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.686 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.690 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.691 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.692 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.692 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.693 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.694 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.718 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.719 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272825.6512961, 5706ada3-074b-4ac3-8540-425edba37cbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.719 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] VM Started (Lifecycle Event)
Feb 28 05:00:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 299 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 227 op/s
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.750 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.754 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.761 243456 INFO nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 3.10 seconds to spawn the instance on the hypervisor.
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.762 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.771 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.854 243456 INFO nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 4.16 seconds to build instance.
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.879 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1607902511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.905 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.907 243456 DEBUG nova.objects.instance [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'pci_devices' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:25 np0005634017 nova_compute[243452]: 2026-02-28 10:00:25.921 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <uuid>a2e72370-536c-417e-8667-678b824b849c</uuid>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <name>instance-0000000b</name>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1356439074</nova:name>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:00:24</nova:creationTime>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:user uuid="cb71641f461242f9afa154410c27a4c5">tempest-LiveMigrationNegativeTest-1894925611-project-member</nova:user>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <nova:project uuid="e4ed979bda47466ebd87517c73a12e9d">tempest-LiveMigrationNegativeTest-1894925611</nova:project>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <entry name="serial">a2e72370-536c-417e-8667-678b824b849c</entry>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <entry name="uuid">a2e72370-536c-417e-8667-678b824b849c</entry>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a2e72370-536c-417e-8667-678b824b849c_disk">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a2e72370-536c-417e-8667-678b824b849c_disk.config">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/console.log" append="off"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:00:25 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:00:25 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:00:25 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:00:25 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.012 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.013 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.013 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Using config drive#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.042 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.267 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Creating config drive at /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.272 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp73t_q6i4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.394 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp73t_q6i4" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.426 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.432 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config a2e72370-536c-417e-8667-678b824b849c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.596 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config a2e72370-536c-417e-8667-678b824b849c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:26 np0005634017 nova_compute[243452]: 2026-02-28 10:00:26.599 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Deleting local config drive /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config because it was imported into RBD.#033[00m
Feb 28 05:00:26 np0005634017 systemd-machined[209480]: New machine qemu-11-instance-0000000b.
Feb 28 05:00:26 np0005634017 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.063 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272827.0629199, a2e72370-536c-417e-8667-678b824b849c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.065 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.068 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.069 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.074 243456 INFO nova.virt.libvirt.driver [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance spawned successfully.#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.075 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.090 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.102 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.102 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.102 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.103 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.103 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.104 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.128 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.130 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272827.063029, a2e72370-536c-417e-8667-678b824b849c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.130 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Started (Lifecycle Event)#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.155 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.164 243456 INFO nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 3.18 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.164 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.165 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.194 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.242 243456 INFO nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 4.37 seconds to build instance.#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.272 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:00:27 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:00:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.4 MiB/s wr, 280 op/s
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.901 243456 DEBUG nova.objects.instance [None req-bec8a33c-7b99-4570-ae36-53bee6c28520 aa88e7a4d6a74829a2aa88687f3e0d9b 048b401fefe0402ab609aa3c8e535be6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.921 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272827.9217892, a2e72370-536c-417e-8667-678b824b849c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.922 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.943 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.946 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:27 np0005634017 nova_compute[243452]: 2026-02-28 10:00:27.973 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:28.044494657 +0000 UTC m=+0.090872401 container create d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:27.979223659 +0000 UTC m=+0.025601453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:00:28 np0005634017 systemd[1]: Started libpod-conmon-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope.
Feb 28 05:00:28 np0005634017 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 28 05:00:28 np0005634017 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 1.203s CPU time.
Feb 28 05:00:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:28 np0005634017 systemd-machined[209480]: Machine qemu-11-instance-0000000b terminated.
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:28.169669533 +0000 UTC m=+0.216047307 container init d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:28.177445232 +0000 UTC m=+0.223822986 container start d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:00:28 np0005634017 cranky_saha[255499]: 167 167
Feb 28 05:00:28 np0005634017 systemd[1]: libpod-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope: Deactivated successfully.
Feb 28 05:00:28 np0005634017 conmon[255499]: conmon d1deed01d9fe1667a235 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope/container/memory.events
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:28.201458318 +0000 UTC m=+0.247836102 container attach d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:28.201837969 +0000 UTC m=+0.248215723 container died d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "5706ada3-074b-4ac3-8540-425edba37cbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "5706ada3-074b-4ac3-8540-425edba37cbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.227 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.228 243456 INFO nova.compute.manager [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Terminating instance#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.229 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "refresh_cache-5706ada3-074b-4ac3-8540-425edba37cbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.229 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquired lock "refresh_cache-5706ada3-074b-4ac3-8540-425edba37cbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.229 243456 DEBUG nova.network.neutron [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.244 243456 DEBUG nova.compute.manager [None req-bec8a33c-7b99-4570-ae36-53bee6c28520 aa88e7a4d6a74829a2aa88687f3e0d9b 048b401fefe0402ab609aa3c8e535be6 - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0974a094a17b97c8e96539590c954cb4e8f08c5f47e8305b3dbe805e8e5d16fa-merged.mount: Deactivated successfully.
Feb 28 05:00:28 np0005634017 podman[255483]: 2026-02-28 10:00:28.271394548 +0000 UTC m=+0.317772282 container remove d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:00:28 np0005634017 systemd[1]: libpod-conmon-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope: Deactivated successfully.
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.394 243456 DEBUG nova.compute.manager [None req-40d8a711-a3ea-43bd-a6c9-cdc6a946890a fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.397 243456 INFO nova.compute.manager [None req-40d8a711-a3ea-43bd-a6c9-cdc6a946890a fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Retrieving diagnostics#033[00m
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.429889021 +0000 UTC m=+0.044682628 container create 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:00:28 np0005634017 systemd[1]: Started libpod-conmon-2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72.scope.
Feb 28 05:00:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.412987015 +0000 UTC m=+0.027780642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.51469108 +0000 UTC m=+0.129484687 container init 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.520202896 +0000 UTC m=+0.134996503 container start 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.522754797 +0000 UTC m=+0.137548414 container attach 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.550 243456 DEBUG nova.network.neutron [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.716 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "5cac75f5-aeef-427d-b484-7d40a33679cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.717 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.717 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "5cac75f5-aeef-427d-b484-7d40a33679cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.718 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.718 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.719 243456 INFO nova.compute.manager [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Terminating instance#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.720 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "refresh_cache-5cac75f5-aeef-427d-b484-7d40a33679cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.720 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquired lock "refresh_cache-5cac75f5-aeef-427d-b484-7d40a33679cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.720 243456 DEBUG nova.network.neutron [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.830 243456 DEBUG nova.network.neutron [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.847 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Releasing lock "refresh_cache-5706ada3-074b-4ac3-8540-425edba37cbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.848 243456 DEBUG nova.compute.manager [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:00:28 np0005634017 nova_compute[243452]: 2026-02-28 10:00:28.857 243456 DEBUG nova.network.neutron [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:28 np0005634017 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 28 05:00:28 np0005634017 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 3.550s CPU time.
Feb 28 05:00:28 np0005634017 systemd-machined[209480]: Machine qemu-10-instance-0000000a terminated.
Feb 28 05:00:28 np0005634017 vibrant_mahavira[255542]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:00:28 np0005634017 vibrant_mahavira[255542]: --> All data devices are unavailable
Feb 28 05:00:28 np0005634017 systemd[1]: libpod-2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72.scope: Deactivated successfully.
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.959234752 +0000 UTC m=+0.574028379 container died 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:00:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619-merged.mount: Deactivated successfully.
Feb 28 05:00:28 np0005634017 podman[255526]: 2026-02-28 10:00:28.994386502 +0000 UTC m=+0.609180109 container remove 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:00:29
Feb 28 05:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta']
Feb 28 05:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:00:29 np0005634017 systemd[1]: libpod-conmon-2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72.scope: Deactivated successfully.
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.044 243456 DEBUG nova.network.neutron [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.065 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Releasing lock "refresh_cache-5cac75f5-aeef-427d-b484-7d40a33679cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.066 243456 DEBUG nova.compute.manager [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.085 243456 INFO nova.virt.libvirt.driver [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance destroyed successfully.#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.085 243456 DEBUG nova.objects.instance [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lazy-loading 'resources' on Instance uuid 5706ada3-074b-4ac3-8540-425edba37cbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:29 np0005634017 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 28 05:00:29 np0005634017 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 11.608s CPU time.
Feb 28 05:00:29 np0005634017 systemd-machined[209480]: Machine qemu-8-instance-00000008 terminated.
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.297 243456 INFO nova.virt.libvirt.driver [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance destroyed successfully.#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.298 243456 DEBUG nova.objects.instance [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lazy-loading 'resources' on Instance uuid 5cac75f5-aeef-427d-b484-7d40a33679cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.384 243456 INFO nova.virt.libvirt.driver [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deleting instance files /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe_del#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.385 243456 INFO nova.virt.libvirt.driver [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deletion of /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe_del complete#033[00m
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.427479441 +0000 UTC m=+0.053458157 container create f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.437 243456 INFO nova.compute.manager [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.438 243456 DEBUG oslo.service.loopingcall [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.438 243456 DEBUG nova.compute.manager [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.438 243456 DEBUG nova.network.neutron [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:00:29 np0005634017 systemd[1]: Started libpod-conmon-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope.
Feb 28 05:00:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.408165827 +0000 UTC m=+0.034144593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.512446934 +0000 UTC m=+0.138425740 container init f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.523522276 +0000 UTC m=+0.149501002 container start f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.527601511 +0000 UTC m=+0.153580307 container attach f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:00:29 np0005634017 distracted_keller[255696]: 167 167
Feb 28 05:00:29 np0005634017 systemd[1]: libpod-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope: Deactivated successfully.
Feb 28 05:00:29 np0005634017 conmon[255696]: conmon f92d9e319043d1e7e14b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope/container/memory.events
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.533531168 +0000 UTC m=+0.159509944 container died f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:00:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4cda96c44d47c499b345b2d5522199217e83ee2df10d12458f6f8dc08e3e96f3-merged.mount: Deactivated successfully.
Feb 28 05:00:29 np0005634017 podman[255679]: 2026-02-28 10:00:29.582783345 +0000 UTC m=+0.208762061 container remove f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.587 243456 INFO nova.virt.libvirt.driver [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deleting instance files /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf_del#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.589 243456 INFO nova.virt.libvirt.driver [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deletion of /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf_del complete#033[00m
Feb 28 05:00:29 np0005634017 systemd[1]: libpod-conmon-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope: Deactivated successfully.
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.648 243456 INFO nova.compute.manager [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.648 243456 DEBUG oslo.service.loopingcall [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.650 243456 DEBUG nova.compute.manager [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:00:29 np0005634017 nova_compute[243452]: 2026-02-28 10:00:29.650 243456 DEBUG nova.network.neutron [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:00:29 np0005634017 podman[255720]: 2026-02-28 10:00:29.699863573 +0000 UTC m=+0.036776827 container create 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:00:29 np0005634017 systemd[1]: Started libpod-conmon-46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87.scope.
Feb 28 05:00:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.0 MiB/s wr, 228 op/s
Feb 28 05:00:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:29 np0005634017 podman[255720]: 2026-02-28 10:00:29.680312112 +0000 UTC m=+0.017225356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:00:29 np0005634017 podman[255720]: 2026-02-28 10:00:29.788596062 +0000 UTC m=+0.125509346 container init 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:00:29 np0005634017 podman[255720]: 2026-02-28 10:00:29.795824716 +0000 UTC m=+0.132737940 container start 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:00:29 np0005634017 podman[255720]: 2026-02-28 10:00:29.800557029 +0000 UTC m=+0.137470303 container attach 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]: {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:    "0": [
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:        {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "devices": [
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "/dev/loop3"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            ],
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_name": "ceph_lv0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_size": "21470642176",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "name": "ceph_lv0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "tags": {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cluster_name": "ceph",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.crush_device_class": "",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.encrypted": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.objectstore": "bluestore",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osd_id": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.type": "block",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.vdo": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.with_tpm": "0"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            },
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "type": "block",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "vg_name": "ceph_vg0"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:        }
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:    ],
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:    "1": [
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:        {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "devices": [
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "/dev/loop4"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            ],
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_name": "ceph_lv1",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_size": "21470642176",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "name": "ceph_lv1",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "tags": {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cluster_name": "ceph",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.crush_device_class": "",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.encrypted": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.objectstore": "bluestore",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osd_id": "1",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.type": "block",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.vdo": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.with_tpm": "0"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            },
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "type": "block",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "vg_name": "ceph_vg1"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:        }
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:    ],
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:    "2": [
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:        {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "devices": [
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "/dev/loop5"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            ],
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_name": "ceph_lv2",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_size": "21470642176",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "name": "ceph_lv2",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "tags": {
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.cluster_name": "ceph",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.crush_device_class": "",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.encrypted": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.objectstore": "bluestore",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osd_id": "2",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.type": "block",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.vdo": "0",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:                "ceph.with_tpm": "0"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            },
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "type": "block",
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:            "vg_name": "ceph_vg2"
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:        }
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]:    ]
Feb 28 05:00:30 np0005634017 zealous_pasteur[255736]: }
Feb 28 05:00:30 np0005634017 systemd[1]: libpod-46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87.scope: Deactivated successfully.
Feb 28 05:00:30 np0005634017 podman[255746]: 2026-02-28 10:00:30.143411126 +0000 UTC m=+0.035278645 container died 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:00:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417-merged.mount: Deactivated successfully.
Feb 28 05:00:30 np0005634017 podman[255746]: 2026-02-28 10:00:30.187677083 +0000 UTC m=+0.079544552 container remove 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:00:30 np0005634017 systemd[1]: libpod-conmon-46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87.scope: Deactivated successfully.
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.333 243456 DEBUG nova.network.neutron [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.337 243456 DEBUG nova.network.neutron [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.353 243456 DEBUG nova.network.neutron [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.355 243456 DEBUG nova.network.neutron [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.375 243456 INFO nova.compute.manager [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 0.72 seconds to deallocate network for instance.
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.376 243456 INFO nova.compute.manager [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 0.94 seconds to deallocate network for instance.
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.469 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.469 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.472 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:00:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:00:30 np0005634017 nova_compute[243452]: 2026-02-28 10:00:30.587 243456 DEBUG oslo_concurrency.processutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:30 np0005634017 podman[255823]: 2026-02-28 10:00:30.813048777 +0000 UTC m=+0.099552015 container create 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:00:30 np0005634017 podman[255823]: 2026-02-28 10:00:30.74925098 +0000 UTC m=+0.035754288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:00:30 np0005634017 systemd[1]: Started libpod-conmon-879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82.scope.
Feb 28 05:00:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:30 np0005634017 podman[255823]: 2026-02-28 10:00:30.910198944 +0000 UTC m=+0.196702242 container init 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:00:30 np0005634017 podman[255823]: 2026-02-28 10:00:30.91645078 +0000 UTC m=+0.202953998 container start 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:00:30 np0005634017 focused_bhaskara[255858]: 167 167
Feb 28 05:00:30 np0005634017 systemd[1]: libpod-879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82.scope: Deactivated successfully.
Feb 28 05:00:31 np0005634017 podman[255823]: 2026-02-28 10:00:31.016299392 +0000 UTC m=+0.302802640 container attach 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:00:31 np0005634017 podman[255823]: 2026-02-28 10:00:31.017135796 +0000 UTC m=+0.303639094 container died 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:00:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922638578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.137 243456 DEBUG oslo_concurrency.processutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.147 243456 DEBUG nova.compute.provider_tree [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.162 243456 DEBUG nova.scheduler.client.report [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.182 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.186 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.220 243456 INFO nova.scheduler.client.report [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Deleted allocations for instance 5cac75f5-aeef-427d-b484-7d40a33679cf
Feb 28 05:00:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-728be934235e4e05bd44d02957bbb0398634ad1728db96b5f0e6eefc1fef17ee-merged.mount: Deactivated successfully.
Feb 28 05:00:31 np0005634017 podman[255823]: 2026-02-28 10:00:31.28784397 +0000 UTC m=+0.574347208 container remove 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.296 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:31 np0005634017 systemd[1]: libpod-conmon-879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82.scope: Deactivated successfully.
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.327 243456 DEBUG oslo_concurrency.processutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:31 np0005634017 podman[255885]: 2026-02-28 10:00:31.455222095 +0000 UTC m=+0.070886738 container create d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:00:31 np0005634017 podman[255885]: 2026-02-28 10:00:31.423294546 +0000 UTC m=+0.038959269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.522 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "a2e72370-536c-417e-8667-678b824b849c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.523 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.523 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "a2e72370-536c-417e-8667-678b824b849c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.523 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.524 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.525 243456 INFO nova.compute.manager [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Terminating instance
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.526 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "refresh_cache-a2e72370-536c-417e-8667-678b824b849c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.527 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquired lock "refresh_cache-a2e72370-536c-417e-8667-678b824b849c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.527 243456 DEBUG nova.network.neutron [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:00:31 np0005634017 systemd[1]: Started libpod-conmon-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope.
Feb 28 05:00:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:31 np0005634017 podman[255885]: 2026-02-28 10:00:31.591693699 +0000 UTC m=+0.207358342 container init d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:00:31 np0005634017 podman[255885]: 2026-02-28 10:00:31.599126428 +0000 UTC m=+0.214791071 container start d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:00:31 np0005634017 podman[255885]: 2026-02-28 10:00:31.618737691 +0000 UTC m=+0.234402324 container attach d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:00:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 328 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.9 MiB/s wr, 400 op/s
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.802 243456 DEBUG nova.network.neutron [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:00:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2245145351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.919 243456 DEBUG oslo_concurrency.processutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.925 243456 DEBUG nova.compute.provider_tree [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.937 243456 DEBUG nova.scheduler.client.report [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.960 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:31 np0005634017 nova_compute[243452]: 2026-02-28 10:00:31.992 243456 INFO nova.scheduler.client.report [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Deleted allocations for instance 5706ada3-074b-4ac3-8540-425edba37cbe
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.058 243456 DEBUG nova.network.neutron [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.063 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.087 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Releasing lock "refresh_cache-a2e72370-536c-417e-8667-678b824b849c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.088 243456 DEBUG nova.compute.manager [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.094 243456 INFO nova.virt.libvirt.driver [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance destroyed successfully.
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.095 243456 DEBUG nova.objects.instance [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'resources' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:00:32 np0005634017 lvm[256019]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:00:32 np0005634017 lvm[256017]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:00:32 np0005634017 lvm[256019]: VG ceph_vg1 finished
Feb 28 05:00:32 np0005634017 lvm[256017]: VG ceph_vg0 finished
Feb 28 05:00:32 np0005634017 lvm[256021]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:00:32 np0005634017 lvm[256021]: VG ceph_vg2 finished
Feb 28 05:00:32 np0005634017 recursing_elion[255920]: {}
Feb 28 05:00:32 np0005634017 systemd[1]: libpod-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope: Deactivated successfully.
Feb 28 05:00:32 np0005634017 systemd[1]: libpod-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope: Consumed 1.109s CPU time.
Feb 28 05:00:32 np0005634017 podman[255885]: 2026-02-28 10:00:32.426133421 +0000 UTC m=+1.041798064 container died d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.425 243456 INFO nova.virt.libvirt.driver [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Deleting instance files /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c_del
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.426 243456 INFO nova.virt.libvirt.driver [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Deletion of /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c_del complete
Feb 28 05:00:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1-merged.mount: Deactivated successfully.
Feb 28 05:00:32 np0005634017 podman[255885]: 2026-02-28 10:00:32.475393118 +0000 UTC m=+1.091057791 container remove d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:00:32 np0005634017 systemd[1]: libpod-conmon-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope: Deactivated successfully.
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:32 np0005634017 nova_compute[243452]: 2026-02-28 10:00:32.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:00:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.199 243456 INFO nova.compute.manager [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.200 243456 DEBUG oslo.service.loopingcall [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.201 243456 DEBUG nova.compute.manager [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.201 243456 DEBUG nova.network.neutron [-] [instance: a2e72370-536c-417e-8667-678b824b849c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.392 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.476 243456 DEBUG nova.network.neutron [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.490 243456 DEBUG nova.network.neutron [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.509 243456 INFO nova.compute.manager [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 0.31 seconds to deallocate network for instance.#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.559 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.560 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:33 np0005634017 nova_compute[243452]: 2026-02-28 10:00:33.637 243456 DEBUG oslo_concurrency.processutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 261 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.8 MiB/s wr, 354 op/s
Feb 28 05:00:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1878636309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.164 243456 DEBUG oslo_concurrency.processutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.170 243456 DEBUG nova.compute.provider_tree [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.194 243456 DEBUG nova.scheduler.client.report [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.215 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.237 243456 INFO nova.scheduler.client.report [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Deleted allocations for instance a2e72370-536c-417e-8667-678b824b849c#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.305 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.551 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "e7cedb7c-31a4-4578-82e8-f93b29898300" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.552 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.553 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "e7cedb7c-31a4-4578-82e8-f93b29898300-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.553 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.554 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.555 243456 INFO nova.compute.manager [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Terminating instance#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.557 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "refresh_cache-e7cedb7c-31a4-4578-82e8-f93b29898300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.558 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquired lock "refresh_cache-e7cedb7c-31a4-4578-82e8-f93b29898300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.558 243456 DEBUG nova.network.neutron [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:00:34 np0005634017 nova_compute[243452]: 2026-02-28 10:00:34.951 243456 DEBUG nova.network.neutron [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.253 243456 DEBUG nova.network.neutron [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.270 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Releasing lock "refresh_cache-e7cedb7c-31a4-4578-82e8-f93b29898300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.271 243456 DEBUG nova.compute.manager [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:00:35 np0005634017 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 28 05:00:35 np0005634017 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 12.428s CPU time.
Feb 28 05:00:35 np0005634017 systemd-machined[209480]: Machine qemu-9-instance-00000009 terminated.
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.495 243456 INFO nova.virt.libvirt.driver [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance destroyed successfully.#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.496 243456 DEBUG nova.objects.instance [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'resources' on Instance uuid e7cedb7c-31a4-4578-82e8-f93b29898300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 196 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.8 MiB/s wr, 368 op/s
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.804 243456 INFO nova.virt.libvirt.driver [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deleting instance files /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300_del#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.805 243456 INFO nova.virt.libvirt.driver [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deletion of /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300_del complete#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.884 243456 INFO nova.compute.manager [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.885 243456 DEBUG oslo.service.loopingcall [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.885 243456 DEBUG nova.compute.manager [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:00:35 np0005634017 nova_compute[243452]: 2026-02-28 10:00:35.886 243456 DEBUG nova.network.neutron [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.159 243456 DEBUG nova.network.neutron [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.174 243456 DEBUG nova.network.neutron [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.203 243456 INFO nova.compute.manager [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 0.32 seconds to deallocate network for instance.#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.264 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.265 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.319 243456 DEBUG oslo_concurrency.processutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413253201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.883 243456 DEBUG oslo_concurrency.processutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.891 243456 DEBUG nova.compute.provider_tree [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.909 243456 DEBUG nova.scheduler.client.report [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.939 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:36 np0005634017 nova_compute[243452]: 2026-02-28 10:00:36.966 243456 INFO nova.scheduler.client.report [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Deleted allocations for instance e7cedb7c-31a4-4578-82e8-f93b29898300#033[00m
Feb 28 05:00:37 np0005634017 nova_compute[243452]: 2026-02-28 10:00:37.063 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:37 np0005634017 nova_compute[243452]: 2026-02-28 10:00:37.651 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 173 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 339 op/s
Feb 28 05:00:38 np0005634017 nova_compute[243452]: 2026-02-28 10:00:38.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:38 np0005634017 nova_compute[243452]: 2026-02-28 10:00:38.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:38.888 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:00:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:38.890 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:00:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 173 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 268 op/s
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00013532782622502094 of space, bias 1.0, pg target 0.04059834786750628 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002490987642331306 of space, bias 1.0, pg target 0.7472962926993918 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.905678768646067e-07 of space, bias 4.0, pg target 0.0009486814522375281 quantized to 16 (current 16)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:00:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:00:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Feb 28 05:00:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:42 np0005634017 nova_compute[243452]: 2026-02-28 10:00:42.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:43 np0005634017 nova_compute[243452]: 2026-02-28 10:00:43.246 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272828.24524, a2e72370-536c-417e-8667-678b824b849c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:43 np0005634017 nova_compute[243452]: 2026-02-28 10:00:43.247 243456 INFO nova.compute.manager [-] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:00:43 np0005634017 nova_compute[243452]: 2026-02-28 10:00:43.294 243456 DEBUG nova.compute.manager [None req-3c479c04-daad-458b-8b27-66bbf60c9842 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:43 np0005634017 nova_compute[243452]: 2026-02-28 10:00:43.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 968 KiB/s wr, 113 op/s
Feb 28 05:00:44 np0005634017 nova_compute[243452]: 2026-02-28 10:00:44.082 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272829.0815992, 5706ada3-074b-4ac3-8540-425edba37cbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:44 np0005634017 nova_compute[243452]: 2026-02-28 10:00:44.083 243456 INFO nova.compute.manager [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:00:44 np0005634017 nova_compute[243452]: 2026-02-28 10:00:44.106 243456 DEBUG nova.compute.manager [None req-78191570-94e6-4a08-85ca-71cf0419b715 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:44 np0005634017 nova_compute[243452]: 2026-02-28 10:00:44.296 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272829.2946024, 5cac75f5-aeef-427d-b484-7d40a33679cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:44 np0005634017 nova_compute[243452]: 2026-02-28 10:00:44.296 243456 INFO nova.compute.manager [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:00:44 np0005634017 nova_compute[243452]: 2026-02-28 10:00:44.321 243456 DEBUG nova.compute.manager [None req-3b0ae8e7-0afc-4075-ad2e-11ad9c0799b5 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:00:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632909243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:00:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:00:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632909243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:00:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 28 05:00:47 np0005634017 podman[256132]: 2026-02-28 10:00:47.132060369 +0000 UTC m=+0.067764680 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 05:00:47 np0005634017 podman[256131]: 2026-02-28 10:00:47.162953189 +0000 UTC m=+0.099482473 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:00:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:47 np0005634017 nova_compute[243452]: 2026-02-28 10:00:47.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 28 05:00:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:47.892 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:47 np0005634017 nova_compute[243452]: 2026-02-28 10:00:47.973 243456 DEBUG oslo_concurrency.processutils [None req-9529d302-fd83-453a-93e7-c343c5e10f8f 12e661cb0cc24d3f87d5ad5e55437da9 962ad781bef44a669f4439ec50ff4508 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.000 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.000 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.004 243456 DEBUG oslo_concurrency.processutils [None req-9529d302-fd83-453a-93e7-c343c5e10f8f 12e661cb0cc24d3f87d5ad5e55437da9 962ad781bef44a669f4439ec50ff4508 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.022 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.110 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.111 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.120 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.121 243456 INFO nova.compute.claims [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.226 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972004038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.723 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.731 243456 DEBUG nova.compute.provider_tree [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.756 243456 DEBUG nova.scheduler.client.report [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.790 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.792 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.870 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.871 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.895 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:00:48 np0005634017 nova_compute[243452]: 2026-02-28 10:00:48.920 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.082 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.084 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.085 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Creating image(s)#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.115 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.153 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.185 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.190 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.270 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.273 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.274 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.275 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.305 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.310 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.330 243456 DEBUG nova.policy [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b1dc716928742ca935bb155783e2d9a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '446562351a804787bd6c523245bada39', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.558 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.636 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] resizing rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.737 243456 DEBUG nova.objects.instance [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'migration_context' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.757 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.757 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Ensure instance console log exists: /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.758 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.759 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.759 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:49 np0005634017 nova_compute[243452]: 2026-02-28 10:00:49.905 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Successfully created port: 77e0efad-ce89-42fd-9284-b155767f5c74 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.494 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272835.4924226, e7cedb7c-31a4-4578-82e8-f93b29898300 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.494 243456 INFO nova.compute.manager [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.518 243456 DEBUG nova.compute.manager [None req-8d6f5adf-fd74-4273-b315-1015c6c676a6 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.847 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Successfully updated port: 77e0efad-ce89-42fd-9284-b155767f5c74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.865 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.866 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.866 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.991 243456 DEBUG nova.compute.manager [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.992 243456 DEBUG nova.compute.manager [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:00:50 np0005634017 nova_compute[243452]: 2026-02-28 10:00:50.992 243456 DEBUG oslo_concurrency.lockutils [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:00:51 np0005634017 nova_compute[243452]: 2026-02-28 10:00:51.019 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:00:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 188 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.202 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.229 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.229 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance network_info: |[{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.230 243456 DEBUG oslo_concurrency.lockutils [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.230 243456 DEBUG nova.network.neutron [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.232 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start _get_guest_xml network_info=[{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.237 243456 WARNING nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.242 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.242 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.245 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.246 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.246 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.249 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.251 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536218003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.788 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.812 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:52 np0005634017 nova_compute[243452]: 2026-02-28 10:00:52.817 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251524903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.343 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.345 243456 DEBUG nova.virt.libvirt.vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1253849242',display_name='tempest-FloatingIPsAssociationTestJSON-server-1253849242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1253849242',id=12,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-nu7q8nez',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_us
er_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:48Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=9e27fde4-3df3-46cf-97ac-88a91baefbc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.347 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.348 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.350 243456 DEBUG nova.objects.instance [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.370 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <uuid>9e27fde4-3df3-46cf-97ac-88a91baefbc0</uuid>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <name>instance-0000000c</name>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1253849242</nova:name>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:00:52</nova:creationTime>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:user uuid="3b1dc716928742ca935bb155783e2d9a">tempest-FloatingIPsAssociationTestJSON-1803239001-project-member</nova:user>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:project uuid="446562351a804787bd6c523245bada39">tempest-FloatingIPsAssociationTestJSON-1803239001</nova:project>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <nova:port uuid="77e0efad-ce89-42fd-9284-b155767f5c74">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <entry name="serial">9e27fde4-3df3-46cf-97ac-88a91baefbc0</entry>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <entry name="uuid">9e27fde4-3df3-46cf-97ac-88a91baefbc0</entry>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:9c:04:a0"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <target dev="tap77e0efad-ce"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/console.log" append="off"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:00:53 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:00:53 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:00:53 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:00:53 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.372 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Preparing to wait for external event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.373 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.373 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.374 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.374 243456 DEBUG nova.virt.libvirt.vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1253849242',display_name='tempest-FloatingIPsAssociationTestJSON-server-1253849242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1253849242',id=12,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-nu7q8nez',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001
',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:48Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=9e27fde4-3df3-46cf-97ac-88a91baefbc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.375 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.375 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.376 243456 DEBUG os_vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.377 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.378 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.382 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77e0efad-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.383 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77e0efad-ce, col_values=(('external_ids', {'iface-id': '77e0efad-ce89-42fd-9284-b155767f5c74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:04:a0', 'vm-uuid': '9e27fde4-3df3-46cf-97ac-88a91baefbc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:53 np0005634017 NetworkManager[49805]: <info>  [1772272853.3863] manager: (tap77e0efad-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.396 243456 INFO os_vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce')#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.481 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.482 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.483 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No VIF found with MAC fa:16:3e:9c:04:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.484 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Using config drive#033[00m
Feb 28 05:00:53 np0005634017 nova_compute[243452]: 2026-02-28 10:00:53.519 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.157 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Creating config drive at /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.160 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0e_3jfik execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.236 243456 DEBUG nova.network.neutron [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.237 243456 DEBUG nova.network.neutron [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.256 243456 DEBUG oslo_concurrency.lockutils [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.287 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0e_3jfik" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.328 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.332 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.478 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.479 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deleting local config drive /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config because it was imported into RBD.#033[00m
Feb 28 05:00:54 np0005634017 kernel: tap77e0efad-ce: entered promiscuous mode
Feb 28 05:00:54 np0005634017 NetworkManager[49805]: <info>  [1772272854.5229] manager: (tap77e0efad-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb 28 05:00:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:54Z|00043|binding|INFO|Claiming lport 77e0efad-ce89-42fd-9284-b155767f5c74 for this chassis.
Feb 28 05:00:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:54Z|00044|binding|INFO|77e0efad-ce89-42fd-9284-b155767f5c74: Claiming fa:16:3e:9c:04:a0 10.100.0.4
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 systemd-machined[209480]: New machine qemu-12-instance-0000000c.
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.547 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:04:a0 10.100.0.4'], port_security=['fa:16:3e:9c:04:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9e27fde4-3df3-46cf-97ac-88a91baefbc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=77e0efad-ce89-42fd-9284-b155767f5c74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.549 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 77e0efad-ce89-42fd-9284-b155767f5c74 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b bound to our chassis#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.550 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71984a35-6483-4ac4-a021-6bd1f9989d8b#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.560 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b077226-4f83-4c1a-adf6-4eac2024491b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.561 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71984a35-61 in ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.563 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71984a35-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b18308e1-18f0-45d9-8cfa-bdddaf1185b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a39762d5-becb-4b53-bc92-74932f73bbf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.577 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb74e54-2ed3-4a30-a405-a5c2502c5e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:54Z|00045|binding|INFO|Setting lport 77e0efad-ce89-42fd-9284-b155767f5c74 ovn-installed in OVS
Feb 28 05:00:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:54Z|00046|binding|INFO|Setting lport 77e0efad-ce89-42fd-9284-b155767f5c74 up in Southbound
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 systemd-udevd[256504]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e09ba378-0387-40b4-a3cb-1c3718e9c24b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 NetworkManager[49805]: <info>  [1772272854.5991] device (tap77e0efad-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:00:54 np0005634017 NetworkManager[49805]: <info>  [1772272854.5999] device (tap77e0efad-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.607 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a442b4f1-3fd1-44ba-a57f-6061b76dcd89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 NetworkManager[49805]: <info>  [1772272854.6118] manager: (tap71984a35-60): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.610 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f068473c-ff46-4246-958e-affdd57af741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.635 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c2226a39-b6d1-4ac1-a52e-935ba121d049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.638 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2edd3b-e1af-4c3e-80f8-f2b2d679c90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 NetworkManager[49805]: <info>  [1772272854.6575] device (tap71984a35-60): carrier: link connected
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.660 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c88657e3-b8df-4750-8b5b-26881ef92d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b96340f3-d7af-4353-a9ed-ab6984893e90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 24401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256534, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.686 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9771bb-5cbb-4e7d-82a7-0664de26aa1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:3a8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436194, 'tstamp': 436194}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256535, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.697 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb05727-edfd-49bb-8284-ca3bf594bb6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 24401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256536, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.719 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5132b491-1dd6-4783-9e90-a9dc22a43d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.774 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dedf3d4e-4a5a-4313-8ef6-045c646e0355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.776 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.776 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.777 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71984a35-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 NetworkManager[49805]: <info>  [1772272854.7796] manager: (tap71984a35-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 28 05:00:54 np0005634017 kernel: tap71984a35-60: entered promiscuous mode
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.783 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71984a35-60, col_values=(('external_ids', {'iface-id': 'a589fe00-3087-4c3d-af34-6af9a22081de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:00:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:00:54Z|00047|binding|INFO|Releasing lport a589fe00-3087-4c3d-af34-6af9a22081de from this chassis (sb_readonly=0)
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.785 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71984a35-6483-4ac4-a021-6bd1f9989d8b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71984a35-6483-4ac4-a021-6bd1f9989d8b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.786 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b354cb-ea36-4b61-bad5-688ee111a200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.787 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/71984a35-6483-4ac4-a021-6bd1f9989d8b.pid.haproxy
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:00:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.787 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'env', 'PROCESS_TAG=haproxy-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71984a35-6483-4ac4-a021-6bd1f9989d8b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.898 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272854.8975585, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Started (Lifecycle Event)#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.927 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.931 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272854.89767, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.931 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.957 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.960 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:00:54 np0005634017 nova_compute[243452]: 2026-02-28 10:00:54.992 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:00:55 np0005634017 podman[256610]: 2026-02-28 10:00:55.110175739 +0000 UTC m=+0.048600840 container create 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 05:00:55 np0005634017 systemd[1]: Started libpod-conmon-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd.scope.
Feb 28 05:00:55 np0005634017 podman[256610]: 2026-02-28 10:00:55.081683207 +0000 UTC m=+0.020108378 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:00:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:00:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70290e901e28e6d5eb5c062ad508564d16cc1df6df50d7d37fe37074f9c775c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:00:55 np0005634017 podman[256610]: 2026-02-28 10:00:55.203769216 +0000 UTC m=+0.142194337 container init 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:00:55 np0005634017 podman[256610]: 2026-02-28 10:00:55.211197295 +0000 UTC m=+0.149622436 container start 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:00:55 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : New worker (256631) forked
Feb 28 05:00:55 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : Loading success.
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.660 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.660 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.680 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:00:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.772 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.773 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.780 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.780 243456 INFO nova.compute.claims [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:00:55 np0005634017 nova_compute[243452]: 2026-02-28 10:00:55.933 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773155305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.481 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.489 243456 DEBUG nova.compute.provider_tree [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.511 243456 DEBUG nova.scheduler.client.report [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.540 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.541 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.594 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.594 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.613 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.629 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.710 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.712 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.713 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating image(s)#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.747 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.777 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.801 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.804 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.884 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.885 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.886 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.886 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.930 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.934 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:56 np0005634017 nova_compute[243452]: 2026-02-28 10:00:56.954 243456 DEBUG nova.policy [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:00:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:00:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 05:00:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:00:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:57 np0005634017 nova_compute[243452]: 2026-02-28 10:00:57.944 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:57 np0005634017 nova_compute[243452]: 2026-02-28 10:00:57.945 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:57 np0005634017 nova_compute[243452]: 2026-02-28 10:00:57.964 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.064 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.065 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.081 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.081 243456 INFO nova.compute.claims [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.147 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Successfully created port: 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.205 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.250 243456 DEBUG nova.compute.manager [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.251 243456 DEBUG oslo_concurrency.lockutils [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.251 243456 DEBUG oslo_concurrency.lockutils [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.252 243456 DEBUG oslo_concurrency.lockutils [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.252 243456 DEBUG nova.compute.manager [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Processing event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.254 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.305 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272858.2602673, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.307 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Resumed (Lifecycle Event)
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.310 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.317 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.364 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.388 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance spawned successfully.
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.389 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.392 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.435 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.440 243456 DEBUG nova.objects.instance [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.443 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.443 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.444 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.444 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.444 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.445 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.466 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.467 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ensure instance console log exists: /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.467 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.467 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.468 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.498 243456 INFO nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 9.41 seconds to spawn the instance on the hypervisor.
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.498 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.568 243456 INFO nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 10.49 seconds to build instance.
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.585 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:00:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968829183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.892 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Successfully updated port: 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.894 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.900 243456 DEBUG nova.compute.provider_tree [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.904 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.904 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.905 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.919 243456 DEBUG nova.scheduler.client.report [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.942 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.943 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.991 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:00:58 np0005634017 nova_compute[243452]: 2026-02-28 10:00:58.991 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.009 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.029 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.104 243456 DEBUG nova.compute.manager [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-changed-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.104 243456 DEBUG nova.compute.manager [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Refreshing instance network info cache due to event network-changed-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.104 243456 DEBUG oslo_concurrency.lockutils [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.122 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.124 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.124 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Creating image(s)
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.148 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.171 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.199 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.203 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.221 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.235 243456 DEBUG nova.policy [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.260 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.261 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.262 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.263 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.291 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.295 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.728 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:00:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.797 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.888 243456 DEBUG nova.objects.instance [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.891 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Successfully created port: eaa5f652-63c2-4a9b-aae0-eec299565322 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.911 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.912 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Ensure instance console log exists: /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.913 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.913 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:00:59 np0005634017 nova_compute[243452]: 2026-02-28 10:00:59.914 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.488 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.548 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.549 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance network_info: |[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.549 243456 DEBUG oslo_concurrency.lockutils [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.549 243456 DEBUG nova.network.neutron [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Refreshing network info cache for port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.552 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start _get_guest_xml network_info=[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.558 243456 WARNING nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.564 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.568 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.578 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.579 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.579 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.579 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.580 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.580 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.580 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.581 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.581 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.581 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.589 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.716 243456 DEBUG nova.compute.manager [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.717 243456 DEBUG oslo_concurrency.lockutils [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 DEBUG oslo_concurrency.lockutils [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 DEBUG oslo_concurrency.lockutils [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 DEBUG nova.compute.manager [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] No waiting events found dispatching network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 WARNING nova.compute.manager [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received unexpected event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.789 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Successfully updated port: eaa5f652-63c2-4a9b-aae0-eec299565322 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.816 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.817 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:00 np0005634017 nova_compute[243452]: 2026-02-28 10:01:00.817 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276677706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.147 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.155 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.177 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.182 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004804878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 255 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.7 MiB/s wr, 115 op/s
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.753 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.758 243456 DEBUG nova.virt.libvirt.vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:56Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.759 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.760 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.766 243456 DEBUG nova.objects.instance [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.798 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <uuid>c92e965f-2d18-4b78-8b78-7d391039f382</uuid>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <name>instance-0000000d</name>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminTestJSON-server-1293627042</nova:name>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:00</nova:creationTime>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <nova:port uuid="1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <entry name="serial">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <entry name="uuid">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk.config">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:3d:6e:bc"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <target dev="tap1c6e98f3-e9"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log" append="off"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:01 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:01 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:01 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:01 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.801 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Preparing to wait for external event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.802 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.802 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.803 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.804 243456 DEBUG nova.virt.libvirt.vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:56Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.805 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.806 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.806 243456 DEBUG os_vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.809 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.814 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c6e98f3-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.815 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c6e98f3-e9, col_values=(('external_ids', {'iface-id': '1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:6e:bc', 'vm-uuid': 'c92e965f-2d18-4b78-8b78-7d391039f382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:01 np0005634017 NetworkManager[49805]: <info>  [1772272861.8186] manager: (tap1c6e98f3-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.826 243456 INFO os_vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.895 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.896 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.896 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:3d:6e:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.897 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Using config drive#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.923 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.933 243456 DEBUG nova.compute.manager [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-changed-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.934 243456 DEBUG nova.compute.manager [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Refreshing instance network info cache due to event network-changed-eaa5f652-63c2-4a9b-aae0-eec299565322. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:01 np0005634017 nova_compute[243452]: 2026-02-28 10:01:01.934 243456 DEBUG oslo_concurrency.lockutils [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.219 243456 DEBUG nova.network.neutron [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updated VIF entry in instance network info cache for port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.219 243456 DEBUG nova.network.neutron [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.233 243456 DEBUG oslo_concurrency.lockutils [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.264 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.265 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.293 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.329 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updating instance_info_cache with network_info: [{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.367 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.368 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance network_info: |[{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.369 243456 DEBUG oslo_concurrency.lockutils [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.370 243456 DEBUG nova.network.neutron [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Refreshing network info cache for port eaa5f652-63c2-4a9b-aae0-eec299565322 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.381 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start _get_guest_xml network_info=[{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.388 243456 WARNING nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.396 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.397 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.406 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.406 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.407 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.408 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.409 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.409 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.410 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.410 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.411 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.412 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.412 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.413 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.414 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.415 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.419 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.449 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating config drive at /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.457 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqv3o97zw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.516 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.517 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.521 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.522 243456 INFO nova.compute.claims [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:01:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.588 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqv3o97zw" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.622 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.626 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.760 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.761 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting local config drive /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:02 np0005634017 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 05:01:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:02Z|00048|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 05:01:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:02Z|00049|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:02 np0005634017 NetworkManager[49805]: <info>  [1772272862.8159] manager: (tap1c6e98f3-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.837 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.838 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.839 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:02 np0005634017 systemd-udevd[257182]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.850 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f91d171-c7a9-40b4-8e90-92845386f9bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.851 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce5045ea-11 in ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.852 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce5045ea-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.852 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf09106-4812-428c-bdf1-14a15183d1df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd0d840-90f8-45f5-b84a-37080f6c0024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:02Z|00050|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 05:01:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:02Z|00051|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 05:01:02 np0005634017 nova_compute[243452]: 2026-02-28 10:01:02.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:02 np0005634017 NetworkManager[49805]: <info>  [1772272862.8666] device (tap1c6e98f3-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:01:02 np0005634017 NetworkManager[49805]: <info>  [1772272862.8673] device (tap1c6e98f3-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.868 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1b75388b-69b9-4bab-b6db-d5d64ff4069c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 systemd-machined[209480]: New machine qemu-13-instance-0000000d.
Feb 28 05:01:02 np0005634017 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.890 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2fbf35-d766-4030-8d22-d65781cafc39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.921 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a2d97e-e8b6-48b9-936a-cbbc94c91f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 NetworkManager[49805]: <info>  [1772272862.9284] manager: (tapce5045ea-10): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.929 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f717aa-bcd8-4d6e-b36d-c4c3ede356ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.961 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65e9b207-7637-4aa8-ae11-49336ab92980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.965 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dfeff24f-1ebd-47e7-a936-3ef91d11d9d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 NetworkManager[49805]: <info>  [1772272862.9854] device (tapce5045ea-10): carrier: link connected
Feb 28 05:01:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.992 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee406a61-cb36-4e7f-bc4e-e221c07b9aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705257239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.014 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff3c81-7b57-4d1b-981e-1a331f694924]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257219, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[910e128b-ac2a-4469-a55c-746e09cbf104]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:35bf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437027, 'tstamp': 437027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257227, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.039 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.044 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59c99f0d-11e6-480d-badb-744880011b7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257238, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.077 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[515250a2-ac3b-4fde-8653-9e2ac460bc0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.138 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a14f6d6d-5a5b-4fae-b516-f2b155824113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.152 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.153 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.153 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:03 np0005634017 NetworkManager[49805]: <info>  [1772272863.1562] manager: (tapce5045ea-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb 28 05:01:03 np0005634017 kernel: tapce5045ea-10: entered promiscuous mode
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.161 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:03Z|00052|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.170 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.176 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[111a4262-46cc-47e1-99d3-8cf37c44cf1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.178 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.pid.haproxy
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:01:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.179 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'env', 'PROCESS_TAG=haproxy-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.505 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272863.5048075, c92e965f-2d18-4b78-8b78-7d391039f382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.506 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.523 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.528 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272863.5057948, c92e965f-2d18-4b78-8b78-7d391039f382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.528 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:03 np0005634017 podman[257351]: 2026-02-28 10:01:03.536816144 +0000 UTC m=+0.056609146 container create f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.551 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.553 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.577 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:03 np0005634017 systemd[1]: Started libpod-conmon-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a.scope.
Feb 28 05:01:03 np0005634017 podman[257351]: 2026-02-28 10:01:03.50436676 +0000 UTC m=+0.024159742 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:01:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22472ecc95bc20b98dcc0462d4cd4b77133e1b85e4f473f2fb73cbcea480a12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772477825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:03 np0005634017 podman[257351]: 2026-02-28 10:01:03.622092495 +0000 UTC m=+0.141885487 container init f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:01:03 np0005634017 podman[257351]: 2026-02-28 10:01:03.63076496 +0000 UTC m=+0.150557932 container start f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.636 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.637 243456 DEBUG nova.virt.libvirt.vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1953624900',display_name='tempest-ServersAdminTestJSON-server-1953624900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1953624900',id=14,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-y1ejzv4y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-14
94420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:59Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=89ced16e-cc50-41d5-bfcb-fa5af85c14c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.637 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.638 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.640 243456 DEBUG nova.objects.instance [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:03 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : New worker (257374) forked
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.654 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <uuid>89ced16e-cc50-41d5-bfcb-fa5af85c14c8</uuid>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <name>instance-0000000e</name>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminTestJSON-server-1953624900</nova:name>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:02</nova:creationTime>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <nova:port uuid="eaa5f652-63c2-4a9b-aae0-eec299565322">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <entry name="serial">89ced16e-cc50-41d5-bfcb-fa5af85c14c8</entry>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <entry name="uuid">89ced16e-cc50-41d5-bfcb-fa5af85c14c8</entry>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:f2:c8:7d"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <target dev="tapeaa5f652-63"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/console.log" append="off"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:03 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:03 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:03 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:03 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.654 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Preparing to wait for external event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.654 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:03 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : Loading success.
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.655 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.655 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.656 243456 DEBUG nova.virt.libvirt.vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1953624900',display_name='tempest-ServersAdminTestJSON-server-1953624900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1953624900',id=14,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-y1ejzv4y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:59Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=89ced16e-cc50-41d5-bfcb-fa5af85c14c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.656 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.657 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.657 243456 DEBUG os_vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa5f652-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeaa5f652-63, col_values=(('external_ids', {'iface-id': 'eaa5f652-63c2-4a9b-aae0-eec299565322', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:c8:7d', 'vm-uuid': '89ced16e-cc50-41d5-bfcb-fa5af85c14c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:03 np0005634017 NetworkManager[49805]: <info>  [1772272863.6679] manager: (tapeaa5f652-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.675 243456 INFO os_vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63')#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.720 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.721 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.721 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:f2:c8:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.721 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Using config drive#033[00m
Feb 28 05:01:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188977669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.738 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 292 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.0 MiB/s wr, 129 op/s
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.752 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.765 243456 DEBUG nova.compute.provider_tree [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.781 243456 DEBUG nova.scheduler.client.report [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.808 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.808 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.863 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.864 243456 DEBUG nova.network.neutron [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.888 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:01:03 np0005634017 nova_compute[243452]: 2026-02-28 10:01:03.906 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.032 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.035 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.036 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Creating image(s)#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.065 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.092 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.121 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.128 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.175 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.176 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.176 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.176 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.198 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.201 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.377 243456 DEBUG nova.network.neutron [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.378 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.397 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Creating config drive at /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.404 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeq04zqwz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.507 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.508 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.509 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.509 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.510 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Processing event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.510 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.511 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.511 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.511 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.512 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.512 243456 WARNING nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.513 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.518 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272864.5179641, c92e965f-2d18-4b78-8b78-7d391039f382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.518 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.522 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.525 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance spawned successfully.#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.526 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.533 243456 DEBUG nova.network.neutron [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updated VIF entry in instance network info cache for port eaa5f652-63c2-4a9b-aae0-eec299565322. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.534 243456 DEBUG nova.network.neutron [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updating instance_info_cache with network_info: [{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.539 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeq04zqwz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.566 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.571 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.597 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.598 243456 DEBUG oslo_concurrency.lockutils [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.603 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.604 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.604 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.605 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.605 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.606 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.610 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.650 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.677 243456 INFO nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 7.97 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.678 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.762 243456 INFO nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 9.04 seconds to build instance.#033[00m
Feb 28 05:01:04 np0005634017 nova_compute[243452]: 2026-02-28 10:01:04.792 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.255 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.326 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.331 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] resizing rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.364 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.365 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deleting local config drive /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:05 np0005634017 kernel: tapeaa5f652-63: entered promiscuous mode
Feb 28 05:01:05 np0005634017 NetworkManager[49805]: <info>  [1772272865.4001] manager: (tapeaa5f652-63): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Feb 28 05:01:05 np0005634017 systemd-udevd[257204]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:05Z|00053|binding|INFO|Claiming lport eaa5f652-63c2-4a9b-aae0-eec299565322 for this chassis.
Feb 28 05:01:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:05Z|00054|binding|INFO|eaa5f652-63c2-4a9b-aae0-eec299565322: Claiming fa:16:3e:f2:c8:7d 10.100.0.13
Feb 28 05:01:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:05Z|00055|binding|INFO|Setting lport eaa5f652-63c2-4a9b-aae0-eec299565322 ovn-installed in OVS
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:05 np0005634017 NetworkManager[49805]: <info>  [1772272865.4110] device (tapeaa5f652-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:01:05 np0005634017 NetworkManager[49805]: <info>  [1772272865.4117] device (tapeaa5f652-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:01:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:05Z|00056|binding|INFO|Setting lport eaa5f652-63c2-4a9b-aae0-eec299565322 up in Southbound
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.414 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c8:7d 10.100.0.13'], port_security=['fa:16:3e:f2:c8:7d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89ced16e-cc50-41d5-bfcb-fa5af85c14c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=eaa5f652-63c2-4a9b-aae0-eec299565322) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.416 156681 INFO neutron.agent.ovn.metadata.agent [-] Port eaa5f652-63c2-4a9b-aae0-eec299565322 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.417 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:05 np0005634017 systemd-machined[209480]: New machine qemu-14-instance-0000000e.
Feb 28 05:01:05 np0005634017 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.433 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5c42d9-7d7a-4e13-bbed-d99066fcc944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.465 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fffd65e4-8046-463f-a771-32a46635758e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.468 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7bf5ea-22af-4db0-8154-a0d78c225d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.475 243456 DEBUG nova.objects.instance [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lazy-loading 'migration_context' on Instance uuid c8eefb37-41ae-4d33-8085-e4e8c3ce2075 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.483 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6961919b-9a9d-4125-80da-e82170a9fca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.496 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.496 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Ensure instance console log exists: /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.496 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.496 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b360da7e-2818-47ac-a14c-af21e6aa37ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257636, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.497 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.497 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.498 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.503 243456 WARNING nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.507 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.511 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38c79587-1c89-4cc8-a16d-a42785839c28]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257637, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257637, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.512 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.512 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.514 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.515 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.515 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.516 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.516 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.516 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.516 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.519 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.519 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.521 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 307 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.945 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272865.9442315, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.947 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.980 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.984 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272865.9443486, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:05 np0005634017 nova_compute[243452]: 2026-02-28 10:01:05.985 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.018 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.023 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.044 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530190241' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.091 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.113 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.117 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.360 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.361 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.581 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.582 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.583 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.583 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979094616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.734 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.735 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.735 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.735 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.736 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Processing event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.736 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.736 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.737 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.737 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.737 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] No waiting events found dispatching network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.738 243456 WARNING nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received unexpected event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.738 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.742 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272866.7414095, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.742 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.744 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.745 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.747 243456 DEBUG nova.objects.instance [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lazy-loading 'pci_devices' on Instance uuid c8eefb37-41ae-4d33-8085-e4e8c3ce2075 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.750 243456 INFO nova.virt.libvirt.driver [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance spawned successfully.#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.750 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.769 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.773 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <uuid>c8eefb37-41ae-4d33-8085-e4e8c3ce2075</uuid>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <name>instance-0000000f</name>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerExternalEventsTest-server-1374385373</nova:name>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:05</nova:creationTime>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:user uuid="bf06d33f07d54a60bff952b57a770e77">tempest-ServerExternalEventsTest-234006114-project-member</nova:user>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <nova:project uuid="383ac6e5ec8946a0afec20ecf5e8021e">tempest-ServerExternalEventsTest-234006114</nova:project>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <entry name="serial">c8eefb37-41ae-4d33-8085-e4e8c3ce2075</entry>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <entry name="uuid">c8eefb37-41ae-4d33-8085-e4e8c3ce2075</entry>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/console.log" append="off"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:06 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:06 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:06 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:06 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.787 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.794 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.795 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.796 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.796 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.797 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.797 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.823 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.856 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.866 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.868 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Using config drive#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.893 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.901 243456 INFO nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 7.78 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.902 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:06 np0005634017 nova_compute[243452]: 2026-02-28 10:01:06.981 243456 INFO nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 8.96 seconds to build instance.#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.003 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.648 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Creating config drive at /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.652 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkr3b55rm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 318 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.787 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkr3b55rm" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.819 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.823 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.931 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.931 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deleting local config drive /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:07 np0005634017 systemd-machined[209480]: New machine qemu-15-instance-0000000f.
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.984 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:07 np0005634017 nova_compute[243452]: 2026-02-28 10:01:07.984 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:07 np0005634017 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.002 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.079 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.080 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.086 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.086 243456 INFO nova.compute.claims [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.263 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.710 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272868.710008, c8eefb37-41ae-4d33-8085-e4e8c3ce2075 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.711 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.713 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.714 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.718 243456 INFO nova.virt.libvirt.driver [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance spawned successfully.#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.718 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.749 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.750 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.751 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.751 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.752 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.752 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.757 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.759 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.788 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.789 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272868.7131577, c8eefb37-41ae-4d33-8085-e4e8c3ce2075 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.789 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1035697662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.824 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.834 243456 INFO nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 4.80 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.836 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.837 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.838 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.852 243456 DEBUG nova.compute.provider_tree [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.880 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.883 243456 DEBUG nova.scheduler.client.report [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.911 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.912 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.924 243456 INFO nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 6.56 seconds to build instance.#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.948 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.982 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:01:08 np0005634017 nova_compute[243452]: 2026-02-28 10:01:08.983 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.183 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.203 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.306 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.307 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.308 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Creating image(s)#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.327 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.349 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.374 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.378 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.398 243456 DEBUG nova.policy [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b1dc716928742ca935bb155783e2d9a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '446562351a804787bd6c523245bada39', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.403 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.420 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.421 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.422 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.422 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.422 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.423 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.423 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.438 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.439 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.439 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.440 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.456 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.459 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f37b722c-8def-4545-a455-39df230540d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.478 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.478 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.479 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.479 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.479 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:04:a0 10.100.0.4
Feb 28 05:01:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:04:a0 10.100.0.4
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.734 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f37b722c-8def-4545-a455-39df230540d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 318 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.792 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] resizing rbd image f37b722c-8def-4545-a455-39df230540d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.866 243456 DEBUG nova.objects.instance [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'migration_context' on Instance uuid f37b722c-8def-4545-a455-39df230540d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.883 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.883 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Ensure instance console log exists: /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.884 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.884 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.884 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:09 np0005634017 nova_compute[243452]: 2026-02-28 10:01:09.957 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Successfully created port: 24f1e17a-f542-4eab-9180-968c61bc1cf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:01:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040341953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.185 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.282 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.282 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.286 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.286 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.292 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.292 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.296 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.296 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.528 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.530 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4038MB free_disk=59.9117830619216GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.531 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.532 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9e27fde4-3df3-46cf-97ac-88a91baefbc0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c92e965f-2d18-4b78-8b78-7d391039f382 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c8eefb37-41ae-4d33-8085-e4e8c3ce2075 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f37b722c-8def-4545-a455-39df230540d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:01:10 np0005634017 nova_compute[243452]: 2026-02-28 10:01:10.745 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205101017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.374 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.383 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.402 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.424 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.425 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.426 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.426 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.440 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.441 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.441 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.473 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Successfully updated port: 24f1e17a-f542-4eab-9180-968c61bc1cf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.502 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.503 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.503 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.596 243456 DEBUG nova.compute.manager [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.597 243456 DEBUG nova.compute.manager [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing instance network info cache due to event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.597 243456 DEBUG oslo_concurrency.lockutils [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.736 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.741 243456 DEBUG nova.compute.manager [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.742 243456 DEBUG nova.compute.manager [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.743 243456 DEBUG oslo_concurrency.lockutils [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] Acquiring lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.743 243456 DEBUG oslo_concurrency.lockutils [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] Acquired lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.743 243456 DEBUG nova.network.neutron [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 388 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 7.7 MiB/s wr, 375 op/s
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.907 243456 DEBUG nova.network.neutron [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.917 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.918 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.937 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.941 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.941 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.941 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.942 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.942 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.943 243456 INFO nova.compute.manager [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Terminating instance#033[00m
Feb 28 05:01:11 np0005634017 nova_compute[243452]: 2026-02-28 10:01:11.944 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.003 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.003 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.010 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.010 243456 INFO nova.compute.claims [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.213 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.653 243456 DEBUG nova.network.neutron [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.677 243456 DEBUG oslo_concurrency.lockutils [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] Releasing lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.679 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquired lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.680 243456 DEBUG nova.network.neutron [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762713474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.743 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.749 243456 DEBUG nova.compute.provider_tree [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.765 243456 DEBUG nova.scheduler.client.report [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.786 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.787 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.836 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.836 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.852 243456 DEBUG nova.network.neutron [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.857 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.875 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.974 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.976 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:12 np0005634017 nova_compute[243452]: 2026-02-28 10:01:12.977 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Creating image(s)#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.004 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.023 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.048 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.052 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.068 243456 DEBUG nova.policy [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.072 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.095 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.095 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance network_info: |[{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.097 243456 DEBUG nova.network.neutron [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.097 243456 DEBUG oslo_concurrency.lockutils [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.098 243456 DEBUG nova.network.neutron [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.101 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start _get_guest_xml network_info=[{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.104 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.104 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.105 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.105 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.125 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.128 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08147934-b9df-4154-8d1f-3fd318973eb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.149 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Releasing lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.150 243456 DEBUG nova.compute.manager [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.153 243456 WARNING nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.158 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.159 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.162 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.162 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.163 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.163 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.163 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.165 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.165 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.166 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.166 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.167 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.176 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:13 np0005634017 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb 28 05:01:13 np0005634017 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 5.007s CPU time.
Feb 28 05:01:13 np0005634017 systemd-machined[209480]: Machine qemu-15-instance-0000000f terminated.
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.371 243456 INFO nova.virt.libvirt.driver [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance destroyed successfully.#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.373 243456 DEBUG nova.objects.instance [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lazy-loading 'resources' on Instance uuid c8eefb37-41ae-4d33-8085-e4e8c3ce2075 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.375 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08147934-b9df-4154-8d1f-3fd318973eb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.469 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.548 243456 DEBUG nova.objects.instance [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid 08147934-b9df-4154-8d1f-3fd318973eb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.566 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.566 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Ensure instance console log exists: /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.567 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.567 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.567 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.668 243456 INFO nova.virt.libvirt.driver [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deleting instance files /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_del#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.669 243456 INFO nova.virt.libvirt.driver [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deletion of /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_del complete#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747155551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.702 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.727 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.731 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.750 243456 INFO nova.compute.manager [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.752 243456 DEBUG oslo.service.loopingcall [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.753 243456 DEBUG nova.compute.manager [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:01:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 413 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 7.4 MiB/s wr, 368 op/s
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.754 243456 DEBUG nova.network.neutron [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.783 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Successfully created port: 1b6bf464-31de-4504-9af4-59a95d6d9c05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.926 243456 DEBUG nova.network.neutron [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.955 243456 DEBUG nova.network.neutron [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:13 np0005634017 nova_compute[243452]: 2026-02-28 10:01:13.971 243456 INFO nova.compute.manager [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 0.22 seconds to deallocate network for instance.#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.036 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.037 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.201 243456 DEBUG oslo_concurrency.processutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634414211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.282 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.283 243456 DEBUG nova.virt.libvirt.vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1434521462',display_name='tempest-FloatingIPsAssociationTestJSON-server-1434521462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1434521462',id=16,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-txju9utb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_us
er_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:09Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=f37b722c-8def-4545-a455-39df230540d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.284 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.285 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.286 243456 DEBUG nova.objects.instance [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'pci_devices' on Instance uuid f37b722c-8def-4545-a455-39df230540d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.303 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <uuid>f37b722c-8def-4545-a455-39df230540d8</uuid>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <name>instance-00000010</name>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1434521462</nova:name>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:13</nova:creationTime>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:user uuid="3b1dc716928742ca935bb155783e2d9a">tempest-FloatingIPsAssociationTestJSON-1803239001-project-member</nova:user>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:project uuid="446562351a804787bd6c523245bada39">tempest-FloatingIPsAssociationTestJSON-1803239001</nova:project>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <nova:port uuid="24f1e17a-f542-4eab-9180-968c61bc1cf7">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <entry name="serial">f37b722c-8def-4545-a455-39df230540d8</entry>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <entry name="uuid">f37b722c-8def-4545-a455-39df230540d8</entry>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f37b722c-8def-4545-a455-39df230540d8_disk">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f37b722c-8def-4545-a455-39df230540d8_disk.config">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:84:dd:a4"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <target dev="tap24f1e17a-f5"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/console.log" append="off"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:14 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:14 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:14 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:14 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.307 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Preparing to wait for external event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.308 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.308 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.308 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.309 243456 DEBUG nova.virt.libvirt.vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1434521462',display_name='tempest-FloatingIPsAssociationTestJSON-server-1434521462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1434521462',id=16,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-txju9utb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:09Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=f37b722c-8def-4545-a455-39df230540d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.310 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.310 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.311 243456 DEBUG os_vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.312 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.312 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.317 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24f1e17a-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.318 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24f1e17a-f5, col_values=(('external_ids', {'iface-id': '24f1e17a-f542-4eab-9180-968c61bc1cf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:dd:a4', 'vm-uuid': 'f37b722c-8def-4545-a455-39df230540d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.319 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:14 np0005634017 NetworkManager[49805]: <info>  [1772272874.3211] manager: (tap24f1e17a-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.328 243456 INFO os_vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5')#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.372 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.373 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.373 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No VIF found with MAC fa:16:3e:84:dd:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.374 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Using config drive#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.398 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.450 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384180488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.744 243456 DEBUG oslo_concurrency.processutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.751 243456 DEBUG nova.compute.provider_tree [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.775 243456 DEBUG nova.network.neutron [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updated VIF entry in instance network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.776 243456 DEBUG nova.network.neutron [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.781 243456 DEBUG nova.scheduler.client.report [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.804 243456 DEBUG oslo_concurrency.lockutils [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.807 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.858 243456 INFO nova.scheduler.client.report [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Deleted allocations for instance c8eefb37-41ae-4d33-8085-e4e8c3ce2075#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.950 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.955 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Creating config drive at /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.965 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6izudh52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:14 np0005634017 nova_compute[243452]: 2026-02-28 10:01:14.997 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Successfully updated port: 1b6bf464-31de-4504-9af4-59a95d6d9c05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.015 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.016 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.016 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.068 243456 DEBUG nova.compute.manager [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-changed-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.069 243456 DEBUG nova.compute.manager [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Refreshing instance network info cache due to event network-changed-1b6bf464-31de-4504-9af4-59a95d6d9c05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.069 243456 DEBUG oslo_concurrency.lockutils [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.099 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6izudh52" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.135 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.140 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config f37b722c-8def-4545-a455-39df230540d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.199 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.320 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config f37b722c-8def-4545-a455-39df230540d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.320 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Deleting local config drive /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:15 np0005634017 NetworkManager[49805]: <info>  [1772272875.3654] manager: (tap24f1e17a-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Feb 28 05:01:15 np0005634017 systemd-udevd[258205]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:01:15 np0005634017 kernel: tap24f1e17a-f5: entered promiscuous mode
Feb 28 05:01:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:15Z|00057|binding|INFO|Claiming lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 for this chassis.
Feb 28 05:01:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:15Z|00058|binding|INFO|24f1e17a-f542-4eab-9180-968c61bc1cf7: Claiming fa:16:3e:84:dd:a4 10.100.0.7
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:15 np0005634017 NetworkManager[49805]: <info>  [1772272875.3852] device (tap24f1e17a-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:01:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:15Z|00059|binding|INFO|Setting lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 ovn-installed in OVS
Feb 28 05:01:15 np0005634017 NetworkManager[49805]: <info>  [1772272875.3865] device (tap24f1e17a-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:15 np0005634017 systemd-machined[209480]: New machine qemu-16-instance-00000010.
Feb 28 05:01:15 np0005634017 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Feb 28 05:01:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:15Z|00060|binding|INFO|Setting lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 up in Southbound
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.521 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dd:a4 10.100.0.7'], port_security=['fa:16:3e:84:dd:a4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37b722c-8def-4545-a455-39df230540d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=24f1e17a-f542-4eab-9180-968c61bc1cf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.522 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 24f1e17a-f542-4eab-9180-968c61bc1cf7 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b bound to our chassis#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.524 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71984a35-6483-4ac4-a021-6bd1f9989d8b#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.536 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26b30c7e-0399-489c-a5a9-6f2056c93f88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.561 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[14726ade-30ab-4eb7-9037-277868e0fa48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.565 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[42ce50d0-41aa-4994-8302-99e08c5828d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.590 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f072a8-87ab-43b8-bfd1-a56102415f37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e49017b-4c0d-464e-86c5-2a23c4444273]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258492, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.630 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51475c66-245f-4c45-bfb6-81cbaf707eb4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436202, 'tstamp': 436202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258508, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436205, 'tstamp': 436205}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258508, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.632 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.636 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71984a35-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.636 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.638 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71984a35-60, col_values=(('external_ids', {'iface-id': 'a589fe00-3087-4c3d-af34-6af9a22081de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.638 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 409 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.1 MiB/s wr, 393 op/s
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.772 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272875.7716343, f37b722c-8def-4545-a455-39df230540d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.773 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.803 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.809 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272875.776962, f37b722c-8def-4545-a455-39df230540d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.810 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.836 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.842 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.867 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.919 243456 DEBUG nova.compute.manager [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.919 243456 DEBUG oslo_concurrency.lockutils [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.919 243456 DEBUG oslo_concurrency.lockutils [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.920 243456 DEBUG oslo_concurrency.lockutils [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.920 243456 DEBUG nova.compute.manager [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Processing event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.921 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.925 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272875.924463, f37b722c-8def-4545-a455-39df230540d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.926 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.929 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.932 243456 INFO nova.virt.libvirt.driver [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance spawned successfully.#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.932 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.961 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.964 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.964 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.964 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.965 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.965 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.965 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:15 np0005634017 nova_compute[243452]: 2026-02-28 10:01:15.969 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.016 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.048 243456 INFO nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 6.74 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.049 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.121 243456 INFO nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 8.07 seconds to build instance.#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.143 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.902 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.944 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.944 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance network_info: |[{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.945 243456 DEBUG oslo_concurrency.lockutils [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.945 243456 DEBUG nova.network.neutron [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Refreshing network info cache for port 1b6bf464-31de-4504-9af4-59a95d6d9c05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.948 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start _get_guest_xml network_info=[{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.952 243456 WARNING nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.958 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.959 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.965 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.965 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.966 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.966 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.971 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.971 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:16 np0005634017 nova_compute[243452]: 2026-02-28 10:01:16.974 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:17Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:01:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:17Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:01:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080498958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:17 np0005634017 nova_compute[243452]: 2026-02-28 10:01:17.555 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:17 np0005634017 nova_compute[243452]: 2026-02-28 10:01:17.596 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:17 np0005634017 nova_compute[243452]: 2026-02-28 10:01:17.601 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 444 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 9.2 MiB/s wr, 441 op/s
Feb 28 05:01:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/933414871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.118 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.122 243456 DEBUG nova.virt.libvirt.vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2143913214',display_name='tempest-ServersAdminTestJSON-server-2143913214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2143913214',id=17,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-zs0r3m3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:12Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=08147934-b9df-4154-8d1f-3fd318973eb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.123 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.125 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.128 243456 DEBUG nova.objects.instance [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 08147934-b9df-4154-8d1f-3fd318973eb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.152 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <uuid>08147934-b9df-4154-8d1f-3fd318973eb6</uuid>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <name>instance-00000011</name>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminTestJSON-server-2143913214</nova:name>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:16</nova:creationTime>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <nova:port uuid="1b6bf464-31de-4504-9af4-59a95d6d9c05">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <entry name="serial">08147934-b9df-4154-8d1f-3fd318973eb6</entry>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <entry name="uuid">08147934-b9df-4154-8d1f-3fd318973eb6</entry>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/08147934-b9df-4154-8d1f-3fd318973eb6_disk">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/08147934-b9df-4154-8d1f-3fd318973eb6_disk.config">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:12:02:52"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <target dev="tap1b6bf464-31"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/console.log" append="off"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:18 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:18 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:18 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:18 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.153 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Preparing to wait for external event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.154 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.155 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:18 np0005634017 podman[258579]: 2026-02-28 10:01:18.155709467 +0000 UTC m=+0.085152979 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.155 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.157 243456 DEBUG nova.virt.libvirt.vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2143913214',display_name='tempest-ServersAdminTestJSON-server-2143913214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2143913214',id=17,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-zs0r3m3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:12Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=08147934-b9df-4154-8d1f-3fd318973eb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.158 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.159 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.160 243456 DEBUG os_vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.162 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.163 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:18 np0005634017 podman[258578]: 2026-02-28 10:01:18.16822108 +0000 UTC m=+0.102277982 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.168 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b6bf464-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.169 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b6bf464-31, col_values=(('external_ids', {'iface-id': '1b6bf464-31de-4504-9af4-59a95d6d9c05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:02:52', 'vm-uuid': '08147934-b9df-4154-8d1f-3fd318973eb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:18 np0005634017 NetworkManager[49805]: <info>  [1772272878.1736] manager: (tap1b6bf464-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.178 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.182 243456 INFO os_vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31')#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.231 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.231 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.232 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:12:02:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.233 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Using config drive#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.259 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.393 243456 DEBUG nova.compute.manager [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG oslo_concurrency.lockutils [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG oslo_concurrency.lockutils [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG oslo_concurrency.lockutils [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG nova.compute.manager [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] No waiting events found dispatching network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.395 243456 WARNING nova.compute.manager [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received unexpected event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.569 243456 DEBUG nova.network.neutron [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updated VIF entry in instance network info cache for port 1b6bf464-31de-4504-9af4-59a95d6d9c05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.570 243456 DEBUG nova.network.neutron [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.599 243456 DEBUG oslo_concurrency.lockutils [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.706 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Creating config drive at /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.711 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqfnr8lb5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.834 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqfnr8lb5" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.864 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:18 np0005634017 nova_compute[243452]: 2026-02-28 10:01:18.868 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.012 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.013 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deleting local config drive /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:19 np0005634017 kernel: tap1b6bf464-31: entered promiscuous mode
Feb 28 05:01:19 np0005634017 NetworkManager[49805]: <info>  [1772272879.0573] manager: (tap1b6bf464-31): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00061|binding|INFO|Claiming lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 for this chassis.
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00062|binding|INFO|1b6bf464-31de-4504-9af4-59a95d6d9c05: Claiming fa:16:3e:12:02:52 10.100.0.10
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00063|binding|INFO|Setting lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 ovn-installed in OVS
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00064|binding|INFO|Setting lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 up in Southbound
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.071 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:02:52 10.100.0.10'], port_security=['fa:16:3e:12:02:52 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '08147934-b9df-4154-8d1f-3fd318973eb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1b6bf464-31de-4504-9af4-59a95d6d9c05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.074 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1b6bf464-31de-4504-9af4-59a95d6d9c05 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.076 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:19 np0005634017 systemd-udevd[258698]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.096 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37be13a1-a105-42f4-8325-0c31682bb0ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:19 np0005634017 systemd-machined[209480]: New machine qemu-17-instance-00000011.
Feb 28 05:01:19 np0005634017 NetworkManager[49805]: <info>  [1772272879.1129] device (tap1b6bf464-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:01:19 np0005634017 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Feb 28 05:01:19 np0005634017 NetworkManager[49805]: <info>  [1772272879.1147] device (tap1b6bf464-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.129 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[361bc16d-ec0e-49eb-98dc-f5f12c7ab7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.134 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7eac565b-8984-4c9c-ab74-823036c51936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.155 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0828e04f-a267-47ee-bd96-9c18af7afed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.170 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba180782-d7c0-41e8-955b-ae7e2b61daf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258712, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.183 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9628dce1-aed5-438f-83d0-620de603935a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258713, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258713, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.184 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.189 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.189 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:c8:7d 10.100.0.13
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:c8:7d 10.100.0.13
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.251 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.276 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.277 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid c92e965f-2d18-4b78-8b78-7d391039f382 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.277 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.278 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid f37b722c-8def-4545-a455-39df230540d8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.278 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 08147934-b9df-4154-8d1f-3fd318973eb6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.279 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.279 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.279 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.280 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.280 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.281 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.281 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.282 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "f37b722c-8def-4545-a455-39df230540d8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.282 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "f37b722c-8def-4545-a455-39df230540d8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.376 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.403 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.491 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272879.4913454, 08147934-b9df-4154-8d1f-3fd318973eb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.492 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.530 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.534 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272879.493852, 08147934-b9df-4154-8d1f-3fd318973eb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.535 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.561 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.565 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.588 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 444 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.6 MiB/s wr, 390 op/s
Feb 28 05:01:19 np0005634017 NetworkManager[49805]: <info>  [1772272879.9304] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:19 np0005634017 NetworkManager[49805]: <info>  [1772272879.9316] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.971 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00065|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 05:01:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:19Z|00066|binding|INFO|Releasing lport a589fe00-3087-4c3d-af34-6af9a22081de from this chassis (sb_readonly=0)
Feb 28 05:01:19 np0005634017 nova_compute[243452]: 2026-02-28 10:01:19.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.231 243456 DEBUG nova.compute.manager [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.231 243456 DEBUG oslo_concurrency.lockutils [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.232 243456 DEBUG oslo_concurrency.lockutils [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.232 243456 DEBUG oslo_concurrency.lockutils [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.233 243456 DEBUG nova.compute.manager [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Processing event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.233 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.238 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.239 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272880.239201, 08147934-b9df-4154-8d1f-3fd318973eb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.239 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.244 243456 INFO nova.virt.libvirt.driver [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance spawned successfully.#033[00m
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.245 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.271 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.271 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.272 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.272 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.272 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.273 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.277 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.281 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.315 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.371 243456 INFO nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 7.40 seconds to spawn the instance on the hypervisor.
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.371 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.435 243456 INFO nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 8.46 seconds to build instance.
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.459 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.460 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.460 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:01:20 np0005634017 nova_compute[243452]: 2026-02-28 10:01:20.460 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:01:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 481 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 521 op/s
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.441 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.441 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.442 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.442 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] No waiting events found dispatching network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 WARNING nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received unexpected event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 for instance with vm_state active and task_state None.
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.444 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.444 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:01:22 np0005634017 nova_compute[243452]: 2026-02-28 10:01:22.444 243456 DEBUG nova.network.neutron [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:01:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:23 np0005634017 nova_compute[243452]: 2026-02-28 10:01:23.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:01:23 np0005634017 nova_compute[243452]: 2026-02-28 10:01:23.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:01:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 484 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.6 MiB/s wr, 385 op/s
Feb 28 05:01:24 np0005634017 nova_compute[243452]: 2026-02-28 10:01:24.299 243456 DEBUG nova.network.neutron [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:01:24 np0005634017 nova_compute[243452]: 2026-02-28 10:01:24.300 243456 DEBUG nova.network.neutron [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:01:24 np0005634017 nova_compute[243452]: 2026-02-28 10:01:24.324 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:01:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 484 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 339 op/s
Feb 28 05:01:25 np0005634017 nova_compute[243452]: 2026-02-28 10:01:25.978 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:25 np0005634017 nova_compute[243452]: 2026-02-28 10:01:25.979 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.001 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.102 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.102 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.110 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.111 243456 INFO nova.compute.claims [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.281 243456 DEBUG nova.compute.manager [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.281 243456 DEBUG nova.compute.manager [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.282 243456 DEBUG oslo_concurrency.lockutils [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.282 243456 DEBUG oslo_concurrency.lockutils [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.283 243456 DEBUG nova.network.neutron [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.388 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.532 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "ce94b006-3fde-4285-89f7-1e435e514d3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.533 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.574 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.670 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552549082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.965 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.972 243456 DEBUG nova.compute.provider_tree [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:01:26 np0005634017 nova_compute[243452]: 2026-02-28 10:01:26.991 243456 DEBUG nova.scheduler.client.report [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.023 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.024 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.027 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.035 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.036 243456 INFO nova.compute.claims [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.088 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.088 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.112 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.131 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.226 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.229 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.229 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Creating image(s)
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.258 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.278 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.300 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.304 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.352 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.353 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.353 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.354 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.375 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.377 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.403 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.617 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.671 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.752 243456 DEBUG nova.objects.instance [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid d2d9bd29-453d-4abd-a3de-c1a9603cfc11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 499 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 295 op/s
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.776 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.776 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Ensure instance console log exists: /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.777 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.778 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.778 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:27Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:dd:a4 10.100.0.7
Feb 28 05:01:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:27Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:dd:a4 10.100.0.7
Feb 28 05:01:27 np0005634017 nova_compute[243452]: 2026-02-28 10:01:27.934 243456 DEBUG nova.policy [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:01:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118599950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.000 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.012 243456 DEBUG nova.compute.provider_tree [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.044 243456 DEBUG nova.scheduler.client.report [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.073 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.075 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.128 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.129 243456 DEBUG nova.network.neutron [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.150 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.166 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.167 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.171 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.178 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.185 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.285 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.286 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.293 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.294 243456 INFO nova.compute.claims [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.299 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.300 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.301 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Creating image(s)#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.323 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.347 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.371 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.375 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.400 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272873.3699229, c8eefb37-41ae-4d33-8085-e4e8c3ce2075 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.400 243456 INFO nova.compute.manager [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.421 243456 DEBUG nova.compute.manager [None req-11c76ca5-80d6-47ee-9527-b7e71801a716 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.441 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.442 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.442 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.443 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.462 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.464 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ce94b006-3fde-4285-89f7-1e435e514d3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.481 243456 DEBUG nova.network.neutron [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.482 243456 DEBUG nova.network.neutron [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.497 243456 DEBUG oslo_concurrency.lockutils [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.601 243456 DEBUG nova.network.neutron [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.601 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.670 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ce94b006-3fde-4285-89f7-1e435e514d3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.739 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.766 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] resizing rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.847 243456 DEBUG nova.objects.instance [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'migration_context' on Instance uuid ce94b006-3fde-4285-89f7-1e435e514d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.861 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.862 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Ensure instance console log exists: /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.862 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.863 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.863 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.865 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.870 243456 WARNING nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.896 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.897 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.900 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.901 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.901 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.902 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.902 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.902 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.903 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.903 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.904 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.904 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.905 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.905 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.905 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.906 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.910 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.935 243456 DEBUG nova.compute.manager [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.935 243456 DEBUG nova.compute.manager [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing instance network info cache due to event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.936 243456 DEBUG oslo_concurrency.lockutils [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.936 243456 DEBUG oslo_concurrency.lockutils [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.937 243456 DEBUG nova.network.neutron [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:28 np0005634017 nova_compute[243452]: 2026-02-28 10:01:28.989 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Successfully created port: 42dc1876-90c4-4b52-b301-1c90b71ff297 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:01:29
Feb 28 05:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.meta']
Feb 28 05:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:01:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383752822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.290 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.299 243456 DEBUG nova.compute.provider_tree [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.318 243456 DEBUG nova.scheduler.client.report [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.348 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.349 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:01:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212338717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.476 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.505 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.510 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.545 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.563 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.583 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.676 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.678 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.678 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating image(s)#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.705 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.741 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 499 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 220 op/s
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.769 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.773 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.850 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.851 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.852 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.853 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.879 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:29 np0005634017 nova_compute[243452]: 2026-02-28 10:01:29.883 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.006 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Successfully updated port: 42dc1876-90c4-4b52-b301-1c90b71ff297 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:01:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533661090' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.027 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.030 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.031 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.044 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.047 243456 DEBUG nova.objects.instance [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce94b006-3fde-4285-89f7-1e435e514d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.074 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <uuid>ce94b006-3fde-4285-89f7-1e435e514d3e</uuid>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <name>instance-00000013</name>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-848702146</nova:name>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:28</nova:creationTime>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:user uuid="d12cfb1e6b0d4d93916ba6a6c4b75cfc">tempest-ServersAdminNegativeTestJSON-1432426192-project-member</nova:user>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <nova:project uuid="388f2f7e6d59433a8c88217806df2e33">tempest-ServersAdminNegativeTestJSON-1432426192</nova:project>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <entry name="serial">ce94b006-3fde-4285-89f7-1e435e514d3e</entry>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <entry name="uuid">ce94b006-3fde-4285-89f7-1e435e514d3e</entry>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ce94b006-3fde-4285-89f7-1e435e514d3e_disk">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/console.log" append="off"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:30 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:30 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:30 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:30 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.157 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.157 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.158 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Using config drive
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.177 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.182 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.238 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] resizing rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.308 243456 DEBUG nova.objects.instance [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.319 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ensure instance console log exists: /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.321 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.325 243456 WARNING nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.330 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.330 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.335 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.335 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.335 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.337 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:01:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.684 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Creating config drive at /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.687 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpid341_41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.746 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.810 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpid341_41" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.842 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.846 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.871 243456 DEBUG nova.network.neutron [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updated VIF entry in instance network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.872 243456 DEBUG nova.network.neutron [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:01:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880366764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.890 243456 DEBUG oslo_concurrency.lockutils [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.906 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.929 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.934 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.988 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:01:30 np0005634017 nova_compute[243452]: 2026-02-28 10:01:30.989 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deleting local config drive /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config because it was imported into RBD.
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.005 243456 DEBUG nova.compute.manager [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-changed-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.005 243456 DEBUG nova.compute.manager [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Refreshing instance network info cache due to event network-changed-42dc1876-90c4-4b52-b301-1c90b71ff297. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.005 243456 DEBUG oslo_concurrency.lockutils [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:01:31 np0005634017 systemd-machined[209480]: New machine qemu-18-instance-00000013.
Feb 28 05:01:31 np0005634017 systemd[1]: Started Virtual Machine qemu-18-instance-00000013.
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.451 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272891.4501214, ce94b006-3fde-4285-89f7-1e435e514d3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.455 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] VM Resumed (Lifecycle Event)
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.465 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.465 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.471 243456 INFO nova.virt.libvirt.driver [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance spawned successfully.
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.471 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.484 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.486 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.493 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.494 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.494 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.495 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.495 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.496 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:01:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166209625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.533 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272891.4545243, ce94b006-3fde-4285-89f7-1e435e514d3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.549 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.550 243456 DEBUG nova.objects.instance [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.563 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.566 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <uuid>9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</uuid>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <name>instance-00000014</name>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdmin275Test-server-1098361722</nova:name>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:30</nova:creationTime>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:user uuid="c1d3b80b39ba4f3392d63b05c85009e2">tempest-ServersAdmin275Test-175914647-project-member</nova:user>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <nova:project uuid="a1e90097927e484eb57dee6ac05b7b47">tempest-ServersAdmin275Test-175914647</nova:project>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <entry name="serial">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <entry name="uuid">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log" append="off"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:31 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:31 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:31 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:31 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.574 243456 INFO nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 3.27 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.574 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.576 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.586 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.631 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.634 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.634 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance network_info: |[{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.635 243456 DEBUG oslo_concurrency.lockutils [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.635 243456 DEBUG nova.network.neutron [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Refreshing network info cache for port 42dc1876-90c4-4b52-b301-1c90b71ff297 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.638 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start _get_guest_xml network_info=[{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.643 243456 WARNING nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.649 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.650 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.652 243456 INFO nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 5.00 seconds to build instance.#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.661 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.662 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.662 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.662 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.663 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.663 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.664 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.664 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.664 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.665 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.665 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.665 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.666 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.666 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.668 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.683 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.683 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.684 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Using config drive#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.710 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.740 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 590 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.0 MiB/s wr, 305 op/s
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.961 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating config drive at /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config#033[00m
Feb 28 05:01:31 np0005634017 nova_compute[243452]: 2026-02-28 10:01:31.970 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp78vrattu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.096 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp78vrattu" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.131 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.136 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489451399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.233 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:32Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:02:52 10.100.0.10
Feb 28 05:01:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:32Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:02:52 10.100.0.10
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.260 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.266 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.286 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.287 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting local config drive /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:32 np0005634017 systemd-machined[209480]: New machine qemu-19-instance-00000014.
Feb 28 05:01:32 np0005634017 systemd[1]: Started Virtual Machine qemu-19-instance-00000014.
Feb 28 05:01:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.886 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272892.8852859, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.887 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.891 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.891 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.896 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance spawned successfully.#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.897 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635969508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.919 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.927 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.931 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.931 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.932 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.932 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.932 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.933 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.937 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.938 243456 DEBUG nova.virt.libvirt.vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1694487247',display_name='tempest-ServersAdminTestJSON-server-1694487247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1694487247',id=18,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-vmrf35mh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:27Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=d2d9bd29-453d-4abd-a3de-c1a9603cfc11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.938 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.939 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.939 243456 DEBUG nova.objects.instance [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid d2d9bd29-453d-4abd-a3de-c1a9603cfc11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.978 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.978 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272892.8859205, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.978 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.981 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <uuid>d2d9bd29-453d-4abd-a3de-c1a9603cfc11</uuid>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <name>instance-00000012</name>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminTestJSON-server-1694487247</nova:name>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:31</nova:creationTime>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <nova:port uuid="42dc1876-90c4-4b52-b301-1c90b71ff297">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <entry name="serial">d2d9bd29-453d-4abd-a3de-c1a9603cfc11</entry>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <entry name="uuid">d2d9bd29-453d-4abd-a3de-c1a9603cfc11</entry>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:0e:8a:2e"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <target dev="tap42dc1876-90"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/console.log" append="off"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:32 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:32 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:32 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:32 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.982 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Preparing to wait for external event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.982 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.982 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.983 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.983 243456 DEBUG nova.virt.libvirt.vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1694487247',display_name='tempest-ServersAdminTestJSON-server-1694487247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1694487247',id=18,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-vmrf35mh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminT
estJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:27Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=d2d9bd29-453d-4abd-a3de-c1a9603cfc11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.984 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.985 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.985 243456 DEBUG os_vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.990 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.991 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.994 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42dc1876-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.995 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42dc1876-90, col_values=(('external_ids', {'iface-id': '42dc1876-90c4-4b52-b301-1c90b71ff297', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:8a:2e', 'vm-uuid': 'd2d9bd29-453d-4abd-a3de-c1a9603cfc11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.996 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:32 np0005634017 NetworkManager[49805]: <info>  [1772272892.9976] manager: (tap42dc1876-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 28 05:01:32 np0005634017 nova_compute[243452]: 2026-02-28 10:01:32.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.005 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.006 243456 INFO os_vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90')#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.009 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.042 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.056 243456 INFO nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 3.38 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.057 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.066 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.067 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.068 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:0e:8a:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.068 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Using config drive#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.094 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.123 243456 INFO nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 4.88 seconds to build instance.#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.151 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:01:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.619 243456 DEBUG nova.compute.manager [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.619 243456 DEBUG nova.compute.manager [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing instance network info cache due to event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.620 243456 DEBUG oslo_concurrency.lockutils [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.620 243456 DEBUG oslo_concurrency.lockutils [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.621 243456 DEBUG nova.network.neutron [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.703522645 +0000 UTC m=+0.052338606 container create 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:01:33 np0005634017 systemd[1]: Started libpod-conmon-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope.
Feb 28 05:01:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 643 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.8 MiB/s wr, 283 op/s
Feb 28 05:01:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.68311441 +0000 UTC m=+0.031930411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.802571624 +0000 UTC m=+0.151387625 container init 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.813877673 +0000 UTC m=+0.162693624 container start 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.817710711 +0000 UTC m=+0.166526662 container attach 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:01:33 np0005634017 competent_bohr[259917]: 167 167
Feb 28 05:01:33 np0005634017 systemd[1]: libpod-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope: Deactivated successfully.
Feb 28 05:01:33 np0005634017 conmon[259917]: conmon 0c85fa5f138d8edf2d29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope/container/memory.events
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.822420874 +0000 UTC m=+0.171236825 container died 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:01:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c844b2e2d961845dc6fa34e945c17530506f506f953029b152705be32ea7cd85-merged.mount: Deactivated successfully.
Feb 28 05:01:33 np0005634017 podman[259902]: 2026-02-28 10:01:33.856892524 +0000 UTC m=+0.205708485 container remove 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:01:33 np0005634017 systemd[1]: libpod-conmon-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope: Deactivated successfully.
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.970 243456 DEBUG nova.network.neutron [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updated VIF entry in instance network info cache for port 42dc1876-90c4-4b52-b301-1c90b71ff297. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.971 243456 DEBUG nova.network.neutron [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.976 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Creating config drive at /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config#033[00m
Feb 28 05:01:33 np0005634017 nova_compute[243452]: 2026-02-28 10:01:33.981 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3uvy17b3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:01:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:01:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.012 243456 DEBUG oslo_concurrency.lockutils [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.055659303 +0000 UTC m=+0.056402240 container create d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 28 05:01:34 np0005634017 systemd[1]: Started libpod-conmon-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope.
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.107 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3uvy17b3" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.032035248 +0000 UTC m=+0.032778195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.127 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.162 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.164088817 +0000 UTC m=+0.164831754 container init d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.172240377 +0000 UTC m=+0.172983314 container start d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.176017703 +0000 UTC m=+0.176760650 container attach d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.283 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.287 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deleting local config drive /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:34 np0005634017 kernel: tap42dc1876-90: entered promiscuous mode
Feb 28 05:01:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:34Z|00067|binding|INFO|Claiming lport 42dc1876-90c4-4b52-b301-1c90b71ff297 for this chassis.
Feb 28 05:01:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:34Z|00068|binding|INFO|42dc1876-90c4-4b52-b301-1c90b71ff297: Claiming fa:16:3e:0e:8a:2e 10.100.0.4
Feb 28 05:01:34 np0005634017 NetworkManager[49805]: <info>  [1772272894.3246] manager: (tap42dc1876-90): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:34 np0005634017 systemd-udevd[259557]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.331 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:8a:2e 10.100.0.4'], port_security=['fa:16:3e:0e:8a:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd2d9bd29-453d-4abd-a3de-c1a9603cfc11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=42dc1876-90c4-4b52-b301-1c90b71ff297) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.335 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 42dc1876-90c4-4b52-b301-1c90b71ff297 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.337 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:34Z|00069|binding|INFO|Setting lport 42dc1876-90c4-4b52-b301-1c90b71ff297 ovn-installed in OVS
Feb 28 05:01:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:34Z|00070|binding|INFO|Setting lport 42dc1876-90c4-4b52-b301-1c90b71ff297 up in Southbound
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.342 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:34 np0005634017 NetworkManager[49805]: <info>  [1772272894.3505] device (tap42dc1876-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:01:34 np0005634017 NetworkManager[49805]: <info>  [1772272894.3511] device (tap42dc1876-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63ab131b-71bb-457d-b84d-bf01556bf97f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:34 np0005634017 systemd-machined[209480]: New machine qemu-20-instance-00000012.
Feb 28 05:01:34 np0005634017 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.380 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[871c8709-5467-4e0d-9d2b-d92b6ce788ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7491a0b8-c56a-4b0a-9de8-8e1f2f2f448e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.429 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[381be29c-7e41-47bb-9826-b7b2956e9a00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.450 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b902278-44ad-4f43-b2cd-f5ed86fd86df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260033, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d314eef-3748-4a20-b007-ff0ab642cce4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260035, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260035, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.471 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:34 np0005634017 nova_compute[243452]: 2026-02-28 10:01:34.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.479 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:34 np0005634017 jolly_snyder[259960]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:01:34 np0005634017 jolly_snyder[259960]: --> All data devices are unavailable
Feb 28 05:01:34 np0005634017 systemd[1]: libpod-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope: Deactivated successfully.
Feb 28 05:01:34 np0005634017 conmon[259960]: conmon d7d2970cea941c58d67e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope/container/memory.events
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.688187029 +0000 UTC m=+0.688929956 container died d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:01:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1-merged.mount: Deactivated successfully.
Feb 28 05:01:34 np0005634017 podman[259941]: 2026-02-28 10:01:34.735556283 +0000 UTC m=+0.736299210 container remove d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:01:34 np0005634017 systemd[1]: libpod-conmon-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope: Deactivated successfully.
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.072 243456 DEBUG nova.compute.manager [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG oslo_concurrency.lockutils [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG oslo_concurrency.lockutils [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG oslo_concurrency.lockutils [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG nova.compute.manager [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Processing event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.092 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.094 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272895.0922856, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.094 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.098 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.103 243456 INFO nova.virt.libvirt.driver [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance spawned successfully.#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.103 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.112 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.113 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.113 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.113 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.114 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.116 243456 INFO nova.compute.manager [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Terminating instance#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.118 243456 DEBUG nova.compute.manager [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.121 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.131 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.136 243456 DEBUG nova.network.neutron [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updated VIF entry in instance network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.136 243456 DEBUG nova.network.neutron [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.142 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.143 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.143 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.144 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.145 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.146 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.171 243456 DEBUG oslo_concurrency.lockutils [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:35 np0005634017 kernel: tap24f1e17a-f5 (unregistering): left promiscuous mode
Feb 28 05:01:35 np0005634017 NetworkManager[49805]: <info>  [1772272895.1781] device (tap24f1e17a-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.180 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.180 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272895.0937629, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.181 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:35Z|00071|binding|INFO|Releasing lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 from this chassis (sb_readonly=0)
Feb 28 05:01:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:35Z|00072|binding|INFO|Setting lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 down in Southbound
Feb 28 05:01:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:35Z|00073|binding|INFO|Removing iface tap24f1e17a-f5 ovn-installed in OVS
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.201 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dd:a4 10.100.0.7'], port_security=['fa:16:3e:84:dd:a4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37b722c-8def-4545-a455-39df230540d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=24f1e17a-f542-4eab-9180-968c61bc1cf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.204 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 24f1e17a-f542-4eab-9180-968c61bc1cf7 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b unbound from our chassis#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.210 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.208 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71984a35-6483-4ac4-a021-6bd1f9989d8b#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.211 243456 INFO nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 7.98 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.211 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.215 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272895.0994737, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.215 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.222 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.223 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:35 np0005634017 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb 28 05:01:35 np0005634017 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 12.652s CPU time.
Feb 28 05:01:35 np0005634017 systemd-machined[209480]: Machine qemu-16-instance-00000010 terminated.
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a518c0d6-d7eb-4b69-a7a4-e4d96257ea01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.250 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.250 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[25d1cbe9-fe28-4f62-ba8c-95a4760d96f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.254 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d39728-ecca-402b-abbc-a89f0db04936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.257 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.259 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.276 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d4333e-f4e2-4288-a582-727f2e4646d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.286619774 +0000 UTC m=+0.051834811 container create 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcebfaa-5304-4a43-9f26-a58de94d4fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260182, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.300 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.308 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32277d7e-a243-4f39-ab5a-239e86e5608b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436202, 'tstamp': 436202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260183, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436205, 'tstamp': 436205}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260183, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.310 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 systemd[1]: Started libpod-conmon-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope.
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71984a35-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71984a35-60, col_values=(('external_ids', {'iface-id': 'a589fe00-3087-4c3d-af34-6af9a22081de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.320 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.326 243456 INFO nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 9.25 seconds to build instance.#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.345 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.352 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.352 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.359 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.359 243456 INFO nova.compute.claims [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.363 243456 INFO nova.virt.libvirt.driver [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance destroyed successfully.#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.364 243456 DEBUG nova.objects.instance [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'resources' on Instance uuid f37b722c-8def-4545-a455-39df230540d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.269368749 +0000 UTC m=+0.034583806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.368898822 +0000 UTC m=+0.134113879 container init 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.374698425 +0000 UTC m=+0.139913462 container start 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:01:35 np0005634017 exciting_rhodes[260187]: 167 167
Feb 28 05:01:35 np0005634017 systemd[1]: libpod-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope: Deactivated successfully.
Feb 28 05:01:35 np0005634017 conmon[260187]: conmon 3c6a300e30791667bc0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope/container/memory.events
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.382000871 +0000 UTC m=+0.147215928 container attach 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.382458984 +0000 UTC m=+0.147674021 container died 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.388 243456 DEBUG nova.virt.libvirt.vif [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1434521462',display_name='tempest-FloatingIPsAssociationTestJSON-server-1434521462',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1434521462',id=16,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-txju9utb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:16Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=f37b722c-8def-4545-a455-39df230540d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.391 243456 DEBUG nova.network.os_vif_util [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.392 243456 DEBUG nova.network.os_vif_util [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.393 243456 DEBUG os_vif [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.395 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24f1e17a-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.400 243456 INFO os_vif [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5')#033[00m
Feb 28 05:01:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-00006daa3d636e0ee33f3163e551eeb6053feff82c40035a4a0ce62ffb7be6cc-merged.mount: Deactivated successfully.
Feb 28 05:01:35 np0005634017 podman[260164]: 2026-02-28 10:01:35.417075749 +0000 UTC m=+0.182290786 container remove 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:01:35 np0005634017 systemd[1]: libpod-conmon-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope: Deactivated successfully.
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.468 243456 INFO nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Rebuilding instance#033[00m
Feb 28 05:01:35 np0005634017 podman[260241]: 2026-02-28 10:01:35.609720145 +0000 UTC m=+0.066491924 container create 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:01:35 np0005634017 podman[260241]: 2026-02-28 10:01:35.575324076 +0000 UTC m=+0.032095875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.703 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:35 np0005634017 systemd[1]: Started libpod-conmon-29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868.scope.
Feb 28 05:01:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.751 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.769 243456 DEBUG nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 674 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 9.6 MiB/s wr, 379 op/s
Feb 28 05:01:35 np0005634017 podman[260241]: 2026-02-28 10:01:35.777909542 +0000 UTC m=+0.234681361 container init 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.781 243456 DEBUG nova.compute.manager [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-unplugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG oslo_concurrency.lockutils [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG oslo_concurrency.lockutils [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG oslo_concurrency.lockutils [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG nova.compute.manager [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] No waiting events found dispatching network-vif-unplugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.783 243456 DEBUG nova.compute.manager [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-unplugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:01:35 np0005634017 podman[260241]: 2026-02-28 10:01:35.783730946 +0000 UTC m=+0.240502715 container start 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:01:35 np0005634017 podman[260241]: 2026-02-28 10:01:35.802239127 +0000 UTC m=+0.259010906 container attach 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.855 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.869 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.882 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'resources' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.894 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.907 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.919 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.977 243456 INFO nova.virt.libvirt.driver [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Deleting instance files /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8_del#033[00m
Feb 28 05:01:35 np0005634017 nova_compute[243452]: 2026-02-28 10:01:35.978 243456 INFO nova.virt.libvirt.driver [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Deletion of /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8_del complete#033[00m
Feb 28 05:01:36 np0005634017 boring_pike[260259]: {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:    "0": [
Feb 28 05:01:36 np0005634017 boring_pike[260259]:        {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "devices": [
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "/dev/loop3"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            ],
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_name": "ceph_lv0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_size": "21470642176",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "name": "ceph_lv0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "tags": {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cluster_name": "ceph",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.crush_device_class": "",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.encrypted": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.objectstore": "bluestore",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osd_id": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.type": "block",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.vdo": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.with_tpm": "0"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            },
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "type": "block",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "vg_name": "ceph_vg0"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:        }
Feb 28 05:01:36 np0005634017 boring_pike[260259]:    ],
Feb 28 05:01:36 np0005634017 boring_pike[260259]:    "1": [
Feb 28 05:01:36 np0005634017 boring_pike[260259]:        {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "devices": [
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "/dev/loop4"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            ],
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_name": "ceph_lv1",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_size": "21470642176",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "name": "ceph_lv1",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "tags": {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cluster_name": "ceph",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.crush_device_class": "",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.encrypted": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.objectstore": "bluestore",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osd_id": "1",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.type": "block",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.vdo": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.with_tpm": "0"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            },
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "type": "block",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "vg_name": "ceph_vg1"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:        }
Feb 28 05:01:36 np0005634017 boring_pike[260259]:    ],
Feb 28 05:01:36 np0005634017 boring_pike[260259]:    "2": [
Feb 28 05:01:36 np0005634017 boring_pike[260259]:        {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "devices": [
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "/dev/loop5"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            ],
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_name": "ceph_lv2",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_size": "21470642176",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "name": "ceph_lv2",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "tags": {
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.cluster_name": "ceph",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.crush_device_class": "",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.encrypted": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.objectstore": "bluestore",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osd_id": "2",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.type": "block",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.vdo": "0",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:                "ceph.with_tpm": "0"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            },
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "type": "block",
Feb 28 05:01:36 np0005634017 boring_pike[260259]:            "vg_name": "ceph_vg2"
Feb 28 05:01:36 np0005634017 boring_pike[260259]:        }
Feb 28 05:01:36 np0005634017 boring_pike[260259]:    ]
Feb 28 05:01:36 np0005634017 boring_pike[260259]: }
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.043 243456 INFO nova.compute.manager [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.043 243456 DEBUG oslo.service.loopingcall [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.044 243456 DEBUG nova.compute.manager [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.044 243456 DEBUG nova.network.neutron [-] [instance: f37b722c-8def-4545-a455-39df230540d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:01:36 np0005634017 systemd[1]: libpod-29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868.scope: Deactivated successfully.
Feb 28 05:01:36 np0005634017 podman[260287]: 2026-02-28 10:01:36.087146832 +0000 UTC m=+0.025783917 container died 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:01:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c-merged.mount: Deactivated successfully.
Feb 28 05:01:36 np0005634017 podman[260287]: 2026-02-28 10:01:36.131312746 +0000 UTC m=+0.069949821 container remove 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:01:36 np0005634017 systemd[1]: libpod-conmon-29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868.scope: Deactivated successfully.
Feb 28 05:01:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967810180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.275 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.281 243456 DEBUG nova.compute.provider_tree [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.298 243456 DEBUG nova.scheduler.client.report [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.323 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.324 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.368 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.368 243456 DEBUG nova.network.neutron [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.392 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.420 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.499 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.501 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.502 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Creating image(s)#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.527 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.55396819 +0000 UTC m=+0.046820340 container create 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.563 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:36 np0005634017 systemd[1]: Started libpod-conmon-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope.
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.595 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.600 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.621520792 +0000 UTC m=+0.114372942 container init 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.532521306 +0000 UTC m=+0.025373516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.627256514 +0000 UTC m=+0.120108644 container start 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:01:36 np0005634017 fervent_einstein[260432]: 167 167
Feb 28 05:01:36 np0005634017 systemd[1]: libpod-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope: Deactivated successfully.
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.632062969 +0000 UTC m=+0.124915099 container attach 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 05:01:36 np0005634017 conmon[260432]: conmon 778d69ba8212c9cfcf33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope/container/memory.events
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.632975395 +0000 UTC m=+0.125827525 container died 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:01:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0068f3b3a27544eaa4486f76724916600c281b8923da91d2e79e4af742f9fdc0-merged.mount: Deactivated successfully.
Feb 28 05:01:36 np0005634017 podman[260366]: 2026-02-28 10:01:36.666410907 +0000 UTC m=+0.159263037 container remove 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.674 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.675 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.675 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.676 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:36 np0005634017 systemd[1]: libpod-conmon-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope: Deactivated successfully.
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.696 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.701 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.781 243456 DEBUG nova.network.neutron [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.781 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:01:36 np0005634017 podman[260494]: 2026-02-28 10:01:36.818878271 +0000 UTC m=+0.043008262 container create 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:01:36 np0005634017 systemd[1]: Started libpod-conmon-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope.
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.880 243456 DEBUG nova.network.neutron [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:01:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:01:36 np0005634017 podman[260494]: 2026-02-28 10:01:36.797268613 +0000 UTC m=+0.021398624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.901 243456 INFO nova.compute.manager [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 0.86 seconds to deallocate network for instance.#033[00m
Feb 28 05:01:36 np0005634017 podman[260494]: 2026-02-28 10:01:36.906011305 +0000 UTC m=+0.130141296 container init 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:01:36 np0005634017 podman[260494]: 2026-02-28 10:01:36.913005922 +0000 UTC m=+0.137135913 container start 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:01:36 np0005634017 podman[260494]: 2026-02-28 10:01:36.918686962 +0000 UTC m=+0.142816953 container attach 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.936 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.963 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:36 np0005634017 nova_compute[243452]: 2026-02-28 10:01:36.964 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.005 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] resizing rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.126 243456 DEBUG nova.objects.instance [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'migration_context' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.148 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.148 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Ensure instance console log exists: /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.148 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.149 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.149 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.150 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.153 243456 WARNING nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.158 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.159 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.161 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.162 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.162 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.162 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.166 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.316 243456 DEBUG oslo_concurrency.processutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:37 np0005634017 lvm[260701]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:01:37 np0005634017 lvm[260701]: VG ceph_vg0 finished
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:37 np0005634017 lvm[260703]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:01:37 np0005634017 lvm[260703]: VG ceph_vg1 finished
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.613 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.614 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.616 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.617 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:37 np0005634017 lvm[260705]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:01:37 np0005634017 lvm[260705]: VG ceph_vg2 finished
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.618 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] No waiting events found dispatching network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.618 243456 WARNING nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received unexpected event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.618 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-deleted-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG nova.network.neutron [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:37 np0005634017 agitated_hermann[260514]: {}
Feb 28 05:01:37 np0005634017 systemd[1]: libpod-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope: Deactivated successfully.
Feb 28 05:01:37 np0005634017 systemd[1]: libpod-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope: Consumed 1.045s CPU time.
Feb 28 05:01:37 np0005634017 podman[260494]: 2026-02-28 10:01:37.695034427 +0000 UTC m=+0.919164418 container died 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:01:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3-merged.mount: Deactivated successfully.
Feb 28 05:01:37 np0005634017 podman[260494]: 2026-02-28 10:01:37.732981562 +0000 UTC m=+0.957111563 container remove 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663080830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:37 np0005634017 systemd[1]: libpod-conmon-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope: Deactivated successfully.
Feb 28 05:01:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 656 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 10 MiB/s wr, 411 op/s
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.784 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.813 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.819 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.841 243456 DEBUG oslo_concurrency.lockutils [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] Acquiring lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.841 243456 DEBUG oslo_concurrency.lockutils [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] Acquired lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.842 243456 DEBUG nova.network.neutron [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590387511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.889 243456 DEBUG oslo_concurrency.processutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.895 243456 DEBUG nova.compute.provider_tree [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.911 243456 DEBUG nova.scheduler.client.report [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.934 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:37 np0005634017 nova_compute[243452]: 2026-02-28 10:01:37.955 243456 INFO nova.scheduler.client.report [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Deleted allocations for instance f37b722c-8def-4545-a455-39df230540d8#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.015 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:01:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.152 243456 DEBUG nova.compute.manager [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.152 243456 DEBUG oslo_concurrency.lockutils [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 DEBUG oslo_concurrency.lockutils [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 DEBUG oslo_concurrency.lockutils [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 DEBUG nova.compute.manager [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] No waiting events found dispatching network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 WARNING nova.compute.manager [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received unexpected event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:01:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566414693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.323 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.325 243456 DEBUG nova.objects.instance [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.343 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <uuid>cd5eedf6-c835-46d6-9378-148eb04d4cb2</uuid>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <name>instance-00000015</name>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1400219163</nova:name>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:37</nova:creationTime>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:user uuid="d12cfb1e6b0d4d93916ba6a6c4b75cfc">tempest-ServersAdminNegativeTestJSON-1432426192-project-member</nova:user>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <nova:project uuid="388f2f7e6d59433a8c88217806df2e33">tempest-ServersAdminNegativeTestJSON-1432426192</nova:project>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <entry name="serial">cd5eedf6-c835-46d6-9378-148eb04d4cb2</entry>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <entry name="uuid">cd5eedf6-c835-46d6-9378-148eb04d4cb2</entry>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/console.log" append="off"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:38 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:38 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:38 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:38 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.395 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.395 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.396 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Using config drive#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.415 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:38 np0005634017 nova_compute[243452]: 2026-02-28 10:01:38.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.272 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Creating config drive at /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.279 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8bktf0om execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.410 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8bktf0om" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.436 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.440 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.598 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:39 np0005634017 nova_compute[243452]: 2026-02-28 10:01:39.600 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deleting local config drive /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:39 np0005634017 systemd-machined[209480]: New machine qemu-21-instance-00000015.
Feb 28 05:01:39 np0005634017 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Feb 28 05:01:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 656 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 9.2 MiB/s wr, 391 op/s
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.008 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272900.0083244, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.009 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.022 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.023 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.028 243456 INFO nova.virt.libvirt.driver [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance spawned successfully.#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.028 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.040 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.044 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.056 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.057 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.058 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.058 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.059 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.060 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.074 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.074 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272900.0218563, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.075 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.118 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.121 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.166 243456 INFO nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 3.67 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.167 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.176 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.241 243456 INFO nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 4.92 seconds to build instance.#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.260 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004546580513667694 of space, bias 1.0, pg target 1.3639741541003083 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024910054340674293 of space, bias 1.0, pg target 0.7448106247861613 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.023018427299415e-07 of space, bias 4.0, pg target 0.00107915300390501 quantized to 16 (current 16)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:01:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.779 243456 DEBUG nova.network.neutron [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.780 243456 DEBUG nova.network.neutron [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:40 np0005634017 nova_compute[243452]: 2026-02-28 10:01:40.902 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:41 np0005634017 nova_compute[243452]: 2026-02-28 10:01:41.427 243456 DEBUG nova.network.neutron [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:41 np0005634017 nova_compute[243452]: 2026-02-28 10:01:41.455 243456 DEBUG oslo_concurrency.lockutils [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] Releasing lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:41 np0005634017 nova_compute[243452]: 2026-02-28 10:01:41.455 243456 DEBUG nova.compute.manager [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Feb 28 05:01:41 np0005634017 nova_compute[243452]: 2026-02-28 10:01:41.455 243456 DEBUG nova.compute.manager [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] network_info to inject: |[{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Feb 28 05:01:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 656 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 477 op/s
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.007 243456 DEBUG nova.objects.instance [None req-f7342722-e58f-43f4-a242-177cbc0b1374 b9190d82939143d782afeba7662b6241 59eeacf1fc7e434d81334a4fff5e51d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.027 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272902.0276446, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.028 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.055 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.064 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.086 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:01:42 np0005634017 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb 28 05:01:42 np0005634017 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 2.400s CPU time.
Feb 28 05:01:42 np0005634017 systemd-machined[209480]: Machine qemu-21-instance-00000015 terminated.
Feb 28 05:01:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:42 np0005634017 nova_compute[243452]: 2026-02-28 10:01:42.666 243456 DEBUG nova.compute.manager [None req-f7342722-e58f-43f4-a242-177cbc0b1374 b9190d82939143d782afeba7662b6241 59eeacf1fc7e434d81334a4fff5e51d9 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:43 np0005634017 nova_compute[243452]: 2026-02-28 10:01:43.420 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 656 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.4 MiB/s wr, 410 op/s
Feb 28 05:01:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:44.705 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:44 np0005634017 nova_compute[243452]: 2026-02-28 10:01:44.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:44.707 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:01:44 np0005634017 nova_compute[243452]: 2026-02-28 10:01:44.935 243456 DEBUG nova.compute.manager [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:44 np0005634017 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG nova.compute.manager [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:01:44 np0005634017 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG oslo_concurrency.lockutils [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:44 np0005634017 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG oslo_concurrency.lockutils [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:44 np0005634017 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG nova.network.neutron [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:01:45 np0005634017 nova_compute[243452]: 2026-02-28 10:01:45.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:01:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/704200910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:01:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:01:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/704200910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:01:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:45.708 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 677 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.1 MiB/s wr, 396 op/s
Feb 28 05:01:45 np0005634017 nova_compute[243452]: 2026-02-28 10:01:45.977 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:01:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:46Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:8a:2e 10.100.0.4
Feb 28 05:01:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:46Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:8a:2e 10.100.0.4
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.054 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.054 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.055 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.055 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.056 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.058 243456 INFO nova.compute.manager [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Terminating instance#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.059 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "refresh_cache-cd5eedf6-c835-46d6-9378-148eb04d4cb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.059 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquired lock "refresh_cache-cd5eedf6-c835-46d6-9378-148eb04d4cb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.060 243456 DEBUG nova.network.neutron [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.307 243456 DEBUG nova.network.neutron [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.363 243456 DEBUG nova.network.neutron [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.365 243456 DEBUG nova.network.neutron [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.387 243456 DEBUG oslo_concurrency.lockutils [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.576 243456 DEBUG nova.network.neutron [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.597 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Releasing lock "refresh_cache-cd5eedf6-c835-46d6-9378-148eb04d4cb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.597 243456 DEBUG nova.compute.manager [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.605 243456 INFO nova.virt.libvirt.driver [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance destroyed successfully.#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.605 243456 DEBUG nova.objects.instance [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'resources' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 721 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.6 MiB/s wr, 316 op/s
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.888 243456 INFO nova.virt.libvirt.driver [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deleting instance files /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2_del#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.888 243456 INFO nova.virt.libvirt.driver [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deletion of /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2_del complete#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.966 243456 INFO nova.compute.manager [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.967 243456 DEBUG oslo.service.loopingcall [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.967 243456 DEBUG nova.compute.manager [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:01:47 np0005634017 nova_compute[243452]: 2026-02-28 10:01:47.967 243456 DEBUG nova.network.neutron [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.121 243456 DEBUG nova.network.neutron [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.149 243456 DEBUG nova.network.neutron [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.170 243456 INFO nova.compute.manager [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 0.20 seconds to deallocate network for instance.#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.232 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.233 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.410 243456 DEBUG oslo_concurrency.processutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.781 243456 INFO nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Rebuilding instance#033[00m
Feb 28 05:01:48 np0005634017 nova_compute[243452]: 2026-02-28 10:01:48.993 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:01:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534240982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.040 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'trusted_certs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.043 243456 DEBUG oslo_concurrency.processutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:49 np0005634017 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.049 243456 DEBUG nova.compute.provider_tree [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:49 np0005634017 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000014.scope: Consumed 13.242s CPU time.
Feb 28 05:01:49 np0005634017 systemd-machined[209480]: Machine qemu-19-instance-00000014 terminated.
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.057 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.065 243456 DEBUG nova.scheduler.client.report [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.090 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.105 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_requests' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.118 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.132 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.134 243456 INFO nova.scheduler.client.report [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Deleted allocations for instance cd5eedf6-c835-46d6-9378-148eb04d4cb2#033[00m
Feb 28 05:01:49 np0005634017 podman[260952]: 2026-02-28 10:01:49.138872165 +0000 UTC m=+0.088661221 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.141 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:49 np0005634017 podman[260951]: 2026-02-28 10:01:49.143739021 +0000 UTC m=+0.093861536 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.155 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.161 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.206 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.212 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.218 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.302 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.302 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.303 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.303 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.304 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.306 243456 INFO nova.compute.manager [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Terminating instance#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.308 243456 DEBUG nova.compute.manager [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:01:49 np0005634017 kernel: tap77e0efad-ce (unregistering): left promiscuous mode
Feb 28 05:01:49 np0005634017 NetworkManager[49805]: <info>  [1772272909.3606] device (tap77e0efad-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:49Z|00074|binding|INFO|Releasing lport 77e0efad-ce89-42fd-9284-b155767f5c74 from this chassis (sb_readonly=0)
Feb 28 05:01:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:49Z|00075|binding|INFO|Setting lport 77e0efad-ce89-42fd-9284-b155767f5c74 down in Southbound
Feb 28 05:01:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:49Z|00076|binding|INFO|Removing iface tap77e0efad-ce ovn-installed in OVS
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.377 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:04:a0 10.100.0.4'], port_security=['fa:16:3e:9c:04:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9e27fde4-3df3-46cf-97ac-88a91baefbc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=77e0efad-ce89-42fd-9284-b155767f5c74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.378 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 77e0efad-ce89-42fd-9284-b155767f5c74 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b unbound from our chassis#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.380 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71984a35-6483-4ac4-a021-6bd1f9989d8b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.381 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7fea34-bbb6-4d05-a548-0c766d3621f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.382 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b namespace which is not needed anymore#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 28 05:01:49 np0005634017 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.304s CPU time.
Feb 28 05:01:49 np0005634017 systemd-machined[209480]: Machine qemu-12-instance-0000000c terminated.
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.507 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting instance files /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.509 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deletion of /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del complete#033[00m
Feb 28 05:01:49 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : haproxy version is 2.8.14-c23fe91
Feb 28 05:01:49 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : path to executable is /usr/sbin/haproxy
Feb 28 05:01:49 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [WARNING]  (256629) : Exiting Master process...
Feb 28 05:01:49 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [WARNING]  (256629) : Exiting Master process...
Feb 28 05:01:49 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [ALERT]    (256629) : Current worker (256631) exited with code 143 (Terminated)
Feb 28 05:01:49 np0005634017 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [WARNING]  (256629) : All workers exited. Exiting... (0)
Feb 28 05:01:49 np0005634017 systemd[1]: libpod-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd.scope: Deactivated successfully.
Feb 28 05:01:49 np0005634017 podman[261042]: 2026-02-28 10:01:49.523473825 +0000 UTC m=+0.053118523 container died 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.542 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance destroyed successfully.#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.542 243456 DEBUG nova.objects.instance [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'resources' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd-userdata-shm.mount: Deactivated successfully.
Feb 28 05:01:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e70290e901e28e6d5eb5c062ad508564d16cc1df6df50d7d37fe37074f9c775c-merged.mount: Deactivated successfully.
Feb 28 05:01:49 np0005634017 podman[261042]: 2026-02-28 10:01:49.56638665 +0000 UTC m=+0.096031358 container cleanup 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:01:49 np0005634017 systemd[1]: libpod-conmon-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd.scope: Deactivated successfully.
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.615 243456 DEBUG nova.virt.libvirt.vif [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1253849242',display_name='tempest-FloatingIPsAssociationTestJSON-server-1253849242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1253849242',id=12,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:00:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-nu7q8nez',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:00:58Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=9e27fde4-3df3-46cf-97ac-88a91baefbc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.616 243456 DEBUG nova.network.os_vif_util [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.617 243456 DEBUG nova.network.os_vif_util [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.617 243456 DEBUG os_vif [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.621 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77e0efad-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.625 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.630 243456 INFO os_vif [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce')#033[00m
Feb 28 05:01:49 np0005634017 podman[261079]: 2026-02-28 10:01:49.635262754 +0000 UTC m=+0.044903492 container remove 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.642 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1141ab66-0b75-438f-9dad-672a7f4c1a92]: (4, ('Sat Feb 28 10:01:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b (0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd)\n0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd\nSat Feb 28 10:01:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b (0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd)\n0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.643 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53a9c2ef-165b-40ec-8977-e1dd822b576a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.644 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:49 np0005634017 kernel: tap71984a35-60: left promiscuous mode
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.651 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a129fc64-6ea1-499b-9d83-fc6a610e97c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.661 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4d0b51-ffdb-4eea-8ca9-872a3ae982ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c47ae017-4b2b-49fd-b3c5-71eef31d47ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.677 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[817c6bf1-603b-43a9-b3f4-a72064043bc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436189, 'reachable_time': 33716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261109, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.681 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:01:49 np0005634017 systemd[1]: run-netns-ovnmeta\x2d71984a35\x2d6483\x2d4ac4\x2da021\x2d6bd1f9989d8b.mount: Deactivated successfully.
Feb 28 05:01:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.681 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2b34f1-89a7-42ed-a767-a20fda2cdba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 721 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.1 MiB/s wr, 260 op/s
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.781 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.781 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating image(s)#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.801 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.821 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.843 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.846 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.847 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.965 243456 INFO nova.virt.libvirt.driver [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deleting instance files /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0_del#033[00m
Feb 28 05:01:49 np0005634017 nova_compute[243452]: 2026-02-28 10:01:49.966 243456 INFO nova.virt.libvirt.driver [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deletion of /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0_del complete#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.008 243456 DEBUG nova.compute.manager [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-unplugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG oslo_concurrency.lockutils [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG oslo_concurrency.lockutils [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG oslo_concurrency.lockutils [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG nova.compute.manager [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] No waiting events found dispatching network-vif-unplugged-77e0efad-ce89-42fd-9284-b155767f5c74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG nova.compute.manager [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-unplugged-77e0efad-ce89-42fd-9284-b155767f5c74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.014 243456 INFO nova.compute.manager [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.015 243456 DEBUG oslo.service.loopingcall [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.015 243456 DEBUG nova.compute.manager [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.015 243456 DEBUG nova.network.neutron [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.081 243456 DEBUG nova.virt.libvirt.imagebackend [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/88971623-4808-4102-a4a7-34a287d8b7fe/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/88971623-4808-4102-a4a7-34a287d8b7fe/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.354 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272895.3534305, f37b722c-8def-4545-a455-39df230540d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.355 243456 INFO nova.compute.manager [-] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.377 243456 DEBUG nova.compute.manager [None req-46c5655a-1a8c-408a-9d29-b8c1fb3547c7 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.406 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "ce94b006-3fde-4285-89f7-1e435e514d3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.407 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.407 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "ce94b006-3fde-4285-89f7-1e435e514d3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.408 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.408 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.410 243456 INFO nova.compute.manager [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Terminating instance#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.412 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "refresh_cache-ce94b006-3fde-4285-89f7-1e435e514d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.413 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquired lock "refresh_cache-ce94b006-3fde-4285-89f7-1e435e514d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.414 243456 DEBUG nova.network.neutron [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:01:50 np0005634017 nova_compute[243452]: 2026-02-28 10:01:50.673 243456 DEBUG nova.network.neutron [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.413 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance shutdown successfully after 2 seconds.#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.422 243456 DEBUG nova.network.neutron [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.441 243456 INFO nova.compute.manager [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 1.43 seconds to deallocate network for instance.#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.502 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.503 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.593 243456 DEBUG nova.network.neutron [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.616 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Releasing lock "refresh_cache-ce94b006-3fde-4285-89f7-1e435e514d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.616 243456 DEBUG nova.compute.manager [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.654 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.716 243456 DEBUG oslo_concurrency.processutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.754 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.755 243456 DEBUG nova.virt.images [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] 88971623-4808-4102-a4a7-34a287d8b7fe was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.759 243456 DEBUG nova.privsep.utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.760 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.762082) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911762127, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1451, "num_deletes": 251, "total_data_size": 2015445, "memory_usage": 2052432, "flush_reason": "Manual Compaction"}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 28 05:01:51 np0005634017 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 05:01:51 np0005634017 NetworkManager[49805]: <info>  [1772272911.7714] device (tap1c6e98f3-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:01:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 608 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 387 op/s
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911777897, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1982253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19589, "largest_seqno": 21039, "table_properties": {"data_size": 1975817, "index_size": 3511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14904, "raw_average_key_size": 20, "raw_value_size": 1962330, "raw_average_value_size": 2655, "num_data_blocks": 159, "num_entries": 739, "num_filter_entries": 739, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272783, "oldest_key_time": 1772272783, "file_creation_time": 1772272911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15860 microseconds, and 3758 cpu microseconds.
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:01:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:51Z|00077|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 05:01:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:51Z|00078|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 05:01:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:51Z|00079|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.777942) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1982253 bytes OK
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.777963) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.783969) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.784008) EVENT_LOG_v1 {"time_micros": 1772272911784001, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.784031) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2008949, prev total WAL file size 2008949, number of live WAL files 2.
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.785466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1935KB)], [47(7106KB)]
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911785523, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9258826, "oldest_snapshot_seqno": -1}
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.794 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.795 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.796 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.812 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b78c2ba-d419-43a9-b8d7-1882df47ddbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:51 np0005634017 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 28 05:01:51 np0005634017 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.547s CPU time.
Feb 28 05:01:51 np0005634017 systemd-machined[209480]: Machine qemu-13-instance-0000000d terminated.
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4419 keys, 7489099 bytes, temperature: kUnknown
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911830223, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7489099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7459055, "index_size": 17902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109375, "raw_average_key_size": 24, "raw_value_size": 7378712, "raw_average_value_size": 1669, "num_data_blocks": 747, "num_entries": 4419, "num_filter_entries": 4419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772272911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.830426) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7489099 bytes
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.832123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.8 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 6.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4933, records dropped: 514 output_compression: NoCompression
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.832141) EVENT_LOG_v1 {"time_micros": 1772272911832132, "job": 24, "event": "compaction_finished", "compaction_time_micros": 44769, "compaction_time_cpu_micros": 13246, "output_level": 6, "num_output_files": 1, "total_output_size": 7489099, "num_input_records": 4933, "num_output_records": 4419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911832866, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911833801, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.784456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:01:51 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17e507ba-bbe8-4ac9-8076-ae991da6a6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.839 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d2aaca2a-70d4-4d8f-98fb-6759c2bb99b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:51 np0005634017 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb 28 05:01:51 np0005634017 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000013.scope: Consumed 12.717s CPU time.
Feb 28 05:01:51 np0005634017 systemd-machined[209480]: Machine qemu-18-instance-00000013 terminated.
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.854 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1968e90f-fd6d-421b-8261-d40fdd433e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.866 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d40d14b-29dd-4b22-9411-8eab6b8b6c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261208, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.875 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aad1ca22-4a18-4cc8-8afe-2f7c70db4d60]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261209, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261209, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.876 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:51 np0005634017 nova_compute[243452]: 2026-02-28 10:01:51.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.881 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.882 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.882 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.882 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:52 np0005634017 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00080|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00081|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.049 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.052 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.067 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[094735d7-02ba-429d-b49d-c9b66fa9b7c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00082|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00083|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00084|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=1)
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00085|if_status|INFO|Not setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down as sb is readonly
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.074 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00086|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00087|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 05:01:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:52Z|00088|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.078 243456 DEBUG nova.compute.manager [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.079 243456 DEBUG oslo_concurrency.lockutils [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.079 243456 DEBUG oslo_concurrency.lockutils [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.080 243456 DEBUG oslo_concurrency.lockutils [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.080 243456 DEBUG nova.compute.manager [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.080 243456 WARNING nova.compute.manager [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuilding.#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.082 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.083 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '5', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.086 243456 INFO nova.virt.libvirt.driver [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance destroyed successfully.#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.087 243456 DEBUG nova.objects.instance [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'resources' on Instance uuid ce94b006-3fde-4285-89f7-1e435e514d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.093 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.093 243456 DEBUG nova.virt.libvirt.vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,
task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:48Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.094 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.094 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.095 243456 DEBUG os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.096 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c6e98f3-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.102 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a757c9e5-4134-41c0-9eca-a7b3fa4fe429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.105 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34c32d26-cc7c-41d8-a2c4-61209806ae11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.122 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.125 243456 INFO os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.130 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c50ecd6a-9f76-4e7e-aee1-873968c00793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.140 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.146 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.149 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29ff280d-e287-40db-9ed1-aecf1f22556f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 14, 'rx_bytes': 826, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 14, 'rx_bytes': 826, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261252, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38fef1c6-a58a-46bf-aa4d-36f7b337cca7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261265, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261265, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.167 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.169 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.170 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.170 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.170 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.171 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.172 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.175 243456 DEBUG nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.176 243456 DEBUG oslo_concurrency.lockutils [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.176 243456 DEBUG oslo_concurrency.lockutils [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.176 243456 DEBUG oslo_concurrency.lockutils [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.177 243456 DEBUG nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] No waiting events found dispatching network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.177 243456 WARNING nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received unexpected event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.177 243456 DEBUG nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-deleted-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.184 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85e392b0-e2c2-4985-8b9d-57410b2b0daf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.206 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae58384-f955-49b7-921c-035440da4d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.209 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a31af515-6037-4df0-956d-4357b08990a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514507087' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.228 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6709ae84-940f-41c7-881c-fbfb6b897d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.230 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.231 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf077049-70c1-4eaa-b244-2d7647b05d64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 16, 'rx_bytes': 826, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 16, 'rx_bytes': 826, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261278, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.253 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.256 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f443bd72-d225-48c6-a5d1-89134c8aeae2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261294, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261294, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.259 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.263 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.279 243456 DEBUG oslo_concurrency.processutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.286 243456 DEBUG nova.compute.provider_tree [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.300 243456 DEBUG nova.scheduler.client.report [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.326 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.354 243456 INFO nova.scheduler.client.report [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Deleted allocations for instance 9e27fde4-3df3-46cf-97ac-88a91baefbc0#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.413 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.542 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.600 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] resizing rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.704 243456 INFO nova.virt.libvirt.driver [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deleting instance files /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e_del#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.705 243456 INFO nova.virt.libvirt.driver [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deletion of /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e_del complete#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.712 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.713 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ensure instance console log exists: /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.713 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.714 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.714 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.715 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.719 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting instance files /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.719 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deletion of /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del complete#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.724 243456 WARNING nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.739 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.739 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.784 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.785 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.786 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.786 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.787 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.788 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.788 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.788 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.789 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.789 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.790 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.790 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.791 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.791 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.791 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.897 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.929 243456 INFO nova.compute.manager [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 1.31 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.931 243456 DEBUG oslo.service.loopingcall [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.932 243456 DEBUG nova.compute.manager [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:01:52 np0005634017 nova_compute[243452]: 2026-02-28 10:01:52.932 243456 DEBUG nova.network.neutron [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.026 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.027 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating image(s)#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.255 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.296 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.328 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.333 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.354 243456 DEBUG nova.network.neutron [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.369 243456 DEBUG nova.network.neutron [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.382 243456 INFO nova.compute.manager [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 0.45 seconds to deallocate network for instance.#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.407 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.408 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.409 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.409 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.435 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.439 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.460 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.460 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.604 243456 DEBUG oslo_concurrency.processutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.671 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450386554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.741 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.757 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.760 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 505 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.0 MiB/s wr, 372 op/s
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.776 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.853 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.853 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ensure instance console log exists: /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.854 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.854 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.854 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.857 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start _get_guest_xml network_info=[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.861 243456 WARNING nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.864 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.865 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.867 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.867 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'vcpu_model' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:53 np0005634017 nova_compute[243452]: 2026-02-28 10:01:53.891 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722264479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.148 243456 DEBUG oslo_concurrency.processutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.154 243456 DEBUG nova.compute.provider_tree [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.173 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.174 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.175 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.175 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.175 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.178 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.178 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.178 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.184 243456 DEBUG nova.scheduler.client.report [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.213 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220485943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.271 243456 INFO nova.scheduler.client.report [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Deleted allocations for instance ce94b006-3fde-4285-89f7-1e435e514d3e#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.285 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.288 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <uuid>9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</uuid>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <name>instance-00000014</name>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdmin275Test-server-1098361722</nova:name>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:52</nova:creationTime>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:user uuid="c1d3b80b39ba4f3392d63b05c85009e2">tempest-ServersAdmin275Test-175914647-project-member</nova:user>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <nova:project uuid="a1e90097927e484eb57dee6ac05b7b47">tempest-ServersAdmin275Test-175914647</nova:project>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <entry name="serial">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <entry name="uuid">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log" append="off"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:54 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:54 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:54 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:54 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.355 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.356 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.356 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Using config drive#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.384 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047387282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.395 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.404 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.429 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.434 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.463 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:54 np0005634017 nova_compute[243452]: 2026-02-28 10:01:54.561 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'keypairs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:01:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364084966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.005 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.006 243456 DEBUG nova.virt.libvirt.vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdmin
TestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:52Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.007 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.008 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.010 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <uuid>c92e965f-2d18-4b78-8b78-7d391039f382</uuid>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <name>instance-0000000d</name>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminTestJSON-server-1293627042</nova:name>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:01:53</nova:creationTime>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <nova:port uuid="1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <entry name="serial">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <entry name="uuid">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk.config">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:3d:6e:bc"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <target dev="tap1c6e98f3-e9"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log" append="off"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:01:55 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:01:55 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:01:55 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:01:55 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.011 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Preparing to wait for external event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.011 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.011 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.012 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.012 243456 DEBUG nova.virt.libvirt.vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:52Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.012 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.013 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.013 243456 DEBUG os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.014 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.014 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.015 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.019 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c6e98f3-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.020 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c6e98f3-e9, col_values=(('external_ids', {'iface-id': '1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:6e:bc', 'vm-uuid': 'c92e965f-2d18-4b78-8b78-7d391039f382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:55 np0005634017 NetworkManager[49805]: <info>  [1772272915.0235] manager: (tap1c6e98f3-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.028 243456 INFO os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.070 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.071 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.071 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:3d:6e:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.072 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Using config drive#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.105 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.132 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'ec2_ids' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.163 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'keypairs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.710 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating config drive at /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.717 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphljzyvrx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 476 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.5 MiB/s wr, 415 op/s
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.846 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphljzyvrx" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.870 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:55 np0005634017 nova_compute[243452]: 2026-02-28 10:01:55.874 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.015 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.017 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting local config drive /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:56 np0005634017 systemd-machined[209480]: New machine qemu-22-instance-00000014.
Feb 28 05:01:56 np0005634017 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.148 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating config drive at /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.155 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpr7x32s8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.285 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpr7x32s8n" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.308 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.311 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.433 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.434 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting local config drive /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config because it was imported into RBD.#033[00m
Feb 28 05:01:56 np0005634017 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 05:01:56 np0005634017 NetworkManager[49805]: <info>  [1772272916.4750] manager: (tap1c6e98f3-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:56Z|00089|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 05:01:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:56Z|00090|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.484 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '7', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.485 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.487 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:01:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:56Z|00091|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 05:01:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:56Z|00092|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:56 np0005634017 systemd-machined[209480]: New machine qemu-23-instance-0000000d.
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.509 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38735c8b-33ad-4f3b-9d61-dd819993a3d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:56 np0005634017 systemd[1]: Started Virtual Machine qemu-23-instance-0000000d.
Feb 28 05:01:56 np0005634017 systemd-udevd[261852]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:01:56 np0005634017 NetworkManager[49805]: <info>  [1772272916.5306] device (tap1c6e98f3-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:01:56 np0005634017 NetworkManager[49805]: <info>  [1772272916.5320] device (tap1c6e98f3-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.547 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d54c0105-3b92-4815-bd04-a42d2f942417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.553 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72ceab56-ed5b-4a1b-8514-db695b519014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.588 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8791e657-0de9-4d6e-8add-65528a314a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfba8f4-4a93-4fbc-b2d7-c44f074551a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 868, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 868, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261863, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf2e5f5-2800-41d0-8493-9a4d4bd47eb1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261865, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261865, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.629 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.632 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.633 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.633 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:01:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.633 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.672 243456 DEBUG nova.compute.manager [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG oslo_concurrency.lockutils [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG oslo_concurrency.lockutils [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG oslo_concurrency.lockutils [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:56 np0005634017 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG nova.compute.manager [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Processing event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.051 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for c92e965f-2d18-4b78-8b78-7d391039f382 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.052 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272917.0510361, c92e965f-2d18-4b78-8b78-7d391039f382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.052 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.054 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.058 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.064 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance spawned successfully.#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.065 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.084 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.094 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.095 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.095 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.095 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.096 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.096 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.102 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.103 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272917.0541003, c92e965f-2d18-4b78-8b78-7d391039f382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.103 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:01:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:57Z|00093|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.154 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.178 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.182 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.186 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272917.0580935, c92e965f-2d18-4b78-8b78-7d391039f382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.186 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:01:57Z|00094|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.212 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.217 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.254 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.255 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.255 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.307 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.668 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272902.666035, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.668 243456 INFO nova.compute.manager [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:01:57 np0005634017 nova_compute[243452]: 2026-02-28 10:01:57.689 243456 DEBUG nova.compute.manager [None req-aeba82f6-aad4-4fa2-81e6-532fea495248 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.6 MiB/s wr, 347 op/s
Feb 28 05:01:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:01:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.091 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.091 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272918.0906162, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.095 243456 DEBUG nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.095 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.099 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance spawned successfully.#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.099 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.128 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.133 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.133 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.134 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.134 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.135 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.135 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.140 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.172 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.173 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272918.0916939, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.173 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Started (Lifecycle Event)#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.202 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.206 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.223 243456 DEBUG nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.250 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.279 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.280 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.280 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.338 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:58 np0005634017 nova_compute[243452]: 2026-02-28 10:01:58.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.326 243456 DEBUG nova.compute.manager [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.327 243456 DEBUG oslo_concurrency.lockutils [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.327 243456 DEBUG oslo_concurrency.lockutils [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.327 243456 DEBUG oslo_concurrency.lockutils [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.328 243456 DEBUG nova.compute.manager [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.328 243456 WARNING nova.compute.manager [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state rebuilding.#033[00m
Feb 28 05:01:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.3 MiB/s wr, 285 op/s
Feb 28 05:01:59 np0005634017 nova_compute[243452]: 2026-02-28 10:01:59.954 243456 INFO nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Rebuilding instance#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.636 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'trusted_certs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.661 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.712 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_requests' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.724 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.736 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.746 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.763 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.766 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:02:00 np0005634017 nova_compute[243452]: 2026-02-28 10:02:00.950 243456 INFO nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Rebuilding instance#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.455 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.474 243456 DEBUG nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.535 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.551 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.566 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'resources' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.578 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.593 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:02:01 np0005634017 nova_compute[243452]: 2026-02-28 10:02:01.597 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:02:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 390 op/s
Feb 28 05:02:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:03 np0005634017 nova_compute[243452]: 2026-02-28 10:02:03.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 300 op/s
Feb 28 05:02:04 np0005634017 nova_compute[243452]: 2026-02-28 10:02:04.346 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:04 np0005634017 nova_compute[243452]: 2026-02-28 10:02:04.539 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272909.5375903, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:04 np0005634017 nova_compute[243452]: 2026-02-28 10:02:04.539 243456 INFO nova.compute.manager [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:04 np0005634017 nova_compute[243452]: 2026-02-28 10:02:04.589 243456 DEBUG nova.compute.manager [None req-4f1e2d07-e8e6-42c6-ba63-396e02d329c2 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:05 np0005634017 nova_compute[243452]: 2026-02-28 10:02:05.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:05 np0005634017 nova_compute[243452]: 2026-02-28 10:02:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Feb 28 05:02:06 np0005634017 nova_compute[243452]: 2026-02-28 10:02:06.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:06 np0005634017 nova_compute[243452]: 2026-02-28 10:02:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:06 np0005634017 nova_compute[243452]: 2026-02-28 10:02:06.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:02:06 np0005634017 nova_compute[243452]: 2026-02-28 10:02:06.359 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:02:06 np0005634017 nova_compute[243452]: 2026-02-28 10:02:06.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:02:06 np0005634017 nova_compute[243452]: 2026-02-28 10:02:06.360 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:02:07 np0005634017 nova_compute[243452]: 2026-02-28 10:02:07.068 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272912.067587, ce94b006-3fde-4285-89f7-1e435e514d3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:07 np0005634017 nova_compute[243452]: 2026-02-28 10:02:07.069 243456 INFO nova.compute.manager [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:07 np0005634017 nova_compute[243452]: 2026-02-28 10:02:07.089 243456 DEBUG nova.compute.manager [None req-677d476b-b1eb-4a15-a0ce-5ea7c3fbfa9f - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 488 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 964 KiB/s wr, 175 op/s
Feb 28 05:02:08 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 28 05:02:08 np0005634017 nova_compute[243452]: 2026-02-28 10:02:08.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.013 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.166 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.167 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.168 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.168 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.168 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.337 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.338 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.339 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:09Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:02:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:09Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:02:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 488 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 437 KiB/s wr, 148 op/s
Feb 28 05:02:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/300958680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.844 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.964 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.974 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.975 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.980 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.980 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.984 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:09 np0005634017 nova_compute[243452]: 2026-02-28 10:02:09.984 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.028 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.175 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.176 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.804935125634074GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.177 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.177 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.261 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c92e965f-2d18-4b78-8b78-7d391039f382 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.261 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 08147934-b9df-4154-8d1f-3fd318973eb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance d2d9bd29-453d-4abd-a3de-c1a9603cfc11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.400 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.808 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:02:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054022208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.962 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.966 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:02:10 np0005634017 nova_compute[243452]: 2026-02-28 10:02:10.982 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:02:11 np0005634017 nova_compute[243452]: 2026-02-28 10:02:11.024 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:02:11 np0005634017 nova_compute[243452]: 2026-02-28 10:02:11.024 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:11 np0005634017 nova_compute[243452]: 2026-02-28 10:02:11.645 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:02:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 533 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 214 op/s
Feb 28 05:02:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:13 np0005634017 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 05:02:13 np0005634017 NetworkManager[49805]: <info>  [1772272933.1189] device (tap1c6e98f3-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:02:13 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:13Z|00095|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:13Z|00096|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 05:02:13 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:13Z|00097|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.139 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '8', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.143 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.146 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.161 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cef11176-7f1f-4c22-937c-3f20406c163b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:13 np0005634017 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 28 05:02:13 np0005634017 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000000d.scope: Consumed 12.162s CPU time.
Feb 28 05:02:13 np0005634017 systemd-machined[209480]: Machine qemu-23-instance-0000000d terminated.
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.188 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60605e97-55a1-4363-86dd-b66d37983924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.193 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b837b479-f1d6-4125-a27a-243faf6fb1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.235 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[33094a30-ccec-4a33-88c1-62ab9ff9fbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c00f7d-1dd8-4ae1-9d01-7280215bf21f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 868, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 868, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262006, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2aedc342-39c8-47e5-983f-91c8578fc9c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262007, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262007, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.265 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.271 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.272 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.272 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.273 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 550 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 163 op/s
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.812 243456 DEBUG nova.compute.manager [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.813 243456 DEBUG oslo_concurrency.lockutils [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.814 243456 DEBUG oslo_concurrency.lockutils [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.814 243456 DEBUG oslo_concurrency.lockutils [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.815 243456 DEBUG nova.compute.manager [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.815 243456 WARNING nova.compute.manager [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state rebuilding.#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.824 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.831 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.837 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.838 243456 DEBUG nova.virt.libvirt.vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:59Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.839 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.841 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.841 243456 DEBUG os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c6e98f3-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:13 np0005634017 nova_compute[243452]: 2026-02-28 10:02:13.850 243456 INFO os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')#033[00m
Feb 28 05:02:14 np0005634017 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 28 05:02:14 np0005634017 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 13.544s CPU time.
Feb 28 05:02:14 np0005634017 systemd-machined[209480]: Machine qemu-22-instance-00000014 terminated.
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.531 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting instance files /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.532 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deletion of /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del complete#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.665 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.683 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.691 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.719 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.720 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating image(s)#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.752 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.784 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.817 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.822 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.907 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.908 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.909 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.909 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.933 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:14 np0005634017 nova_compute[243452]: 2026-02-28 10:02:14.948 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 28 05:02:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 28 05:02:14 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.345 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.425 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting instance files /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.426 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deletion of /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del complete#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.433 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.519 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.520 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ensure instance console log exists: /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.521 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.521 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.521 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.523 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start _get_guest_xml network_info=[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.530 243456 WARNING nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.536 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.536 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.539 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.540 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.541 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.541 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.541 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.543 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.543 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.543 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.544 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.544 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.544 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'vcpu_model' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.562 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.585 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.588 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating image(s)#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.620 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.648 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.671 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.675 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.749 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.751 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.752 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.752 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 475 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 764 KiB/s rd, 5.9 MiB/s wr, 202 op/s
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.787 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:15 np0005634017 nova_compute[243452]: 2026-02-28 10:02:15.792 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:02:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043529309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.090 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.116 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.120 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.135 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.209 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] resizing rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.290 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.291 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ensure instance console log exists: /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.291 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.292 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.292 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.293 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.298 243456 WARNING nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.302 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.302 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.306 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.306 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.306 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.315 243456 DEBUG nova.compute.manager [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG oslo_concurrency.lockutils [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG oslo_concurrency.lockutils [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG oslo_concurrency.lockutils [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG nova.compute.manager [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.317 243456 WARNING nova.compute.manager [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.326 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:02:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241410563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.655 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.659 243456 DEBUG nova.virt.libvirt.vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdmin
TestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:14Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.661 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.663 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.668 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <uuid>c92e965f-2d18-4b78-8b78-7d391039f382</uuid>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <name>instance-0000000d</name>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdminTestJSON-server-1293627042</nova:name>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:02:15</nova:creationTime>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <nova:port uuid="1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <entry name="serial">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <entry name="uuid">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk.config">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:3d:6e:bc"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <target dev="tap1c6e98f3-e9"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log" append="off"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:02:16 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:02:16 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:02:16 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:02:16 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.671 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Preparing to wait for external event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.672 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.673 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.673 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.675 243456 DEBUG nova.virt.libvirt.vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdmin
TestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:14Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.676 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.678 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.679 243456 DEBUG os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.682 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.683 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.687 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.687 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c6e98f3-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.688 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c6e98f3-e9, col_values=(('external_ids', {'iface-id': '1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:6e:bc', 'vm-uuid': 'c92e965f-2d18-4b78-8b78-7d391039f382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:16 np0005634017 NetworkManager[49805]: <info>  [1772272936.6899] manager: (tap1c6e98f3-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.694 243456 INFO os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.756 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.757 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.757 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:3d:6e:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.758 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Using config drive#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.791 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.816 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'ec2_ids' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.850 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'keypairs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:02:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995584168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.946 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.966 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:16 np0005634017 nova_compute[243452]: 2026-02-28 10:02:16.970 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:02:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643995517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.522 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.526 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <uuid>9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</uuid>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <name>instance-00000014</name>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAdmin275Test-server-1098361722</nova:name>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:02:16</nova:creationTime>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:user uuid="c1d3b80b39ba4f3392d63b05c85009e2">tempest-ServersAdmin275Test-175914647-project-member</nova:user>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <nova:project uuid="a1e90097927e484eb57dee6ac05b7b47">tempest-ServersAdmin275Test-175914647</nova:project>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <entry name="serial">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <entry name="uuid">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log" append="off"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:02:17 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:02:17 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:02:17 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:02:17 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:02:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.621 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.621 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.622 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Using config drive#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.649 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.731 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating config drive at /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.737 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_tdgc112 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.769 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 469 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 7.1 MiB/s wr, 217 op/s
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.842 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'keypairs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.875 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_tdgc112" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.918 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:17 np0005634017 nova_compute[243452]: 2026-02-28 10:02:17.923 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 28 05:02:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 28 05:02:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.067 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.068 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting local config drive /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config because it was imported into RBD.#033[00m
Feb 28 05:02:18 np0005634017 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 05:02:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:18Z|00098|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 05:02:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:18Z|00099|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:02:18 np0005634017 NetworkManager[49805]: <info>  [1772272938.1211] manager: (tap1c6e98f3-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.116 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating config drive at /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.124 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wgz08g8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.128 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '9', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.130 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.131 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:02:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:18Z|00100|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 05:02:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:18Z|00101|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 05:02:18 np0005634017 systemd-udevd[262607]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e60fc78-84f2-4eaf-bd33-474a09fb60a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:18 np0005634017 NetworkManager[49805]: <info>  [1772272938.1665] device (tap1c6e98f3-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:02:18 np0005634017 NetworkManager[49805]: <info>  [1772272938.1672] device (tap1c6e98f3-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:02:18 np0005634017 systemd-machined[209480]: New machine qemu-24-instance-0000000d.
Feb 28 05:02:18 np0005634017 systemd[1]: Started Virtual Machine qemu-24-instance-0000000d.
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[28beec0d-109e-46c7-8cf5-f6f6dadea6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.187 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[68c5f544-3b01-4030-911e-91bf95effe73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.213 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae140b0-a4f2-4d8f-9e4d-d47feb6fbb5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9baabd-8252-4001-91bd-4041b325c709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 22, 'rx_bytes': 868, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 22, 'rx_bytes': 868, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262624, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81ddbd36-7995-485b-becb-b948d7acc311]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262625, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262625, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.254 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.255 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.255 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.256 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.264 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wgz08g8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.292 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.295 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.390 243456 DEBUG nova.compute.manager [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.390 243456 DEBUG oslo_concurrency.lockutils [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.391 243456 DEBUG oslo_concurrency.lockutils [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.391 243456 DEBUG oslo_concurrency.lockutils [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.391 243456 DEBUG nova.compute.manager [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Processing event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.417 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.418 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting local config drive /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config because it was imported into RBD.#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:18 np0005634017 systemd-machined[209480]: New machine qemu-25-instance-00000014.
Feb 28 05:02:18 np0005634017 systemd[1]: Started Virtual Machine qemu-25-instance-00000014.
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.682 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for c92e965f-2d18-4b78-8b78-7d391039f382 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.683 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272938.6816688, c92e965f-2d18-4b78-8b78-7d391039f382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.684 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Started (Lifecycle Event)#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.688 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.694 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.699 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance spawned successfully.#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.700 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.715 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.719 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.740 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.741 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.741 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.741 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.742 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.742 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.753 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.753 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272938.6817753, c92e965f-2d18-4b78-8b78-7d391039f382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.754 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.783 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.788 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272938.692803, c92e965f-2d18-4b78-8b78-7d391039f382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.788 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.817 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.819 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.859 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.897 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.898 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.898 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:02:18 np0005634017 nova_compute[243452]: 2026-02-28 10:02:18.961 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.072 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.073 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272939.0720258, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.073 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.075 243456 DEBUG nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.076 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.079 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance spawned successfully.#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.080 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.103 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.109 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.109 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.110 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.110 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.110 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.111 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.133 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272939.0748594, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Started (Lifecycle Event)#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.158 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.161 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.167 243456 DEBUG nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.193 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.232 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.232 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.233 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:02:19 np0005634017 nova_compute[243452]: 2026-02-28 10:02:19.298 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 469 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 4.2 MiB/s wr, 173 op/s
Feb 28 05:02:20 np0005634017 podman[262763]: 2026-02-28 10:02:20.129040991 +0000 UTC m=+0.059289616 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 05:02:20 np0005634017 podman[262762]: 2026-02-28 10:02:20.151583294 +0000 UTC m=+0.086083168 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:02:20 np0005634017 nova_compute[243452]: 2026-02-28 10:02:20.518 243456 DEBUG nova.compute.manager [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:20 np0005634017 nova_compute[243452]: 2026-02-28 10:02:20.518 243456 DEBUG oslo_concurrency.lockutils [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:20 np0005634017 nova_compute[243452]: 2026-02-28 10:02:20.518 243456 DEBUG oslo_concurrency.lockutils [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:20 np0005634017 nova_compute[243452]: 2026-02-28 10:02:20.519 243456 DEBUG oslo_concurrency.lockutils [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:20 np0005634017 nova_compute[243452]: 2026-02-28 10:02:20.519 243456 DEBUG nova.compute.manager [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:20 np0005634017 nova_compute[243452]: 2026-02-28 10:02:20.519 243456 WARNING nova.compute.manager [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state None.#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.229 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.230 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.230 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.230 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.231 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.232 243456 INFO nova.compute.manager [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Terminating instance#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.233 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "refresh_cache-9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.233 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquired lock "refresh_cache-9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.233 243456 DEBUG nova.network.neutron [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.442 243456 DEBUG nova.network.neutron [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.674 243456 INFO nova.compute.manager [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Terminating instance#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.675 243456 DEBUG nova.compute.manager [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 484 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 368 op/s
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.903 243456 DEBUG nova.network.neutron [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:21 np0005634017 kernel: tap42dc1876-90 (unregistering): left promiscuous mode
Feb 28 05:02:21 np0005634017 NetworkManager[49805]: <info>  [1772272941.9161] device (tap42dc1876-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.920 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Releasing lock "refresh_cache-9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.920 243456 DEBUG nova.compute.manager [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:02:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:21Z|00102|binding|INFO|Releasing lport 42dc1876-90c4-4b52-b301-1c90b71ff297 from this chassis (sb_readonly=0)
Feb 28 05:02:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:21Z|00103|binding|INFO|Setting lport 42dc1876-90c4-4b52-b301-1c90b71ff297 down in Southbound
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:21Z|00104|binding|INFO|Removing iface tap42dc1876-90 ovn-installed in OVS
Feb 28 05:02:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.935 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:8a:2e 10.100.0.4'], port_security=['fa:16:3e:0e:8a:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd2d9bd29-453d-4abd-a3de-c1a9603cfc11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=42dc1876-90c4-4b52-b301-1c90b71ff297) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:02:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.936 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 42dc1876-90c4-4b52-b301-1c90b71ff297 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis#033[00m
Feb 28 05:02:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.938 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:02:21 np0005634017 nova_compute[243452]: 2026-02-28 10:02:21.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[114d428e-c655-4390-b5fc-21df3af0a1a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.974 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65b5b36b-971e-4b87-b36d-c6bdc79adbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.978 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[99b16035-57bf-42cf-a826-553e932e6db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:21 np0005634017 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb 28 05:02:21 np0005634017 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 13.354s CPU time.
Feb 28 05:02:21 np0005634017 systemd-machined[209480]: Machine qemu-20-instance-00000012 terminated.
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.000 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3186bc-af9f-44cc-b3bd-48dc1e376c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50e9025f-1de4-4af3-814f-cd281d2ed49d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 24, 'rx_bytes': 868, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 24, 'rx_bytes': 868, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262819, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[469213cb-7b5f-4bd5-8cdd-beb8af0221d4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262820, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262820, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.030 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.035 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.035 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.036 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.036 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.109 243456 INFO nova.virt.libvirt.driver [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance destroyed successfully.#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.110 243456 DEBUG nova.objects.instance [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid d2d9bd29-453d-4abd-a3de-c1a9603cfc11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.131 243456 DEBUG nova.virt.libvirt.vif [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:01:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1694487247',display_name='tempest-ServersAdminTestJSON-server-1694487247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1694487247',id=18,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-vmrf35mh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:35Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=d2d9bd29-453d-4abd-a3de-c1a9603cfc11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.131 243456 DEBUG nova.network.os_vif_util [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.133 243456 DEBUG nova.network.os_vif_util [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.133 243456 DEBUG os_vif [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.137 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42dc1876-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.144 243456 INFO os_vif [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90')#033[00m
Feb 28 05:02:22 np0005634017 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 28 05:02:22 np0005634017 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000014.scope: Consumed 3.430s CPU time.
Feb 28 05:02:22 np0005634017 systemd-machined[209480]: Machine qemu-25-instance-00000014 terminated.
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.339 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.340 243456 DEBUG nova.objects.instance [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'resources' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.804 243456 INFO nova.virt.libvirt.driver [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deleting instance files /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_del#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.805 243456 INFO nova.virt.libvirt.driver [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deletion of /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_del complete#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.814 243456 INFO nova.virt.libvirt.driver [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting instance files /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.814 243456 INFO nova.virt.libvirt.driver [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deletion of /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del complete#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.836 243456 DEBUG nova.compute.manager [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-unplugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.836 243456 DEBUG oslo_concurrency.lockutils [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.837 243456 DEBUG oslo_concurrency.lockutils [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.837 243456 DEBUG oslo_concurrency.lockutils [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.837 243456 DEBUG nova.compute.manager [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] No waiting events found dispatching network-vif-unplugged-42dc1876-90c4-4b52-b301-1c90b71ff297 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.838 243456 DEBUG nova.compute.manager [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-unplugged-42dc1876-90c4-4b52-b301-1c90b71ff297 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.909 243456 INFO nova.compute.manager [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.909 243456 DEBUG oslo.service.loopingcall [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.910 243456 DEBUG nova.compute.manager [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.910 243456 DEBUG nova.network.neutron [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.915 243456 INFO nova.compute.manager [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.915 243456 DEBUG oslo.service.loopingcall [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.916 243456 DEBUG nova.compute.manager [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:02:22 np0005634017 nova_compute[243452]: 2026-02-28 10:02:22.916 243456 DEBUG nova.network.neutron [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.084 243456 DEBUG nova.network.neutron [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.106 243456 DEBUG nova.network.neutron [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.126 243456 INFO nova.compute.manager [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 0.21 seconds to deallocate network for instance.#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.185 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.186 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.340 243456 DEBUG oslo_concurrency.processutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 484 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.9 MiB/s wr, 382 op/s
Feb 28 05:02:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162428933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.845 243456 DEBUG oslo_concurrency.processutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.853 243456 DEBUG nova.compute.provider_tree [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.892 243456 DEBUG nova.scheduler.client.report [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.925 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:23 np0005634017 nova_compute[243452]: 2026-02-28 10:02:23.971 243456 INFO nova.scheduler.client.report [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Deleted allocations for instance 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.079 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.120 243456 DEBUG nova.network.neutron [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.150 243456 INFO nova.compute.manager [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 1.24 seconds to deallocate network for instance.#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.196 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.197 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.246 243456 DEBUG nova.compute.manager [req-d1e94399-a069-427b-bea9-6acaca919792 req-955b58d3-1dce-4396-a7b1-78fb114716be 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-deleted-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.291 243456 DEBUG oslo_concurrency.processutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170047623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.851 243456 DEBUG oslo_concurrency.processutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.857 243456 DEBUG nova.compute.provider_tree [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.881 243456 DEBUG nova.scheduler.client.report [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.907 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.946 243456 INFO nova.scheduler.client.report [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance d2d9bd29-453d-4abd-a3de-c1a9603cfc11#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.976 243456 DEBUG nova.compute.manager [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.977 243456 DEBUG oslo_concurrency.lockutils [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.978 243456 DEBUG oslo_concurrency.lockutils [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.978 243456 DEBUG oslo_concurrency.lockutils [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.979 243456 DEBUG nova.compute.manager [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] No waiting events found dispatching network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:24 np0005634017 nova_compute[243452]: 2026-02-28 10:02:24.979 243456 WARNING nova.compute.manager [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received unexpected event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.029 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.627 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.629 243456 INFO nova.compute.manager [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Terminating instance#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.630 243456 DEBUG nova.compute.manager [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:02:25 np0005634017 kernel: tap1b6bf464-31 (unregistering): left promiscuous mode
Feb 28 05:02:25 np0005634017 NetworkManager[49805]: <info>  [1772272945.6754] device (tap1b6bf464-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:25Z|00105|binding|INFO|Releasing lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 from this chassis (sb_readonly=0)
Feb 28 05:02:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:25Z|00106|binding|INFO|Setting lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 down in Southbound
Feb 28 05:02:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:25Z|00107|binding|INFO|Removing iface tap1b6bf464-31 ovn-installed in OVS
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.690 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:02:52 10.100.0.10'], port_security=['fa:16:3e:12:02:52 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '08147934-b9df-4154-8d1f-3fd318973eb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1b6bf464-31de-4504-9af4-59a95d6d9c05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.692 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1b6bf464-31de-4504-9af4-59a95d6d9c05 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.694 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.708 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe438c-5b10-443e-ae71-e595eea7e8e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.738 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f077583e-9f74-476e-bcea-caa0155da9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.742 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7510b7e8-1978-48ae-a119-2047023e8e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:25 np0005634017 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 28 05:02:25 np0005634017 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.426s CPU time.
Feb 28 05:02:25 np0005634017 systemd-machined[209480]: Machine qemu-17-instance-00000011 terminated.
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.766 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d2af6416-73fa-4429-bd56-7b7abaa5d7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.784 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[144c5cfa-bdb1-4bc5-af61-08966b1259b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 26, 'rx_bytes': 868, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 26, 'rx_bytes': 868, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262930, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 401 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 368 op/s
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.798 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae09cea8-1d43-42c7-8f18-a49d22c4954b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262931, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262931, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.800 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.807 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.807 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.808 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.808 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.861 243456 INFO nova.virt.libvirt.driver [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance destroyed successfully.#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.861 243456 DEBUG nova.objects.instance [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid 08147934-b9df-4154-8d1f-3fd318973eb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.881 243456 DEBUG nova.virt.libvirt.vif [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:01:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2143913214',display_name='tempest-ServersAdminTestJSON-server-2143913214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2143913214',id=17,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-zs0r3m3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:20Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=08147934-b9df-4154-8d1f-3fd318973eb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.881 243456 DEBUG nova.network.os_vif_util [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.882 243456 DEBUG nova.network.os_vif_util [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.882 243456 DEBUG os_vif [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.884 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b6bf464-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:25 np0005634017 nova_compute[243452]: 2026-02-28 10:02:25.890 243456 INFO os_vif [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31')#033[00m
Feb 28 05:02:26 np0005634017 nova_compute[243452]: 2026-02-28 10:02:26.182 243456 INFO nova.virt.libvirt.driver [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deleting instance files /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6_del#033[00m
Feb 28 05:02:26 np0005634017 nova_compute[243452]: 2026-02-28 10:02:26.184 243456 INFO nova.virt.libvirt.driver [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deletion of /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6_del complete#033[00m
Feb 28 05:02:26 np0005634017 nova_compute[243452]: 2026-02-28 10:02:26.271 243456 INFO nova.compute.manager [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:02:26 np0005634017 nova_compute[243452]: 2026-02-28 10:02:26.272 243456 DEBUG oslo.service.loopingcall [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:02:26 np0005634017 nova_compute[243452]: 2026-02-28 10:02:26.273 243456 DEBUG nova.compute.manager [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:02:26 np0005634017 nova_compute[243452]: 2026-02-28 10:02:26.273 243456 DEBUG nova.network.neutron [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.128 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-unplugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.129 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.130 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.130 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.130 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] No waiting events found dispatching network-vif-unplugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.131 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-unplugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.131 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.132 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.132 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.133 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.133 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] No waiting events found dispatching network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.134 243456 WARNING nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received unexpected event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:02:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 28 05:02:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 28 05:02:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.783 243456 DEBUG nova.network.neutron [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 316 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 366 op/s
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.851 243456 INFO nova.compute.manager [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 1.58 seconds to deallocate network for instance.#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.873 243456 DEBUG nova.compute.manager [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-deleted-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.874 243456 INFO nova.compute.manager [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Neutron deleted interface 1b6bf464-31de-4504-9af4-59a95d6d9c05; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.874 243456 DEBUG nova.network.neutron [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.946 243456 DEBUG nova.compute.manager [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Detach interface failed, port_id=1b6bf464-31de-4504-9af4-59a95d6d9c05, reason: Instance 08147934-b9df-4154-8d1f-3fd318973eb6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.958 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:27 np0005634017 nova_compute[243452]: 2026-02-28 10:02:27.959 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.065 243456 DEBUG oslo_concurrency.processutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4074060817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.623 243456 DEBUG oslo_concurrency.processutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.631 243456 DEBUG nova.compute.provider_tree [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.657 243456 DEBUG nova.scheduler.client.report [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.729 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.809 243456 INFO nova.scheduler.client.report [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance 08147934-b9df-4154-8d1f-3fd318973eb6#033[00m
Feb 28 05:02:28 np0005634017 nova_compute[243452]: 2026-02-28 10:02:28.947 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:02:29
Feb 28 05:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data']
Feb 28 05:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:02:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 316 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.9 MiB/s wr, 358 op/s
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.462 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.464 243456 INFO nova.compute.manager [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Terminating instance#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.465 243456 DEBUG nova.compute.manager [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:02:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:02:30 np0005634017 kernel: tapeaa5f652-63 (unregistering): left promiscuous mode
Feb 28 05:02:30 np0005634017 NetworkManager[49805]: <info>  [1772272950.5618] device (tapeaa5f652-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:30Z|00108|binding|INFO|Releasing lport eaa5f652-63c2-4a9b-aae0-eec299565322 from this chassis (sb_readonly=0)
Feb 28 05:02:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:30Z|00109|binding|INFO|Setting lport eaa5f652-63c2-4a9b-aae0-eec299565322 down in Southbound
Feb 28 05:02:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:30Z|00110|binding|INFO|Removing iface tapeaa5f652-63 ovn-installed in OVS
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.580 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c8:7d 10.100.0.13'], port_security=['fa:16:3e:f2:c8:7d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89ced16e-cc50-41d5-bfcb-fa5af85c14c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=eaa5f652-63c2-4a9b-aae0-eec299565322) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.583 156681 INFO neutron.agent.ovn.metadata.agent [-] Port eaa5f652-63c2-4a9b-aae0-eec299565322 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.585 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acdf87f2-ab9b-40d7-88a3-d3183a0fcba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.615 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfbf9eb-8ec5-4f46-8458-dfa3149f6704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.618 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb687f0-fa90-4676-8912-64696200060d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:30 np0005634017 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb 28 05:02:30 np0005634017 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 14.401s CPU time.
Feb 28 05:02:30 np0005634017 systemd-machined[209480]: Machine qemu-14-instance-0000000e terminated.
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.645 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f01c86a-5c6f-4e99-9bd8-da1f4f6bc1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.661 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74e17975-2d5e-46fe-bc71-debc9567a3c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 28, 'rx_bytes': 868, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 28, 'rx_bytes': 868, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262996, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf67e39-62ee-43a8-a984-3efbdcbab3f8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262997, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262997, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.686 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.687 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.687 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.689 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.695 243456 INFO nova.virt.libvirt.driver [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance destroyed successfully.#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.696 243456 DEBUG nova.objects.instance [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.711 243456 DEBUG nova.virt.libvirt.vif [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1953624900',display_name='tempest-ServersAdminTestJSON-server-1953624900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1953624900',id=14,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-y1ejzv4y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:06Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=89ced16e-cc50-41d5-bfcb-fa5af85c14c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.711 243456 DEBUG nova.network.os_vif_util [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.712 243456 DEBUG nova.network.os_vif_util [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.712 243456 DEBUG os_vif [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.714 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa5f652-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.719 243456 INFO os_vif [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63')#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.834 243456 DEBUG nova.compute.manager [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-unplugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.834 243456 DEBUG oslo_concurrency.lockutils [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.835 243456 DEBUG oslo_concurrency.lockutils [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.835 243456 DEBUG oslo_concurrency.lockutils [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.836 243456 DEBUG nova.compute.manager [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] No waiting events found dispatching network-vif-unplugged-eaa5f652-63c2-4a9b-aae0-eec299565322 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.836 243456 DEBUG nova.compute.manager [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-unplugged-eaa5f652-63c2-4a9b-aae0-eec299565322 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.970 243456 INFO nova.virt.libvirt.driver [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deleting instance files /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_del
Feb 28 05:02:30 np0005634017 nova_compute[243452]: 2026-02-28 10:02:30.972 243456 INFO nova.virt.libvirt.driver [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deletion of /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_del complete
Feb 28 05:02:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:31Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:02:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:31Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 05:02:31 np0005634017 nova_compute[243452]: 2026-02-28 10:02:31.215 243456 INFO nova.compute.manager [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 28 05:02:31 np0005634017 nova_compute[243452]: 2026-02-28 10:02:31.215 243456 DEBUG oslo.service.loopingcall [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:02:31 np0005634017 nova_compute[243452]: 2026-02-28 10:02:31.216 243456 DEBUG nova.compute.manager [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:02:31 np0005634017 nova_compute[243452]: 2026-02-28 10:02:31.216 243456 DEBUG nova.network.neutron [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:02:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 293 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 205 op/s
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.218 243456 DEBUG nova.network.neutron [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.237 243456 INFO nova.compute.manager [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 1.02 seconds to deallocate network for instance.
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.293 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.294 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.356 243456 DEBUG oslo_concurrency.processutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:02:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3126105542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.917 243456 DEBUG oslo_concurrency.processutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.923 243456 DEBUG nova.compute.provider_tree [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.943 243456 DEBUG nova.scheduler.client.report [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:02:32 np0005634017 nova_compute[243452]: 2026-02-28 10:02:32.973 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.004 243456 INFO nova.scheduler.client.report [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance 89ced16e-cc50-41d5-bfcb-fa5af85c14c8
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.075 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.392 243456 DEBUG nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.393 243456 DEBUG oslo_concurrency.lockutils [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.393 243456 DEBUG oslo_concurrency.lockutils [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.393 243456 DEBUG oslo_concurrency.lockutils [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.394 243456 DEBUG nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] No waiting events found dispatching network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.394 243456 WARNING nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received unexpected event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 for instance with vm_state deleted and task_state None.
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.395 243456 DEBUG nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-deleted-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:02:33 np0005634017 nova_compute[243452]: 2026-02-28 10:02:33.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 270 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 509 KiB/s rd, 2.5 MiB/s wr, 188 op/s
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.476 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.478 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.478 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.479 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.479 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.481 243456 INFO nova.compute.manager [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Terminating instance
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.483 243456 DEBUG nova.compute.manager [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 05:02:34 np0005634017 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 05:02:34 np0005634017 NetworkManager[49805]: <info>  [1772272954.5471] device (tap1c6e98f3-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:34Z|00111|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 05:02:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:34Z|00112|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:02:34Z|00113|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.563 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '10', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.564 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.565 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.566 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5157edad-4be3-4836-b3f3-26e3cb6bd496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.566 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 namespace which is not needed anymore
Feb 28 05:02:34 np0005634017 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 28 05:02:34 np0005634017 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000000d.scope: Consumed 11.799s CPU time.
Feb 28 05:02:34 np0005634017 systemd-machined[209480]: Machine qemu-24-instance-0000000d terminated.
Feb 28 05:02:34 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : haproxy version is 2.8.14-c23fe91
Feb 28 05:02:34 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : path to executable is /usr/sbin/haproxy
Feb 28 05:02:34 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [WARNING]  (257372) : Exiting Master process...
Feb 28 05:02:34 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [ALERT]    (257372) : Current worker (257374) exited with code 143 (Terminated)
Feb 28 05:02:34 np0005634017 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [WARNING]  (257372) : All workers exited. Exiting... (0)
Feb 28 05:02:34 np0005634017 systemd[1]: libpod-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a.scope: Deactivated successfully.
Feb 28 05:02:34 np0005634017 podman[263074]: 2026-02-28 10:02:34.66796979 +0000 UTC m=+0.041309161 container died f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a-userdata-shm.mount: Deactivated successfully.
Feb 28 05:02:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e22472ecc95bc20b98dcc0462d4cd4b77133e1b85e4f473f2fb73cbcea480a12-merged.mount: Deactivated successfully.
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.719 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.720 243456 DEBUG nova.objects.instance [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:02:34 np0005634017 podman[263074]: 2026-02-28 10:02:34.723593482 +0000 UTC m=+0.096932823 container cleanup f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:02:34 np0005634017 systemd[1]: libpod-conmon-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a.scope: Deactivated successfully.
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.740 243456 DEBUG nova.virt.libvirt.vif [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:02:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:02:20Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.741 243456 DEBUG nova.network.os_vif_util [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.742 243456 DEBUG nova.network.os_vif_util [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.742 243456 DEBUG os_vif [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.744 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c6e98f3-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.750 243456 INFO os_vif [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 05:02:34 np0005634017 podman[263116]: 2026-02-28 10:02:34.806355716 +0000 UTC m=+0.053433172 container remove f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.811 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[86899cfd-c84d-444c-b6ff-eeb1b2a9217b]: (4, ('Sat Feb 28 10:02:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 (f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a)\nf60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a\nSat Feb 28 10:02:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 (f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a)\nf60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.812 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cff66e8e-14b9-4e04-b1c1-8e1c2b2d1be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.813 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:34 np0005634017 kernel: tapce5045ea-10: left promiscuous mode
Feb 28 05:02:34 np0005634017 nova_compute[243452]: 2026-02-28 10:02:34.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.828 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7287aca4-8ea4-44fa-9ce6-91bfdfed5373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51ae0c0b-a5ce-4f9e-84e3-45668d3b7bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa28296c-4acb-4f5e-a5ae-702448dbef02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb18736-5188-49ca-9191-2281950e2e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437020, 'reachable_time': 39897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263149, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.860 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:02:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.860 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[12b19a97-de0d-4938-b06a-fc8c52f5f81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:02:34 np0005634017 systemd[1]: run-netns-ovnmeta\x2dce5045ea\x2d1437\x2d4fd1\x2dbdb3\x2d3fe83470fb24.mount: Deactivated successfully.
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.071 243456 INFO nova.virt.libvirt.driver [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting instance files /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.071 243456 INFO nova.virt.libvirt.driver [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deletion of /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del complete#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.112 243456 INFO nova.compute.manager [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.113 243456 DEBUG oslo.service.loopingcall [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.113 243456 DEBUG nova.compute.manager [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.113 243456 DEBUG nova.network.neutron [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.495 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.495 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.495 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.496 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.497 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.498 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.499 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.499 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.500 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.500 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.501 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.501 243456 WARNING nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:02:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 210 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 158 op/s
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.791 243456 DEBUG nova.network.neutron [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.815 243456 INFO nova.compute.manager [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 0.70 seconds to deallocate network for instance.#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.877 243456 DEBUG nova.compute.manager [req-a8017b11-95c5-4dd1-a5bb-9bc97c8f1bd3 req-10c43368-35c3-4401-82ff-c1fef46694a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-deleted-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.881 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.882 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:35 np0005634017 nova_compute[243452]: 2026-02-28 10:02:35.949 243456 DEBUG oslo_concurrency.processutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288166464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:36 np0005634017 nova_compute[243452]: 2026-02-28 10:02:36.503 243456 DEBUG oslo_concurrency.processutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:36 np0005634017 nova_compute[243452]: 2026-02-28 10:02:36.509 243456 DEBUG nova.compute.provider_tree [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:02:36 np0005634017 nova_compute[243452]: 2026-02-28 10:02:36.525 243456 DEBUG nova.scheduler.client.report [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:02:36 np0005634017 nova_compute[243452]: 2026-02-28 10:02:36.552 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:36 np0005634017 nova_compute[243452]: 2026-02-28 10:02:36.579 243456 INFO nova.scheduler.client.report [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance c92e965f-2d18-4b78-8b78-7d391039f382#033[00m
Feb 28 05:02:36 np0005634017 nova_compute[243452]: 2026-02-28 10:02:36.643 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:37 np0005634017 nova_compute[243452]: 2026-02-28 10:02:37.107 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272942.1053064, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:37 np0005634017 nova_compute[243452]: 2026-02-28 10:02:37.107 243456 INFO nova.compute.manager [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:37 np0005634017 nova_compute[243452]: 2026-02-28 10:02:37.136 243456 DEBUG nova.compute.manager [None req-aec7cbeb-01a9-4f0b-b152-bda289859736 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:37 np0005634017 nova_compute[243452]: 2026-02-28 10:02:37.339 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272942.3376737, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:37 np0005634017 nova_compute[243452]: 2026-02-28 10:02:37.339 243456 INFO nova.compute.manager [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:37 np0005634017 nova_compute[243452]: 2026-02-28 10:02:37.520 243456 DEBUG nova.compute.manager [None req-cf1556eb-6ba8-481f-8a4f-517b97695f69 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 182 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 435 KiB/s rd, 2.5 MiB/s wr, 148 op/s
Feb 28 05:02:38 np0005634017 nova_compute[243452]: 2026-02-28 10:02:38.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:02:38 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:02:38 np0005634017 podman[263316]: 2026-02-28 10:02:38.944062044 +0000 UTC m=+0.059541313 container create f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:02:38 np0005634017 systemd[1]: Started libpod-conmon-f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594.scope.
Feb 28 05:02:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:02:39 np0005634017 podman[263316]: 2026-02-28 10:02:39.017466495 +0000 UTC m=+0.132945774 container init f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:02:39 np0005634017 podman[263316]: 2026-02-28 10:02:38.925298307 +0000 UTC m=+0.040777616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:02:39 np0005634017 podman[263316]: 2026-02-28 10:02:39.022093595 +0000 UTC m=+0.137572874 container start f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:02:39 np0005634017 podman[263316]: 2026-02-28 10:02:39.025659285 +0000 UTC m=+0.141138544 container attach f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:02:39 np0005634017 adoring_panini[263332]: 167 167
Feb 28 05:02:39 np0005634017 systemd[1]: libpod-f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594.scope: Deactivated successfully.
Feb 28 05:02:39 np0005634017 podman[263316]: 2026-02-28 10:02:39.026776147 +0000 UTC m=+0.142255406 container died f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:02:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a2b2f5ec45ff294f5bb9a1bd756d8d62490082c0c0467165e361c5359be4bc9c-merged.mount: Deactivated successfully.
Feb 28 05:02:39 np0005634017 podman[263316]: 2026-02-28 10:02:39.065166395 +0000 UTC m=+0.180645664 container remove f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:02:39 np0005634017 systemd[1]: libpod-conmon-f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594.scope: Deactivated successfully.
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.23379448 +0000 UTC m=+0.067397604 container create 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:02:39 np0005634017 systemd[1]: Started libpod-conmon-8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641.scope.
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.205359541 +0000 UTC m=+0.038962675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:02:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:02:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.337949805 +0000 UTC m=+0.171552929 container init 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.345194138 +0000 UTC m=+0.178797262 container start 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.349508749 +0000 UTC m=+0.183111873 container attach 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:02:39 np0005634017 magical_herschel[263372]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:02:39 np0005634017 magical_herschel[263372]: --> All data devices are unavailable
Feb 28 05:02:39 np0005634017 nova_compute[243452]: 2026-02-28 10:02:39.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:39 np0005634017 systemd[1]: libpod-8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641.scope: Deactivated successfully.
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.775742798 +0000 UTC m=+0.609345922 container died 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:02:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 182 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Feb 28 05:02:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454-merged.mount: Deactivated successfully.
Feb 28 05:02:39 np0005634017 podman[263356]: 2026-02-28 10:02:39.833624764 +0000 UTC m=+0.667227848 container remove 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:02:39 np0005634017 systemd[1]: libpod-conmon-8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641.scope: Deactivated successfully.
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.261607102 +0000 UTC m=+0.054324107 container create a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:02:40 np0005634017 systemd[1]: Started libpod-conmon-a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af.scope.
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.237638919 +0000 UTC m=+0.030355944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:02:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.361574449 +0000 UTC m=+0.154291494 container init a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.369597404 +0000 UTC m=+0.162314409 container start a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.374366638 +0000 UTC m=+0.167083663 container attach a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:02:40 np0005634017 nervous_elgamal[263483]: 167 167
Feb 28 05:02:40 np0005634017 systemd[1]: libpod-a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af.scope: Deactivated successfully.
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.377376752 +0000 UTC m=+0.170093757 container died a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:02:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3b863e144c0f8a9a7e27d576c18c967301c226f48804f6340a5709c3b0ea5690-merged.mount: Deactivated successfully.
Feb 28 05:02:40 np0005634017 podman[263466]: 2026-02-28 10:02:40.422133819 +0000 UTC m=+0.214850804 container remove a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:02:40 np0005634017 systemd[1]: libpod-conmon-a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af.scope: Deactivated successfully.
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.581300249 +0000 UTC m=+0.044962244 container create 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:02:40 np0005634017 systemd[1]: Started libpod-conmon-013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390.scope.
Feb 28 05:02:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:02:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002967955940844235 of space, bias 1.0, pg target 0.08903867822532706 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024910402878611446 of space, bias 1.0, pg target 0.7473120863583433 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.707674241480498e-07 of space, bias 4.0, pg target 0.0011649209089776597 quantized to 16 (current 16)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:02:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.639680018 +0000 UTC m=+0.103342013 container init 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.653018523 +0000 UTC m=+0.116680518 container start 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.657520989 +0000 UTC m=+0.121182984 container attach 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.566549305 +0000 UTC m=+0.030211320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:02:40 np0005634017 nova_compute[243452]: 2026-02-28 10:02:40.861 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272945.8599164, 08147934-b9df-4154-8d1f-3fd318973eb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:40 np0005634017 nova_compute[243452]: 2026-02-28 10:02:40.862 243456 INFO nova.compute.manager [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:40 np0005634017 nova_compute[243452]: 2026-02-28 10:02:40.882 243456 DEBUG nova.compute.manager [None req-eeafceef-2488-4e49-8dec-95253ebfb150 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]: {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:    "0": [
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:        {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "devices": [
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "/dev/loop3"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            ],
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_name": "ceph_lv0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_size": "21470642176",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "name": "ceph_lv0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "tags": {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cluster_name": "ceph",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.crush_device_class": "",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.encrypted": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.objectstore": "bluestore",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osd_id": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.type": "block",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.vdo": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.with_tpm": "0"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            },
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "type": "block",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "vg_name": "ceph_vg0"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:        }
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:    ],
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:    "1": [
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:        {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "devices": [
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "/dev/loop4"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            ],
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_name": "ceph_lv1",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_size": "21470642176",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "name": "ceph_lv1",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "tags": {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cluster_name": "ceph",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.crush_device_class": "",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.encrypted": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.objectstore": "bluestore",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osd_id": "1",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.type": "block",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.vdo": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.with_tpm": "0"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            },
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "type": "block",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "vg_name": "ceph_vg1"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:        }
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:    ],
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:    "2": [
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:        {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "devices": [
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "/dev/loop5"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            ],
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_name": "ceph_lv2",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_size": "21470642176",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "name": "ceph_lv2",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "tags": {
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.cluster_name": "ceph",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.crush_device_class": "",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.encrypted": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.objectstore": "bluestore",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osd_id": "2",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.type": "block",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.vdo": "0",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:                "ceph.with_tpm": "0"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            },
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "type": "block",
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:            "vg_name": "ceph_vg2"
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:        }
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]:    ]
Feb 28 05:02:40 np0005634017 laughing_hofstadter[263523]: }
Feb 28 05:02:40 np0005634017 systemd[1]: libpod-013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390.scope: Deactivated successfully.
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.932927282 +0000 UTC m=+0.396589317 container died 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:02:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4-merged.mount: Deactivated successfully.
Feb 28 05:02:40 np0005634017 podman[263507]: 2026-02-28 10:02:40.982318339 +0000 UTC m=+0.445980384 container remove 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:02:40 np0005634017 systemd[1]: libpod-conmon-013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390.scope: Deactivated successfully.
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.434745713 +0000 UTC m=+0.046542848 container create efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:02:41 np0005634017 systemd[1]: Started libpod-conmon-efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb.scope.
Feb 28 05:02:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.420164954 +0000 UTC m=+0.031962109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.519958886 +0000 UTC m=+0.131756081 container init efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.527746745 +0000 UTC m=+0.139543920 container start efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.532839918 +0000 UTC m=+0.144637083 container attach efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 28 05:02:41 np0005634017 laughing_tharp[263623]: 167 167
Feb 28 05:02:41 np0005634017 systemd[1]: libpod-efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb.scope: Deactivated successfully.
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.535892203 +0000 UTC m=+0.147689408 container died efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:02:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-14e9277274e80614e84bb088bc5927bbf0ff48f172497f856814f31a281bbddd-merged.mount: Deactivated successfully.
Feb 28 05:02:41 np0005634017 podman[263606]: 2026-02-28 10:02:41.582971535 +0000 UTC m=+0.194768670 container remove efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:02:41 np0005634017 systemd[1]: libpod-conmon-efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb.scope: Deactivated successfully.
Feb 28 05:02:41 np0005634017 podman[263647]: 2026-02-28 10:02:41.743311268 +0000 UTC m=+0.048205695 container create 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:02:41 np0005634017 systemd[1]: Started libpod-conmon-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope.
Feb 28 05:02:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 05:02:41 np0005634017 podman[263647]: 2026-02-28 10:02:41.719560881 +0000 UTC m=+0.024455278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:02:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:02:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:02:41 np0005634017 podman[263647]: 2026-02-28 10:02:41.857914746 +0000 UTC m=+0.162809233 container init 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:02:41 np0005634017 podman[263647]: 2026-02-28 10:02:41.867267489 +0000 UTC m=+0.172161886 container start 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:02:41 np0005634017 podman[263647]: 2026-02-28 10:02:41.870775737 +0000 UTC m=+0.175670214 container attach 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:02:41 np0005634017 nova_compute[243452]: 2026-02-28 10:02:41.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:42 np0005634017 lvm[263742]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:02:42 np0005634017 lvm[263743]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:02:42 np0005634017 lvm[263742]: VG ceph_vg0 finished
Feb 28 05:02:42 np0005634017 lvm[263743]: VG ceph_vg1 finished
Feb 28 05:02:42 np0005634017 lvm[263745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:02:42 np0005634017 lvm[263745]: VG ceph_vg2 finished
Feb 28 05:02:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:42 np0005634017 goofy_dubinsky[263664]: {}
Feb 28 05:02:42 np0005634017 systemd[1]: libpod-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope: Deactivated successfully.
Feb 28 05:02:42 np0005634017 podman[263647]: 2026-02-28 10:02:42.754336538 +0000 UTC m=+1.059230955 container died 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:02:42 np0005634017 systemd[1]: libpod-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope: Consumed 1.270s CPU time.
Feb 28 05:02:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6-merged.mount: Deactivated successfully.
Feb 28 05:02:42 np0005634017 podman[263647]: 2026-02-28 10:02:42.808042326 +0000 UTC m=+1.112936753 container remove 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:02:42 np0005634017 systemd[1]: libpod-conmon-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope: Deactivated successfully.
Feb 28 05:02:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:02:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:02:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:02:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:02:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:02:43 np0005634017 nova_compute[243452]: 2026-02-28 10:02:43.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.1 MiB/s wr, 81 op/s
Feb 28 05:02:43 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:02:44 np0005634017 nova_compute[243452]: 2026-02-28 10:02:44.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:44 np0005634017 nova_compute[243452]: 2026-02-28 10:02:44.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:44.809 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:02:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:44.811 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:02:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:02:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451699820' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:02:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:02:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451699820' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:02:45 np0005634017 nova_compute[243452]: 2026-02-28 10:02:45.691 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272950.690185, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:45 np0005634017 nova_compute[243452]: 2026-02-28 10:02:45.691 243456 INFO nova.compute.manager [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:45 np0005634017 nova_compute[243452]: 2026-02-28 10:02:45.715 243456 DEBUG nova.compute.manager [None req-3b475f93-03d6-44a9-8f3a-a3f81a314046 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 45 KiB/s wr, 44 op/s
Feb 28 05:02:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Feb 28 05:02:48 np0005634017 nova_compute[243452]: 2026-02-28 10:02:48.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:49 np0005634017 nova_compute[243452]: 2026-02-28 10:02:49.717 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272954.7157624, c92e965f-2d18-4b78-8b78-7d391039f382 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:02:49 np0005634017 nova_compute[243452]: 2026-02-28 10:02:49.718 243456 INFO nova.compute.manager [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:02:49 np0005634017 nova_compute[243452]: 2026-02-28 10:02:49.740 243456 DEBUG nova.compute.manager [None req-9459368a-1629-42fd-b264-ee953fcbb798 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:02:49 np0005634017 nova_compute[243452]: 2026-02-28 10:02:49.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 852 B/s wr, 12 op/s
Feb 28 05:02:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:50.813 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:02:51 np0005634017 podman[263785]: 2026-02-28 10:02:51.128049266 +0000 UTC m=+0.062615490 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:02:51 np0005634017 podman[263784]: 2026-02-28 10:02:51.165353053 +0000 UTC m=+0.098996841 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:02:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 852 B/s wr, 12 op/s
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.495 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.496 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.513 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.584 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.585 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.595 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.596 243456 INFO nova.compute.claims [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:02:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:52 np0005634017 nova_compute[243452]: 2026-02-28 10:02:52.715 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:02:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121478681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.308 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.316 243456 DEBUG nova.compute.provider_tree [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.334 243456 DEBUG nova.scheduler.client.report [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.359 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.360 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.408 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.409 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.442 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.461 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.547 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.549 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.550 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Creating image(s)#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.584 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.620 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.656 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.661 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.735 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.737 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.738 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.738 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.778 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:53 np0005634017 nova_compute[243452]: 2026-02-28 10:02:53.783 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 540aa538-279f-4645-a7a1-03fa5c859440_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.004 243456 DEBUG nova.policy [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '678df28a33b147768bd6e1e5d3b17ccf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ee8273ac001494c973c44a7dd357180', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.070 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 540aa538-279f-4645-a7a1-03fa5c859440_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.138 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] resizing rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.224 243456 DEBUG nova.objects.instance [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lazy-loading 'migration_context' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.237 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.237 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Ensure instance console log exists: /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.238 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.238 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:54 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.238 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:55 np0005634017 nova_compute[243452]: 2026-02-28 10:02:54.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:55 np0005634017 nova_compute[243452]: 2026-02-28 10:02:55.202 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Successfully created port: f4b2f0ef-feda-437c-96c7-92a0645bceb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:02:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 171 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 457 KiB/s wr, 24 op/s
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.086 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Successfully updated port: f4b2f0ef-feda-437c-96c7-92a0645bceb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.107 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.108 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.108 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.250 243456 DEBUG nova.compute.manager [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.251 243456 DEBUG nova.compute.manager [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing instance network info cache due to event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.251 243456 DEBUG oslo_concurrency.lockutils [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:02:57 np0005634017 nova_compute[243452]: 2026-02-28 10:02:57.605 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:02:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:02:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:02:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:02:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:02:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:02:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.823 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.845 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.846 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance network_info: |[{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.848 243456 DEBUG oslo_concurrency.lockutils [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.848 243456 DEBUG nova.network.neutron [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.855 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start _get_guest_xml network_info=[{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.864 243456 WARNING nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.872 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.873 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.876 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.877 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.877 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.877 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.878 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.878 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:02:58 np0005634017 nova_compute[243452]: 2026-02-28 10:02:58.883 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:02:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568348272' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:02:59 np0005634017 nova_compute[243452]: 2026-02-28 10:02:59.416 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:02:59 np0005634017 nova_compute[243452]: 2026-02-28 10:02:59.446 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:02:59 np0005634017 nova_compute[243452]: 2026-02-28 10:02:59.450 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:02:59 np0005634017 nova_compute[243452]: 2026-02-28 10:02:59.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:02:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:03:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:03:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231140852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.020 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.023 243456 DEBUG nova.virt.libvirt.vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:02:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-165218390',id=22,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ee8273ac001494c973c44a7dd357180',ramdisk_id='',reservation_id='r-4t9kpqze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegati
veTestJSON-1398433179',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:53Z,user_data=None,user_id='678df28a33b147768bd6e1e5d3b17ccf',uuid=540aa538-279f-4645-a7a1-03fa5c859440,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.024 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converting VIF {"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.026 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.028 243456 DEBUG nova.objects.instance [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lazy-loading 'pci_devices' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.049 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <uuid>540aa538-279f-4645-a7a1-03fa5c859440</uuid>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <name>instance-00000016</name>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905</nova:name>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:02:58</nova:creationTime>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:user uuid="678df28a33b147768bd6e1e5d3b17ccf">tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member</nova:user>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:project uuid="2ee8273ac001494c973c44a7dd357180">tempest-FloatingIPsAssociationNegativeTestJSON-1398433179</nova:project>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <nova:port uuid="f4b2f0ef-feda-437c-96c7-92a0645bceb9">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <entry name="serial">540aa538-279f-4645-a7a1-03fa5c859440</entry>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <entry name="uuid">540aa538-279f-4645-a7a1-03fa5c859440</entry>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/540aa538-279f-4645-a7a1-03fa5c859440_disk">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/540aa538-279f-4645-a7a1-03fa5c859440_disk.config">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:bd:c8:f4"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <target dev="tapf4b2f0ef-fe"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/console.log" append="off"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:03:00 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:03:00 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:03:00 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:03:00 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.051 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Preparing to wait for external event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.051 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.052 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.052 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.053 243456 DEBUG nova.virt.libvirt.vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:02:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-165218390',id=22,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ee8273ac001494c973c44a7dd357180',ramdisk_id='',reservation_id='r-4t9kpqze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:53Z,user_data=None,user_id='678df28a33b147768bd6e1e5d3b17ccf',uuid=540aa538-279f-4645-a7a1-03fa5c859440,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.054 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converting VIF {"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.055 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.056 243456 DEBUG os_vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.058 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.059 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.065 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4b2f0ef-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.066 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4b2f0ef-fe, col_values=(('external_ids', {'iface-id': 'f4b2f0ef-feda-437c-96c7-92a0645bceb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:c8:f4', 'vm-uuid': '540aa538-279f-4645-a7a1-03fa5c859440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:00 np0005634017 NetworkManager[49805]: <info>  [1772272980.0696] manager: (tapf4b2f0ef-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.082 243456 INFO os_vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe')#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.135 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.137 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.137 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] No VIF found with MAC fa:16:3e:bd:c8:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.138 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Using config drive#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.173 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.817 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Creating config drive at /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.824 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwooamopn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.952 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwooamopn" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.974 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:00 np0005634017 nova_compute[243452]: 2026-02-28 10:03:00.978 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config 540aa538-279f-4645-a7a1-03fa5c859440_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.103 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config 540aa538-279f-4645-a7a1-03fa5c859440_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.104 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deleting local config drive /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config because it was imported into RBD.#033[00m
Feb 28 05:03:01 np0005634017 kernel: tapf4b2f0ef-fe: entered promiscuous mode
Feb 28 05:03:01 np0005634017 NetworkManager[49805]: <info>  [1772272981.1652] manager: (tapf4b2f0ef-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Feb 28 05:03:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:01Z|00114|binding|INFO|Claiming lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 for this chassis.
Feb 28 05:03:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:01Z|00115|binding|INFO|f4b2f0ef-feda-437c-96c7-92a0645bceb9: Claiming fa:16:3e:bd:c8:f4 10.100.0.9
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.188 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c8:f4 10.100.0.9'], port_security=['fa:16:3e:bd:c8:f4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '540aa538-279f-4645-a7a1-03fa5c859440', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ee8273ac001494c973c44a7dd357180', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26c27b27-9608-4c57-8427-9c45b7a72eae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c643f549-0af1-46f6-9870-09f9117f13fa, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f4b2f0ef-feda-437c-96c7-92a0645bceb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.190 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f4b2f0ef-feda-437c-96c7-92a0645bceb9 in datapath a11c3342-4f74-40c1-a9f3-ae18f9be9d19 bound to our chassis#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.191 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a11c3342-4f74-40c1-a9f3-ae18f9be9d19#033[00m
Feb 28 05:03:01 np0005634017 systemd-machined[209480]: New machine qemu-26-instance-00000016.
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.200 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5f26b837-6aa8-4cfe-8dda-89c1b3a2557c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.202 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa11c3342-41 in ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.204 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa11c3342-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe5630d-20a8-4a73-b889-ad09e11a5969]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b882321-29b8-4d95-9690-0e19b06774d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 systemd[1]: Started Virtual Machine qemu-26-instance-00000016.
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.220 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6659914a-65af-40a0-8503-20be4145eee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:01Z|00116|binding|INFO|Setting lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 ovn-installed in OVS
Feb 28 05:03:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:01Z|00117|binding|INFO|Setting lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 up in Southbound
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 systemd-udevd[264154]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f15a9024-aae9-4e3c-ad91-79efc7adda3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.241 243456 DEBUG nova.network.neutron [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated VIF entry in instance network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.242 243456 DEBUG nova.network.neutron [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:01 np0005634017 NetworkManager[49805]: <info>  [1772272981.2536] device (tapf4b2f0ef-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:03:01 np0005634017 NetworkManager[49805]: <info>  [1772272981.2547] device (tapf4b2f0ef-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.260 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a095e8f2-1848-4edf-853b-2c913243db63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 NetworkManager[49805]: <info>  [1772272981.2662] manager: (tapa11c3342-40): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.265 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf4d1da-6eea-4d7a-856f-33ca5b286a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.271 243456 DEBUG oslo_concurrency.lockutils [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.293 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af209c46-d80a-40d9-becf-6c9c14605161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.296 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7131946c-4fbc-4575-abbe-4b1717c00061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 NetworkManager[49805]: <info>  [1772272981.3194] device (tapa11c3342-40): carrier: link connected
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.325 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ac325c59-fc35-48e3-ae78-b1241511b6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e0023c-e637-47a8-9afc-2310583faca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa11c3342-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:d3:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448860, 'reachable_time': 17153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264184, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.356 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[980b4fae-4170-4510-9df4-70a494cb8454]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:d346'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448860, 'tstamp': 448860}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264185, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6951edd-48a1-42fd-8052-ccdfbd044b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa11c3342-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:d3:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448860, 'reachable_time': 17153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264186, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae02b448-740b-4590-a10f-e34783b9ac4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9314078c-4016-4ecf-9389-701a9220a1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.444 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa11c3342-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.444 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.445 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa11c3342-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:01 np0005634017 NetworkManager[49805]: <info>  [1772272981.4476] manager: (tapa11c3342-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 28 05:03:01 np0005634017 kernel: tapa11c3342-40: entered promiscuous mode
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.451 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa11c3342-40, col_values=(('external_ids', {'iface-id': '8314f3c1-f6f7-4153-871f-5211928b3006'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:01Z|00118|binding|INFO|Releasing lport 8314f3c1-f6f7-4153-871f-5211928b3006 from this chassis (sb_readonly=0)
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.466 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaa159a-bd46-4ce6-adcf-329ce636d083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.467 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-a11c3342-4f74-40c1-a9f3-ae18f9be9d19
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.pid.haproxy
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID a11c3342-4f74-40c1-a9f3-ae18f9be9d19
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:03:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.469 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'env', 'PROCESS_TAG=haproxy-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.480 243456 DEBUG nova.compute.manager [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.480 243456 DEBUG oslo_concurrency.lockutils [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.480 243456 DEBUG oslo_concurrency.lockutils [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.481 243456 DEBUG oslo_concurrency.lockutils [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:01 np0005634017 nova_compute[243452]: 2026-02-28 10:03:01.481 243456 DEBUG nova.compute.manager [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Processing event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:03:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:03:01 np0005634017 podman[264216]: 2026-02-28 10:03:01.855796595 +0000 UTC m=+0.073054872 container create 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:03:01 np0005634017 podman[264216]: 2026-02-28 10:03:01.819366732 +0000 UTC m=+0.036625039 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:03:01 np0005634017 systemd[1]: Started libpod-conmon-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7.scope.
Feb 28 05:03:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3537f5f8d55cebfe7fda6f58cc62d454caeb57ea64eac96f07c6f8abf46f3522/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:01 np0005634017 podman[264216]: 2026-02-28 10:03:01.950616888 +0000 UTC m=+0.167875225 container init 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:03:01 np0005634017 podman[264216]: 2026-02-28 10:03:01.958297173 +0000 UTC m=+0.175555450 container start 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:03:01 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : New worker (264237) forked
Feb 28 05:03:01 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : Loading success.
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.399 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272982.3982081, 540aa538-279f-4645-a7a1-03fa5c859440 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.399 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Started (Lifecycle Event)#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.402 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.407 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.411 243456 INFO nova.virt.libvirt.driver [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance spawned successfully.#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.413 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.526 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.534 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.539 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.540 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.540 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.541 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.542 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.543 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.576 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272982.3986197, 540aa538-279f-4645-a7a1-03fa5c859440 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.600 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.606 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272982.405823, 540aa538-279f-4645-a7a1-03fa5c859440 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.607 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:03:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.612 243456 INFO nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.613 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.624 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.628 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.678 243456 INFO nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 10.12 seconds to build instance.#033[00m
Feb 28 05:03:02 np0005634017 nova_compute[243452]: 2026-02-28 10:03:02.710 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.377 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.378 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.394 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.493 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.494 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.503 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.503 243456 INFO nova.compute.claims [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.569 243456 DEBUG nova.compute.manager [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.570 243456 DEBUG oslo_concurrency.lockutils [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.570 243456 DEBUG oslo_concurrency.lockutils [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.571 243456 DEBUG oslo_concurrency.lockutils [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.571 243456 DEBUG nova.compute.manager [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] No waiting events found dispatching network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.572 243456 WARNING nova.compute.manager [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received unexpected event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:03:03 np0005634017 nova_compute[243452]: 2026-02-28 10:03:03.663 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 28 05:03:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532414934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.210 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.215 243456 DEBUG nova.compute.provider_tree [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.231 243456 DEBUG nova.scheduler.client.report [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.260 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.261 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.313 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.314 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.334 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.350 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.425 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.427 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.427 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Creating image(s)#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.443 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.463 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.483 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.486 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.528 243456 DEBUG nova.policy [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2973b4a88c3d4417a732901954bb6c6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd177a65e6274abe9b6091a2f34c319c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.550 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.551 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.551 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.552 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.570 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.574 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff148d2a-2dba-45c2-b726-78423f3ccedc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.778 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff148d2a-2dba-45c2-b726-78423f3ccedc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.842 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] resizing rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.928 243456 DEBUG nova.objects.instance [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lazy-loading 'migration_context' on Instance uuid ff148d2a-2dba-45c2-b726-78423f3ccedc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.949 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.949 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Ensure instance console log exists: /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.950 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.950 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:04 np0005634017 nova_compute[243452]: 2026-02-28 10:03:04.950 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:05 np0005634017 nova_compute[243452]: 2026-02-28 10:03:05.069 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 220 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 2.7 MiB/s wr, 78 op/s
Feb 28 05:03:05 np0005634017 nova_compute[243452]: 2026-02-28 10:03:05.918 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Successfully created port: 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.727 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Successfully updated port: 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.747 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.748 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquired lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.749 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.806 243456 DEBUG nova.compute.manager [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.807 243456 DEBUG nova.compute.manager [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing instance network info cache due to event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:03:06 np0005634017 nova_compute[243452]: 2026-02-28 10:03:06.807 243456 DEBUG oslo_concurrency.lockutils [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:07 np0005634017 nova_compute[243452]: 2026-02-28 10:03:07.025 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:07 np0005634017 nova_compute[243452]: 2026-02-28 10:03:07.027 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:07 np0005634017 nova_compute[243452]: 2026-02-28 10:03:07.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:07 np0005634017 nova_compute[243452]: 2026-02-28 10:03:07.765 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:03:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.344 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:03:08 np0005634017 NetworkManager[49805]: <info>  [1772272988.4142] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Feb 28 05:03:08 np0005634017 NetworkManager[49805]: <info>  [1772272988.4148] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:08Z|00119|binding|INFO|Releasing lport 8314f3c1-f6f7-4153-871f-5211928b3006 from this chassis (sb_readonly=0)
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.470 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.573 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.573 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.574 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.574 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.739 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.765 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Releasing lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.765 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance network_info: |[{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.766 243456 DEBUG oslo_concurrency.lockutils [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.767 243456 DEBUG nova.network.neutron [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.772 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start _get_guest_xml network_info=[{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.778 243456 WARNING nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.782 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.783 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.794 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.795 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.796 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.796 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.797 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.797 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.797 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.798 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.798 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.798 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:03:08 np0005634017 nova_compute[243452]: 2026-02-28 10:03:08.802 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.149 243456 DEBUG nova.compute.manager [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.149 243456 DEBUG nova.compute.manager [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing instance network info cache due to event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.150 243456 DEBUG oslo_concurrency.lockutils [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:03:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446439942' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.359 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.388 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.392 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 28 05:03:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:03:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877928499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.950 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.952 243456 DEBUG nova.virt.libvirt.vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1118743636',display_name='tempest-ServersTestJSON-server-1118743636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1118743636',id=23,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJCbaARgz2MiKHmjHHxTu8VMbZv/za78QmoiFrm1b2dtjLbi0sfz0k8qm0DAA2vOdrkhv2c4afOR1apvGQL9ZXBiG6t7d6JyxwbG3gSOY/aDpdVVM9RXPjLMeE9AYf3zg==',key_name='tempest-keypair-178399307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd177a65e6274abe9b6091a2f34c319c',ramdisk_id='',reservation_id='r-1mssv15c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-595608049',owner_user_name='tempest-ServersTestJSON-595608049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2973b4a88c3d4417a732901954bb6c6a',uuid=ff148d2a-2dba-45c2-b726-78423f3ccedc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.952 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converting VIF {"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.953 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.954 243456 DEBUG nova.objects.instance [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lazy-loading 'pci_devices' on Instance uuid ff148d2a-2dba-45c2-b726-78423f3ccedc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <uuid>ff148d2a-2dba-45c2-b726-78423f3ccedc</uuid>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <name>instance-00000017</name>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-1118743636</nova:name>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:03:08</nova:creationTime>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:user uuid="2973b4a88c3d4417a732901954bb6c6a">tempest-ServersTestJSON-595608049-project-member</nova:user>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:project uuid="dd177a65e6274abe9b6091a2f34c319c">tempest-ServersTestJSON-595608049</nova:project>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <nova:port uuid="86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <entry name="serial">ff148d2a-2dba-45c2-b726-78423f3ccedc</entry>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <entry name="uuid">ff148d2a-2dba-45c2-b726-78423f3ccedc</entry>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ff148d2a-2dba-45c2-b726-78423f3ccedc_disk">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e8:ef:ce"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <target dev="tap86a5b3dc-16"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/console.log" append="off"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:03:09 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:03:09 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:03:09 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:03:09 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.971 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Preparing to wait for external event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.971 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.972 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.972 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.973 243456 DEBUG nova.virt.libvirt.vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1118743636',display_name='tempest-ServersTestJSON-server-1118743636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1118743636',id=23,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJCbaARgz2MiKHmjHHxTu8VMbZv/za78QmoiFrm1b2dtjLbi0sfz0k8qm0DAA2vOdrkhv2c4afOR1apvGQL9ZXBiG6t7d6JyxwbG3gSOY/aDpdVVM9RXPjLMeE9AYf3zg==',key_name='tempest-keypair-178399307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd177a65e6274abe9b6091a2f34c319c',ramdisk_id='',reservation_id='r-1mssv15c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-595608049',owner_user_name='tempest-ServersTestJSON-595608049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2973b4a88c3d4417a732901954bb6c6a',uuid=ff148d2a-2dba-45c2-b726-78423f3ccedc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.973 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converting VIF {"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.974 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.974 243456 DEBUG os_vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.975 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.976 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86a5b3dc-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86a5b3dc-16, col_values=(('external_ids', {'iface-id': '86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:ef:ce', 'vm-uuid': 'ff148d2a-2dba-45c2-b726-78423f3ccedc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:09 np0005634017 NetworkManager[49805]: <info>  [1772272989.9816] manager: (tap86a5b3dc-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:09 np0005634017 nova_compute[243452]: 2026-02-28 10:03:09.990 243456 INFO os_vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16')#033[00m
Feb 28 05:03:10 np0005634017 nova_compute[243452]: 2026-02-28 10:03:10.042 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:03:10 np0005634017 nova_compute[243452]: 2026-02-28 10:03:10.043 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:03:10 np0005634017 nova_compute[243452]: 2026-02-28 10:03:10.043 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] No VIF found with MAC fa:16:3e:e8:ef:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:03:10 np0005634017 nova_compute[243452]: 2026-02-28 10:03:10.043 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Using config drive#033[00m
Feb 28 05:03:10 np0005634017 nova_compute[243452]: 2026-02-28 10:03:10.061 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:03:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4834 writes, 21K keys, 4834 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4834 writes, 4834 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1449 writes, 6513 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 9.07 MB, 0.02 MB/s#012Interval WAL: 1449 writes, 1449 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     53.7      0.46              0.06        12    0.038       0      0       0.0       0.0#012  L6      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    141.5    116.3      0.69              0.19        11    0.062     48K   5772       0.0       0.0#012 Sum      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     85.1     91.3      1.14              0.25        23    0.050     48K   5772       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     93.6     93.8      0.49              0.10        10    0.049     23K   2572       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    141.5    116.3      0.69              0.19        11    0.062     48K   5772       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.2      0.45              0.06        11    0.041       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.1 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 9.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000156 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(559,8.68 MB,2.85454%) FilterBlock(24,141.67 KB,0.0455103%) IndexBlock(24,269.58 KB,0.0865986%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 05:03:10 np0005634017 nova_compute[243452]: 2026-02-28 10:03:10.997 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Creating config drive at /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.002 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgnbrt6by execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.095 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.111 243456 DEBUG nova.network.neutron [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updated VIF entry in instance network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.112 243456 DEBUG nova.network.neutron [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.117 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.117 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.118 243456 DEBUG oslo_concurrency.lockutils [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.118 243456 DEBUG nova.network.neutron [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.120 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.121 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.121 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.121 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.122 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.127 243456 DEBUG oslo_concurrency.lockutils [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.129 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgnbrt6by" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.158 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.161 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.201 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.302 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.303 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deleting local config drive /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config because it was imported into RBD.#033[00m
Feb 28 05:03:11 np0005634017 kernel: tap86a5b3dc-16: entered promiscuous mode
Feb 28 05:03:11 np0005634017 NetworkManager[49805]: <info>  [1772272991.3464] manager: (tap86a5b3dc-16): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Feb 28 05:03:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:11Z|00120|binding|INFO|Claiming lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for this chassis.
Feb 28 05:03:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:11Z|00121|binding|INFO|86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0: Claiming fa:16:3e:e8:ef:ce 10.100.0.5
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.351 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.360 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:ef:ce 10.100.0.5'], port_security=['fa:16:3e:e8:ef:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff148d2a-2dba-45c2-b726-78423f3ccedc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c3895c-56bd-4273-9e68-83c2fcec4532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd177a65e6274abe9b6091a2f34c319c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '177d1676-9b22-4402-a0f3-24e0d0ec2ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab32c8ab-3085-42dc-845f-ce2535981325, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:03:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:11Z|00122|binding|INFO|Setting lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 ovn-installed in OVS
Feb 28 05:03:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:11Z|00123|binding|INFO|Setting lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 up in Southbound
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.364 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 in datapath 82c3895c-56bd-4273-9e68-83c2fcec4532 bound to our chassis#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.365 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82c3895c-56bd-4273-9e68-83c2fcec4532#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 systemd-udevd[264632]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:03:11 np0005634017 systemd-machined[209480]: New machine qemu-27-instance-00000017.
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.379 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e184d5-3711-4446-8231-a0b77d4c895e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.381 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82c3895c-51 in ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.383 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82c3895c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc76f9c-209a-41cc-acec-7eb7cbe0990c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.384 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c57e61b-f73f-4f7f-9245-bfe2fb10e390]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 NetworkManager[49805]: <info>  [1772272991.3859] device (tap86a5b3dc-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:03:11 np0005634017 NetworkManager[49805]: <info>  [1772272991.3864] device (tap86a5b3dc-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:03:11 np0005634017 systemd[1]: Started Virtual Machine qemu-27-instance-00000017.
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.399 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ff942f31-32cb-40fa-86b5-12fd398f01bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.421 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[269df904-949d-48c3-bc97-1d2b75a0babf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.463 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb2c2fa-e323-4185-ac5c-ebcfa0e15cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcd9611-a8a9-4cce-80cb-62a49a9d24f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 NetworkManager[49805]: <info>  [1772272991.4705] manager: (tap82c3895c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.500 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[18cd12d6-6622-47f0-abda-0402153debee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.504 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b3546fc7-4a87-4e35-bfe9-7d3b3afcae43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 NetworkManager[49805]: <info>  [1772272991.5214] device (tap82c3895c-50): carrier: link connected
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.526 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f916b6f5-2d58-44a6-9e4d-7b7f25c4d573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d69e38ba-9911-466a-84a0-b7c281060302]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c3895c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:5a:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449880, 'reachable_time': 26373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264666, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.556 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18aafe91-c619-4f96-9f63-07b1dee4f81c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:5a6d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449880, 'tstamp': 449880}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264667, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.575 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e34cdbc1-ce94-4c32-98c1-1dcb2073c2d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c3895c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:5a:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449880, 'reachable_time': 26373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264668, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.602 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa3c5a4-fc8f-4df6-a982-f55be1ea3fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.610 243456 DEBUG nova.compute.manager [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.610 243456 DEBUG oslo_concurrency.lockutils [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.611 243456 DEBUG oslo_concurrency.lockutils [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.611 243456 DEBUG oslo_concurrency.lockutils [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.612 243456 DEBUG nova.compute.manager [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Processing event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.658 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4b242d-30d8-49cb-aeae-d58773fb788c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.660 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c3895c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.660 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.660 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c3895c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 kernel: tap82c3895c-50: entered promiscuous mode
Feb 28 05:03:11 np0005634017 NetworkManager[49805]: <info>  [1772272991.6625] manager: (tap82c3895c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.665 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82c3895c-50, col_values=(('external_ids', {'iface-id': '7c02020d-625b-46f3-9b28-7bd463d7a7e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.667 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82c3895c-56bd-4273-9e68-83c2fcec4532.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82c3895c-56bd-4273-9e68-83c2fcec4532.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.668 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08e73de8-2d55-4ead-8c45-2d532299b38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.669 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-82c3895c-56bd-4273-9e68-83c2fcec4532
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/82c3895c-56bd-4273-9e68-83c2fcec4532.pid.haproxy
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 82c3895c-56bd-4273-9e68-83c2fcec4532
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:03:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.669 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'env', 'PROCESS_TAG=haproxy-82c3895c-56bd-4273-9e68-83c2fcec4532', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82c3895c-56bd-4273-9e68-83c2fcec4532.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:03:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:11Z|00124|binding|INFO|Releasing lport 7c02020d-625b-46f3-9b28-7bd463d7a7e3 from this chassis (sb_readonly=0)
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767398388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.766 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.847 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.848 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.855 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:03:11 np0005634017 nova_compute[243452]: 2026-02-28 10:03:11.855 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.017 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272992.016694, ff148d2a-2dba-45c2-b726-78423f3ccedc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.017 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Started (Lifecycle Event)#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.021 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.025 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.028 243456 INFO nova.virt.libvirt.driver [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance spawned successfully.#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.029 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.045 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:12 np0005634017 podman[264744]: 2026-02-28 10:03:12.049032185 +0000 UTC m=+0.050522860 container create 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.057 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.061 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.061 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.062 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.062 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.063 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.063 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:03:12 np0005634017 systemd[1]: Started libpod-conmon-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738.scope.
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.097 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.098 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272992.0210967, ff148d2a-2dba-45c2-b726-78423f3ccedc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:03:12 np0005634017 podman[264744]: 2026-02-28 10:03:12.020678029 +0000 UTC m=+0.022168734 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.121 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.123 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.124 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4279MB free_disk=59.946378882043064GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.127 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272992.0241928, ff148d2a-2dba-45c2-b726-78423f3ccedc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.127 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:03:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.132 243456 INFO nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 7.71 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.132 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78619c8bdd71b488143c8f78ad8f7f85fee1a03814cf56ea253497a71198123b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:12 np0005634017 podman[264744]: 2026-02-28 10:03:12.145201465 +0000 UTC m=+0.146692140 container init 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:03:12 np0005634017 podman[264744]: 2026-02-28 10:03:12.151898553 +0000 UTC m=+0.153389228 container start 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.173 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.179 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:03:12 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : New worker (264765) forked
Feb 28 05:03:12 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : Loading success.
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.213 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.227 243456 INFO nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 8.77 seconds to build instance.#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.243 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 540aa538-279f-4645-a7a1-03fa5c859440 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.243 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ff148d2a-2dba-45c2-b726-78423f3ccedc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.243 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.244 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.247 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.295 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:12 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 28 05:03:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351380988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.900 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.905 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.917 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.935 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:03:12 np0005634017 nova_compute[243452]: 2026-02-28 10:03:12.936 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.468 243456 DEBUG nova.network.neutron [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated VIF entry in instance network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.469 243456 DEBUG nova.network.neutron [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.488 243456 DEBUG oslo_concurrency.lockutils [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.705 243456 DEBUG nova.compute.manager [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.706 243456 DEBUG oslo_concurrency.lockutils [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.707 243456 DEBUG oslo_concurrency.lockutils [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.707 243456 DEBUG oslo_concurrency.lockutils [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.708 243456 DEBUG nova.compute.manager [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] No waiting events found dispatching network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.708 243456 WARNING nova.compute.manager [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received unexpected event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:03:13 np0005634017 nova_compute[243452]: 2026-02-28 10:03:13.789 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 255 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Feb 28 05:03:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:14Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:c8:f4 10.100.0.9
Feb 28 05:03:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:14Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:c8:f4 10.100.0.9
Feb 28 05:03:14 np0005634017 nova_compute[243452]: 2026-02-28 10:03:14.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 274 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Feb 28 05:03:15 np0005634017 nova_compute[243452]: 2026-02-28 10:03:15.897 243456 DEBUG nova.compute.manager [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:15 np0005634017 nova_compute[243452]: 2026-02-28 10:03:15.898 243456 DEBUG nova.compute.manager [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing instance network info cache due to event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:03:15 np0005634017 nova_compute[243452]: 2026-02-28 10:03:15.899 243456 DEBUG oslo_concurrency.lockutils [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:15 np0005634017 nova_compute[243452]: 2026-02-28 10:03:15.899 243456 DEBUG oslo_concurrency.lockutils [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:15 np0005634017 nova_compute[243452]: 2026-02-28 10:03:15.900 243456 DEBUG nova.network.neutron [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:03:15 np0005634017 nova_compute[243452]: 2026-02-28 10:03:15.932 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:03:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 184 op/s
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.518 243456 DEBUG nova.network.neutron [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updated VIF entry in instance network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.519 243456 DEBUG nova.network.neutron [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.543 243456 DEBUG nova.compute.manager [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.543 243456 DEBUG nova.compute.manager [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing instance network info cache due to event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.544 243456 DEBUG oslo_concurrency.lockutils [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.544 243456 DEBUG oslo_concurrency.lockutils [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.545 243456 DEBUG nova.network.neutron [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.549 243456 DEBUG oslo_concurrency.lockutils [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:18 np0005634017 nova_compute[243452]: 2026-02-28 10:03:18.648 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Feb 28 05:03:19 np0005634017 nova_compute[243452]: 2026-02-28 10:03:19.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:19 np0005634017 nova_compute[243452]: 2026-02-28 10:03:19.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:20 np0005634017 nova_compute[243452]: 2026-02-28 10:03:20.777 243456 DEBUG nova.network.neutron [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated VIF entry in instance network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:03:20 np0005634017 nova_compute[243452]: 2026-02-28 10:03:20.778 243456 DEBUG nova.network.neutron [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:20 np0005634017 nova_compute[243452]: 2026-02-28 10:03:20.795 243456 DEBUG oslo_concurrency.lockutils [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:03:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Feb 28 05:03:22 np0005634017 podman[264798]: 2026-02-28 10:03:22.12386214 +0000 UTC m=+0.058011840 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 05:03:22 np0005634017 podman[264797]: 2026-02-28 10:03:22.156415434 +0000 UTC m=+0.101652175 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:03:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:23Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:ef:ce 10.100.0.5
Feb 28 05:03:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:23Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:ef:ce 10.100.0.5
Feb 28 05:03:23 np0005634017 nova_compute[243452]: 2026-02-28 10:03:23.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 289 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 157 op/s
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.970 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.971 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.972 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.972 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.973 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.974 243456 INFO nova.compute.manager [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Terminating instance#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.977 243456 DEBUG nova.compute.manager [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:03:24 np0005634017 nova_compute[243452]: 2026-02-28 10:03:24.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 kernel: tapf4b2f0ef-fe (unregistering): left promiscuous mode
Feb 28 05:03:25 np0005634017 NetworkManager[49805]: <info>  [1772273005.0249] device (tapf4b2f0ef-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:03:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:25Z|00125|binding|INFO|Releasing lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 from this chassis (sb_readonly=0)
Feb 28 05:03:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:25Z|00126|binding|INFO|Setting lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 down in Southbound
Feb 28 05:03:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:25Z|00127|binding|INFO|Removing iface tapf4b2f0ef-fe ovn-installed in OVS
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.042 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c8:f4 10.100.0.9'], port_security=['fa:16:3e:bd:c8:f4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '540aa538-279f-4645-a7a1-03fa5c859440', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ee8273ac001494c973c44a7dd357180', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26c27b27-9608-4c57-8427-9c45b7a72eae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c643f549-0af1-46f6-9870-09f9117f13fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f4b2f0ef-feda-437c-96c7-92a0645bceb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.045 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f4b2f0ef-feda-437c-96c7-92a0645bceb9 in datapath a11c3342-4f74-40c1-a9f3-ae18f9be9d19 unbound from our chassis#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.047 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a11c3342-4f74-40c1-a9f3-ae18f9be9d19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f739466-3199-4aec-9bc4-a802a1e070c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 namespace which is not needed anymore#033[00m
Feb 28 05:03:25 np0005634017 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb 28 05:03:25 np0005634017 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Consumed 12.900s CPU time.
Feb 28 05:03:25 np0005634017 systemd-machined[209480]: Machine qemu-26-instance-00000016 terminated.
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : haproxy version is 2.8.14-c23fe91
Feb 28 05:03:25 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : path to executable is /usr/sbin/haproxy
Feb 28 05:03:25 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [WARNING]  (264235) : Exiting Master process...
Feb 28 05:03:25 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [ALERT]    (264235) : Current worker (264237) exited with code 143 (Terminated)
Feb 28 05:03:25 np0005634017 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [WARNING]  (264235) : All workers exited. Exiting... (0)
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 systemd[1]: libpod-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7.scope: Deactivated successfully.
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.218 243456 INFO nova.virt.libvirt.driver [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance destroyed successfully.#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.219 243456 DEBUG nova.objects.instance [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lazy-loading 'resources' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:25 np0005634017 podman[264863]: 2026-02-28 10:03:25.21852309 +0000 UTC m=+0.064184174 container died 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.235 243456 DEBUG nova.virt.libvirt.vif [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:02:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-165218390',id=22,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:03:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ee8273ac001494c973c44a7dd357180',ramdisk_id='',reservation_id='r-4t9kpqze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video
_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:03:02Z,user_data=None,user_id='678df28a33b147768bd6e1e5d3b17ccf',uuid=540aa538-279f-4645-a7a1-03fa5c859440,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.236 243456 DEBUG nova.network.os_vif_util [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converting VIF {"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.237 243456 DEBUG nova.network.os_vif_util [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.238 243456 DEBUG os_vif [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.240 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4b2f0ef-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.246 243456 INFO os_vif [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe')#033[00m
Feb 28 05:03:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7-userdata-shm.mount: Deactivated successfully.
Feb 28 05:03:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3537f5f8d55cebfe7fda6f58cc62d454caeb57ea64eac96f07c6f8abf46f3522-merged.mount: Deactivated successfully.
Feb 28 05:03:25 np0005634017 podman[264863]: 2026-02-28 10:03:25.282580938 +0000 UTC m=+0.128242002 container cleanup 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:03:25 np0005634017 systemd[1]: libpod-conmon-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7.scope: Deactivated successfully.
Feb 28 05:03:25 np0005634017 podman[264920]: 2026-02-28 10:03:25.351845413 +0000 UTC m=+0.051100066 container remove 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.355 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5747315-5dcd-47e4-b95d-6a9de3d89318]: (4, ('Sat Feb 28 10:03:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 (15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7)\n15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7\nSat Feb 28 10:03:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 (15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7)\n15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.357 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eec7ce6f-5b92-49b8-9ad6-dd001d479f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.357 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa11c3342-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 kernel: tapa11c3342-40: left promiscuous mode
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a26e793-7cd8-466c-83c2-7147d7b5776a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 nova_compute[243452]: 2026-02-28 10:03:25.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.378 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f570462a-0bc9-473f-99ad-d36906f78085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.379 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[779f8302-008d-4d9a-a7b7-879d2d36a74b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.391 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f893f6d-6904-4287-8380-bba15863381d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448854, 'reachable_time': 23949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264936, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.393 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:03:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.393 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4893fe-fbf1-42e7-8507-e550304b36ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:25 np0005634017 systemd[1]: run-netns-ovnmeta\x2da11c3342\x2d4f74\x2d40c1\x2da9f3\x2dae18f9be9d19.mount: Deactivated successfully.
Feb 28 05:03:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 294 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 191 op/s
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.194 243456 DEBUG nova.compute.manager [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-unplugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.194 243456 DEBUG oslo_concurrency.lockutils [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.195 243456 DEBUG oslo_concurrency.lockutils [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.195 243456 DEBUG oslo_concurrency.lockutils [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.196 243456 DEBUG nova.compute.manager [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] No waiting events found dispatching network-vif-unplugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.196 243456 DEBUG nova.compute.manager [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-unplugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.206 243456 INFO nova.virt.libvirt.driver [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deleting instance files /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440_del#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.207 243456 INFO nova.virt.libvirt.driver [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deletion of /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440_del complete#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.272 243456 INFO nova.compute.manager [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 1.29 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.273 243456 DEBUG oslo.service.loopingcall [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.273 243456 DEBUG nova.compute.manager [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.273 243456 DEBUG nova.network.neutron [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:26 np0005634017 nova_compute[243452]: 2026-02-28 10:03:26.995 243456 DEBUG nova.network.neutron [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.019 243456 INFO nova.compute.manager [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 0.75 seconds to deallocate network for instance.#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.065 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.066 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.094 243456 DEBUG nova.compute.manager [req-4fc6dfbb-91b3-47f8-b1b4-000cc56359c7 req-c489a0e9-7262-4d24-965e-60895216523d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-deleted-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.129 243456 DEBUG oslo_concurrency.processutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082020615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.693 243456 DEBUG oslo_concurrency.processutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.699 243456 DEBUG nova.compute.provider_tree [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.722 243456 DEBUG nova.scheduler.client.report [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.745 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.778 243456 INFO nova.scheduler.client.report [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Deleted allocations for instance 540aa538-279f-4645-a7a1-03fa5c859440#033[00m
Feb 28 05:03:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 255 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb 28 05:03:27 np0005634017 nova_compute[243452]: 2026-02-28 10:03:27.857 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.481 243456 DEBUG nova.compute.manager [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.481 243456 DEBUG oslo_concurrency.lockutils [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.482 243456 DEBUG oslo_concurrency.lockutils [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.482 243456 DEBUG oslo_concurrency.lockutils [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.482 243456 DEBUG nova.compute.manager [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] No waiting events found dispatching network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:03:28 np0005634017 nova_compute[243452]: 2026-02-28 10:03:28.483 243456 WARNING nova.compute.manager [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received unexpected event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:03:29
Feb 28 05:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Feb 28 05:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:03:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 255 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Feb 28 05:03:30 np0005634017 nova_compute[243452]: 2026-02-28 10:03:30.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:03:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:03:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:31Z|00128|binding|INFO|Releasing lport 7c02020d-625b-46f3-9b28-7bd463d7a7e3 from this chassis (sb_readonly=0)
Feb 28 05:03:31 np0005634017 nova_compute[243452]: 2026-02-28 10:03:31.794 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 233 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 05:03:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.133 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.133 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.134 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.134 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.135 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.137 243456 INFO nova.compute.manager [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Terminating instance#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.139 243456 DEBUG nova.compute.manager [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:03:33 np0005634017 kernel: tap86a5b3dc-16 (unregistering): left promiscuous mode
Feb 28 05:03:33 np0005634017 NetworkManager[49805]: <info>  [1772273013.1872] device (tap86a5b3dc-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:03:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:33Z|00129|binding|INFO|Releasing lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 from this chassis (sb_readonly=0)
Feb 28 05:03:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:33Z|00130|binding|INFO|Setting lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 down in Southbound
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:03:33Z|00131|binding|INFO|Removing iface tap86a5b3dc-16 ovn-installed in OVS
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.205 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:ef:ce 10.100.0.5'], port_security=['fa:16:3e:e8:ef:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff148d2a-2dba-45c2-b726-78423f3ccedc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c3895c-56bd-4273-9e68-83c2fcec4532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd177a65e6274abe9b6091a2f34c319c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '177d1676-9b22-4402-a0f3-24e0d0ec2ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab32c8ab-3085-42dc-845f-ce2535981325, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.209 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 in datapath 82c3895c-56bd-4273-9e68-83c2fcec4532 unbound from our chassis#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.212 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c3895c-56bd-4273-9e68-83c2fcec4532, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.213 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c7e9fa-e8d5-4452-b9aa-1f31abfd1f5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.214 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 namespace which is not needed anymore#033[00m
Feb 28 05:03:33 np0005634017 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb 28 05:03:33 np0005634017 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Consumed 11.799s CPU time.
Feb 28 05:03:33 np0005634017 systemd-machined[209480]: Machine qemu-27-instance-00000017 terminated.
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.355 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : haproxy version is 2.8.14-c23fe91
Feb 28 05:03:33 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : path to executable is /usr/sbin/haproxy
Feb 28 05:03:33 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [WARNING]  (264763) : Exiting Master process...
Feb 28 05:03:33 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [WARNING]  (264763) : Exiting Master process...
Feb 28 05:03:33 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [ALERT]    (264763) : Current worker (264765) exited with code 143 (Terminated)
Feb 28 05:03:33 np0005634017 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [WARNING]  (264763) : All workers exited. Exiting... (0)
Feb 28 05:03:33 np0005634017 systemd[1]: libpod-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738.scope: Deactivated successfully.
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.377 243456 INFO nova.virt.libvirt.driver [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance destroyed successfully.#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.377 243456 DEBUG nova.objects.instance [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lazy-loading 'resources' on Instance uuid ff148d2a-2dba-45c2-b726-78423f3ccedc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:33 np0005634017 podman[264985]: 2026-02-28 10:03:33.382264622 +0000 UTC m=+0.063655038 container died 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.392 243456 DEBUG nova.virt.libvirt.vif [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1118743636',display_name='tempest-ServersTestJSON-server-1118743636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1118743636',id=23,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJCbaARgz2MiKHmjHHxTu8VMbZv/za78QmoiFrm1b2dtjLbi0sfz0k8qm0DAA2vOdrkhv2c4afOR1apvGQL9ZXBiG6t7d6JyxwbG3gSOY/aDpdVVM9RXPjLMeE9AYf3zg==',key_name='tempest-keypair-178399307',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:03:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd177a65e6274abe9b6091a2f34c319c',ramdisk_id='',reservation_id='r-1mssv15c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-595608049',owner_user_name='tempest-ServersTestJSON-595608049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:03:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2973b4a88c3d4417a732901954bb6c6a',uuid=ff148d2a-2dba-45c2-b726-78423f3ccedc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.393 243456 DEBUG nova.network.os_vif_util [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converting VIF {"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.394 243456 DEBUG nova.network.os_vif_util [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.394 243456 DEBUG os_vif [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.396 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86a5b3dc-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.403 243456 INFO os_vif [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16')#033[00m
Feb 28 05:03:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738-userdata-shm.mount: Deactivated successfully.
Feb 28 05:03:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-78619c8bdd71b488143c8f78ad8f7f85fee1a03814cf56ea253497a71198123b-merged.mount: Deactivated successfully.
Feb 28 05:03:33 np0005634017 podman[264985]: 2026-02-28 10:03:33.418194771 +0000 UTC m=+0.099585147 container cleanup 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:03:33 np0005634017 systemd[1]: libpod-conmon-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738.scope: Deactivated successfully.
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 podman[265039]: 2026-02-28 10:03:33.4751242 +0000 UTC m=+0.038792011 container remove 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.478 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fab63a-a275-4e28-b713-ab42163377aa]: (4, ('Sat Feb 28 10:03:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 (177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738)\n177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738\nSat Feb 28 10:03:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 (177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738)\n177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edc292f6-0a2f-4240-8529-77923eeca17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c3895c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 kernel: tap82c3895c-50: left promiscuous mode
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.485 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a276b11-9391-483a-835b-e6865d28fc6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.502 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b249c29-b774-448d-8f12-1fb6b8f7061d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.503 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0352f30f-e1ac-4118-a313-e19758236b71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.518 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1731248-bd35-419a-be04-84a83f51de3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449874, 'reachable_time': 16932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265057, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 systemd[1]: run-netns-ovnmeta\x2d82c3895c\x2d56bd\x2d4273\x2d9e68\x2d83c2fcec4532.mount: Deactivated successfully.
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.521 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:03:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.522 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b93491-b07a-40a7-9f65-67c1d0f49f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.638 243456 INFO nova.virt.libvirt.driver [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deleting instance files /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc_del#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.639 243456 INFO nova.virt.libvirt.driver [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deletion of /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc_del complete#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.649 243456 DEBUG nova.compute.manager [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-unplugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.650 243456 DEBUG oslo_concurrency.lockutils [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.650 243456 DEBUG oslo_concurrency.lockutils [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.650 243456 DEBUG oslo_concurrency.lockutils [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.651 243456 DEBUG nova.compute.manager [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] No waiting events found dispatching network-vif-unplugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.651 243456 DEBUG nova.compute.manager [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-unplugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.702 243456 INFO nova.compute.manager [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.703 243456 DEBUG oslo.service.loopingcall [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.704 243456 DEBUG nova.compute.manager [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.704 243456 DEBUG nova.network.neutron [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 nova_compute[243452]: 2026-02-28 10:03:33.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 233 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.590 243456 DEBUG nova.network.neutron [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.612 243456 INFO nova.compute.manager [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 1.91 seconds to deallocate network for instance.#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.666 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.667 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.725 243456 DEBUG oslo_concurrency.processutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 192 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 86 op/s
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.847 243456 DEBUG nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.848 243456 DEBUG oslo_concurrency.lockutils [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.848 243456 DEBUG oslo_concurrency.lockutils [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.849 243456 DEBUG oslo_concurrency.lockutils [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.849 243456 DEBUG nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] No waiting events found dispatching network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.850 243456 WARNING nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received unexpected event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:03:35 np0005634017 nova_compute[243452]: 2026-02-28 10:03:35.850 243456 DEBUG nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-deleted-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727393592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:36 np0005634017 nova_compute[243452]: 2026-02-28 10:03:36.239 243456 DEBUG oslo_concurrency.processutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:36 np0005634017 nova_compute[243452]: 2026-02-28 10:03:36.246 243456 DEBUG nova.compute.provider_tree [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:36 np0005634017 nova_compute[243452]: 2026-02-28 10:03:36.267 243456 DEBUG nova.scheduler.client.report [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:36 np0005634017 nova_compute[243452]: 2026-02-28 10:03:36.286 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:36 np0005634017 nova_compute[243452]: 2026-02-28 10:03:36.313 243456 INFO nova.scheduler.client.report [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Deleted allocations for instance ff148d2a-2dba-45c2-b726-78423f3ccedc#033[00m
Feb 28 05:03:36 np0005634017 nova_compute[243452]: 2026-02-28 10:03:36.372 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 153 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 65 KiB/s wr, 48 op/s
Feb 28 05:03:38 np0005634017 nova_compute[243452]: 2026-02-28 10:03:38.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:38 np0005634017 nova_compute[243452]: 2026-02-28 10:03:38.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 28 05:03:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 28 05:03:38 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 28 05:03:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 153 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 46 op/s
Feb 28 05:03:40 np0005634017 nova_compute[243452]: 2026-02-28 10:03:40.217 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273005.2148383, 540aa538-279f-4645-a7a1-03fa5c859440 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:40 np0005634017 nova_compute[243452]: 2026-02-28 10:03:40.217 243456 INFO nova.compute.manager [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:03:40 np0005634017 nova_compute[243452]: 2026-02-28 10:03:40.235 243456 DEBUG nova.compute.manager [None req-a4bea8de-c5ca-4cfa-a4e4-4c1351cc6e6b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.2897490401857024e-06 of space, bias 1.0, pg target 0.0012869247120557107 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00249102496461197 of space, bias 1.0, pg target 0.7473074893835909 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.63424063601391e-07 of space, bias 4.0, pg target 0.0011561088763216692 quantized to 16 (current 16)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:03:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:03:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 17 KiB/s wr, 48 op/s
Feb 28 05:03:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:43 np0005634017 nova_compute[243452]: 2026-02-28 10:03:43.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:43 np0005634017 nova_compute[243452]: 2026-02-28 10:03:43.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:03:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.0 KiB/s wr, 48 op/s
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:03:43 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.050549443 +0000 UTC m=+0.025958420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.165588173 +0000 UTC m=+0.140997090 container create f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:03:44 np0005634017 systemd[1]: Started libpod-conmon-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope.
Feb 28 05:03:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.433127676 +0000 UTC m=+0.408536663 container init f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.443060745 +0000 UTC m=+0.418469672 container start f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:03:44 np0005634017 systemd[1]: libpod-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope: Deactivated successfully.
Feb 28 05:03:44 np0005634017 magical_hamilton[265239]: 167 167
Feb 28 05:03:44 np0005634017 conmon[265239]: conmon f57f7e15a370c063a0fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope/container/memory.events
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.561682206 +0000 UTC m=+0.537091103 container attach f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.562739715 +0000 UTC m=+0.538148602 container died f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:03:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2b3bd29e9b417b4efa20e3dfed07f3096db3f68c2cf0f7c7452627c969d875c7-merged.mount: Deactivated successfully.
Feb 28 05:03:44 np0005634017 podman[265223]: 2026-02-28 10:03:44.863998355 +0000 UTC m=+0.839407282 container remove f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:03:44 np0005634017 systemd[1]: libpod-conmon-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope: Deactivated successfully.
Feb 28 05:03:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:44.954 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:03:44 np0005634017 nova_compute[243452]: 2026-02-28 10:03:44.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:44.957 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.054609287 +0000 UTC m=+0.047945397 container create 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:03:45 np0005634017 systemd[1]: Started libpod-conmon-06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5.scope.
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.029276346 +0000 UTC m=+0.022612486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:03:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.166215261 +0000 UTC m=+0.159551331 container init 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.173926488 +0000 UTC m=+0.167262558 container start 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.177040115 +0000 UTC m=+0.170376185 container attach 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:03:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:03:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2495899353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:03:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:03:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2495899353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:03:45 np0005634017 bold_cannon[265281]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:03:45 np0005634017 bold_cannon[265281]: --> All data devices are unavailable
Feb 28 05:03:45 np0005634017 systemd[1]: libpod-06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5.scope: Deactivated successfully.
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.701736548 +0000 UTC m=+0.695072618 container died 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:03:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67-merged.mount: Deactivated successfully.
Feb 28 05:03:45 np0005634017 podman[265264]: 2026-02-28 10:03:45.757199246 +0000 UTC m=+0.750535326 container remove 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:03:45 np0005634017 systemd[1]: libpod-conmon-06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5.scope: Deactivated successfully.
Feb 28 05:03:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.255787286 +0000 UTC m=+0.052212459 container create 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:03:46 np0005634017 systemd[1]: Started libpod-conmon-19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50.scope.
Feb 28 05:03:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.237728539 +0000 UTC m=+0.034153712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.345581881 +0000 UTC m=+0.142007104 container init 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.355744656 +0000 UTC m=+0.152169829 container start 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.35977264 +0000 UTC m=+0.156197813 container attach 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:03:46 np0005634017 nifty_fermi[265394]: 167 167
Feb 28 05:03:46 np0005634017 systemd[1]: libpod-19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50.scope: Deactivated successfully.
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.362556638 +0000 UTC m=+0.158981821 container died 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:03:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4ced52aa93bf15c864890e2121acc50bec37f38c11cf08c64666a310e61d09dd-merged.mount: Deactivated successfully.
Feb 28 05:03:46 np0005634017 podman[265378]: 2026-02-28 10:03:46.412032129 +0000 UTC m=+0.208457292 container remove 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:03:46 np0005634017 systemd[1]: libpod-conmon-19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50.scope: Deactivated successfully.
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.575778892 +0000 UTC m=+0.051203460 container create c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:03:46 np0005634017 systemd[1]: Started libpod-conmon-c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682.scope.
Feb 28 05:03:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.551236932 +0000 UTC m=+0.026661560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.64682626 +0000 UTC m=+0.122250828 container init c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.651510921 +0000 UTC m=+0.126935479 container start c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.65573282 +0000 UTC m=+0.131157368 container attach c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]: {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:    "0": [
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:        {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "devices": [
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "/dev/loop3"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            ],
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_name": "ceph_lv0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_size": "21470642176",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "name": "ceph_lv0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "tags": {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cluster_name": "ceph",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.crush_device_class": "",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.encrypted": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.objectstore": "bluestore",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osd_id": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.type": "block",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.vdo": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.with_tpm": "0"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            },
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "type": "block",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "vg_name": "ceph_vg0"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:        }
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:    ],
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:    "1": [
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:        {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "devices": [
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "/dev/loop4"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            ],
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_name": "ceph_lv1",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_size": "21470642176",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "name": "ceph_lv1",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "tags": {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cluster_name": "ceph",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.crush_device_class": "",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.encrypted": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.objectstore": "bluestore",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osd_id": "1",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.type": "block",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.vdo": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.with_tpm": "0"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            },
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "type": "block",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "vg_name": "ceph_vg1"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:        }
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:    ],
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:    "2": [
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:        {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "devices": [
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "/dev/loop5"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            ],
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_name": "ceph_lv2",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_size": "21470642176",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "name": "ceph_lv2",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "tags": {
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.cluster_name": "ceph",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.crush_device_class": "",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.encrypted": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.objectstore": "bluestore",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osd_id": "2",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.type": "block",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.vdo": "0",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:                "ceph.with_tpm": "0"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            },
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "type": "block",
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:            "vg_name": "ceph_vg2"
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:        }
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]:    ]
Feb 28 05:03:46 np0005634017 exciting_cannon[265435]: }
Feb 28 05:03:46 np0005634017 systemd[1]: libpod-c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682.scope: Deactivated successfully.
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.906699706 +0000 UTC m=+0.382124264 container died c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:03:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4-merged.mount: Deactivated successfully.
Feb 28 05:03:46 np0005634017 podman[265419]: 2026-02-28 10:03:46.945457605 +0000 UTC m=+0.420882163 container remove c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:03:46 np0005634017 systemd[1]: libpod-conmon-c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682.scope: Deactivated successfully.
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.328180005 +0000 UTC m=+0.048491664 container create 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:03:47 np0005634017 systemd[1]: Started libpod-conmon-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope.
Feb 28 05:03:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.303753458 +0000 UTC m=+0.024065207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.40627733 +0000 UTC m=+0.126589019 container init 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.411798446 +0000 UTC m=+0.132110105 container start 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.415659454 +0000 UTC m=+0.135971113 container attach 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:03:47 np0005634017 systemd[1]: libpod-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope: Deactivated successfully.
Feb 28 05:03:47 np0005634017 adoring_jepsen[265534]: 167 167
Feb 28 05:03:47 np0005634017 conmon[265534]: conmon 9410ab851d8db2ead2bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope/container/memory.events
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.4176367 +0000 UTC m=+0.137948359 container died 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:03:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1787387d80658f186c5d8bee6d96b39d8742bd28a8f190abd8149e2a57d78d79-merged.mount: Deactivated successfully.
Feb 28 05:03:47 np0005634017 podman[265518]: 2026-02-28 10:03:47.462639405 +0000 UTC m=+0.182951094 container remove 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:03:47 np0005634017 systemd[1]: libpod-conmon-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope: Deactivated successfully.
Feb 28 05:03:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:47 np0005634017 podman[265560]: 2026-02-28 10:03:47.637089829 +0000 UTC m=+0.036665492 container create b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:03:47 np0005634017 systemd[1]: Started libpod-conmon-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope.
Feb 28 05:03:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:03:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:03:47 np0005634017 podman[265560]: 2026-02-28 10:03:47.622290183 +0000 UTC m=+0.021865886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:03:47 np0005634017 podman[265560]: 2026-02-28 10:03:47.735697751 +0000 UTC m=+0.135273504 container init b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:03:47 np0005634017 podman[265560]: 2026-02-28 10:03:47.747226836 +0000 UTC m=+0.146802539 container start b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 28 05:03:47 np0005634017 podman[265560]: 2026-02-28 10:03:47.751227978 +0000 UTC m=+0.150803671 container attach b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 28 05:03:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Feb 28 05:03:48 np0005634017 lvm[265653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:03:48 np0005634017 lvm[265653]: VG ceph_vg0 finished
Feb 28 05:03:48 np0005634017 lvm[265656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:03:48 np0005634017 lvm[265656]: VG ceph_vg1 finished
Feb 28 05:03:48 np0005634017 lvm[265658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:03:48 np0005634017 lvm[265658]: VG ceph_vg2 finished
Feb 28 05:03:48 np0005634017 nova_compute[243452]: 2026-02-28 10:03:48.372 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273013.371014, ff148d2a-2dba-45c2-b726-78423f3ccedc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:03:48 np0005634017 nova_compute[243452]: 2026-02-28 10:03:48.373 243456 INFO nova.compute.manager [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:03:48 np0005634017 nova_compute[243452]: 2026-02-28 10:03:48.391 243456 DEBUG nova.compute.manager [None req-d1f4f9de-3538-4026-91f7-72b1be633110 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:03:48 np0005634017 nova_compute[243452]: 2026-02-28 10:03:48.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:48 np0005634017 exciting_bassi[265576]: {}
Feb 28 05:03:48 np0005634017 nova_compute[243452]: 2026-02-28 10:03:48.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:48 np0005634017 systemd[1]: libpod-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope: Deactivated successfully.
Feb 28 05:03:48 np0005634017 systemd[1]: libpod-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope: Consumed 1.089s CPU time.
Feb 28 05:03:48 np0005634017 podman[265560]: 2026-02-28 10:03:48.501770648 +0000 UTC m=+0.901346331 container died b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:03:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613-merged.mount: Deactivated successfully.
Feb 28 05:03:48 np0005634017 podman[265560]: 2026-02-28 10:03:48.543995375 +0000 UTC m=+0.943571048 container remove b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:03:48 np0005634017 systemd[1]: libpod-conmon-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope: Deactivated successfully.
Feb 28 05:03:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:03:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:03:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:03:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:03:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:03:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:03:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Feb 28 05:03:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Feb 28 05:03:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:52.959 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:03:53 np0005634017 podman[265698]: 2026-02-28 10:03:53.166954942 +0000 UTC m=+0.101686260 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:03:53 np0005634017 podman[265697]: 2026-02-28 10:03:53.175756199 +0000 UTC m=+0.110385724 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:03:53 np0005634017 nova_compute[243452]: 2026-02-28 10:03:53.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:53 np0005634017 nova_compute[243452]: 2026-02-28 10:03:53.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.427 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.428 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.428 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.429 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.429 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.430 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.461 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.465 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.467 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.580 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.581 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.583 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.586 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.591 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.591 243456 INFO nova.compute.claims [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:03:55 np0005634017 nova_compute[243452]: 2026-02-28 10:03:55.705 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail
Feb 28 05:03:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668771355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.337 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.345 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.362 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.383 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.384 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.387 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.393 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.394 243456 INFO nova.compute.claims [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:03:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 28 05:03:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 28 05:03:56 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.463 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.463 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.480 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.490 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.501 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.511 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.512 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.564 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.609 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.611 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.612 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Creating image(s)#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.643 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.678 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.710 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.715 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.735 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.788 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.789 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.790 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.791 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.822 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.827 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.888 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:56 np0005634017 nova_compute[243452]: 2026-02-28 10:03:56.976 243456 DEBUG nova.policy [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a44c3f76b144221b23743a554a4a839', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a4f47b449434c708378388c3e76610e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.066 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.139 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] resizing rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.230 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'migration_context' on Instance uuid c38e4584-8a86-41a3-bc10-2a35205cf7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.249 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.250 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Ensure instance console log exists: /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.250 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.251 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.251 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2013276614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.466 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.472 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.487 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.525 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.526 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.531 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.540 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.540 243456 INFO nova.compute.claims [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.591 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.592 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.618 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:03:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.645 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.660 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Successfully created port: b9c5316d-0f6b-4a56-8273-692ee1492259 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.704 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.740 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.742 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.743 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Creating image(s)#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.772 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.802 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.835 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 170 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.839 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:03:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.855 243456 DEBUG nova.policy [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a44c3f76b144221b23743a554a4a839', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a4f47b449434c708378388c3e76610e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.886 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.886 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.887 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.887 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.908 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:57 np0005634017 nova_compute[243452]: 2026-02-28 10:03:57.911 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 95805a4e-8bc0-47e3-981a-dfe27127a270_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.194 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 95805a4e-8bc0-47e3-981a-dfe27127a270_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.246 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] resizing rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:03:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:03:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115759404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.279 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.283 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.315 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.323 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'migration_context' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.336 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.337 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Ensure instance console log exists: /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.337 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.338 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.338 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.339 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.340 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.391 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Successfully created port: dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.393 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.394 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.416 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.437 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.478 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.533 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.534 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.535 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Creating image(s)#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.556 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.581 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.607 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.612 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.642 243456 DEBUG nova.policy [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a44c3f76b144221b23743a554a4a839', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a4f47b449434c708378388c3e76610e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.683 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.684 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.684 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.685 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.708 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.713 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.782 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Successfully updated port: b9c5316d-0f6b-4a56-8273-692ee1492259 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.806 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.807 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquired lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.807 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.950 243456 DEBUG nova.compute.manager [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-changed-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.950 243456 DEBUG nova.compute.manager [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Refreshing instance network info cache due to event network-changed-b9c5316d-0f6b-4a56-8273-692ee1492259. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.951 243456 DEBUG oslo_concurrency.lockutils [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:58 np0005634017 nova_compute[243452]: 2026-02-28 10:03:58.994 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.057 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] resizing rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.167 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.175 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'migration_context' on Instance uuid 400869d5-7369-466b-970e-ac7e3f4e2e4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.195 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.195 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Ensure instance console log exists: /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.196 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.196 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.196 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.388 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Successfully created port: 5051af6b-3383-4342-be85-7f3b44b527a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.494 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Successfully updated port: dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.510 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.511 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquired lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.511 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.648 243456 DEBUG nova.compute.manager [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-changed-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.649 243456 DEBUG nova.compute.manager [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Refreshing instance network info cache due to event network-changed-dca369b1-9f27-4c57-9e18-4a62a8bd95e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.649 243456 DEBUG oslo_concurrency.lockutils [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:03:59 np0005634017 nova_compute[243452]: 2026-02-28 10:03:59.731 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:03:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 170 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Feb 28 05:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.489 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updating instance_info_cache with network_info: [{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.529 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Releasing lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.529 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance network_info: |[{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.531 243456 DEBUG oslo_concurrency.lockutils [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.531 243456 DEBUG nova.network.neutron [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Refreshing network info cache for port b9c5316d-0f6b-4a56-8273-692ee1492259 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.536 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start _get_guest_xml network_info=[{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.542 243456 WARNING nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.550 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.551 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.559 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.560 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.560 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.561 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.562 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.563 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.563 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.563 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.564 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.564 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.565 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.565 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.565 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.566 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.570 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.592 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Successfully updated port: 5051af6b-3383-4342-be85-7f3b44b527a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.619 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.619 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquired lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.619 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.755 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.782 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Releasing lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.783 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance network_info: |[{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.784 243456 DEBUG oslo_concurrency.lockutils [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.785 243456 DEBUG nova.network.neutron [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Refreshing network info cache for port dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.793 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start _get_guest_xml network_info=[{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.801 243456 WARNING nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.806 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.807 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.815 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.824 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.824 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.825 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.825 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.826 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.827 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.827 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.827 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.828 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.828 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.829 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.829 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.829 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.830 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:00 np0005634017 nova_compute[243452]: 2026-02-28 10:04:00.835 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167639053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.115 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.140 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.145 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287769195' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.409 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.450 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.456 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196627638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.696 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.699 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-3',id=26,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='
tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:56Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=c38e4584-8a86-41a3-bc10-2a35205cf7c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.699 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.701 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.703 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'pci_devices' on Instance uuid c38e4584-8a86-41a3-bc10-2a35205cf7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.724 243456 DEBUG nova.compute.manager [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-changed-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.725 243456 DEBUG nova.compute.manager [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Refreshing instance network info cache due to event network-changed-5051af6b-3383-4342-be85-7f3b44b527a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.725 243456 DEBUG oslo_concurrency.lockutils [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.729 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <uuid>c38e4584-8a86-41a3-bc10-2a35205cf7c7</uuid>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <name>instance-0000001a</name>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1851116039-3</nova:name>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:00</nova:creationTime>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:user uuid="7a44c3f76b144221b23743a554a4a839">tempest-ListServersNegativeTestJSON-784056711-project-member</nova:user>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:project uuid="1a4f47b449434c708378388c3e76610e">tempest-ListServersNegativeTestJSON-784056711</nova:project>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <nova:port uuid="b9c5316d-0f6b-4a56-8273-692ee1492259">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <entry name="serial">c38e4584-8a86-41a3-bc10-2a35205cf7c7</entry>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <entry name="uuid">c38e4584-8a86-41a3-bc10-2a35205cf7c7</entry>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b1:5c:cb"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <target dev="tapb9c5316d-0f"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/console.log" append="off"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:01 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:01 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:01 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:01 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.731 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Preparing to wait for external event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.731 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.732 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.732 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.733 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-3',id=26,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_u
ser_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:56Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=c38e4584-8a86-41a3-bc10-2a35205cf7c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.734 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.734 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.735 243456 DEBUG os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.737 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.737 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.747 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9c5316d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.748 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9c5316d-0f, col_values=(('external_ids', {'iface-id': 'b9c5316d-0f6b-4a56-8273-692ee1492259', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:5c:cb', 'vm-uuid': 'c38e4584-8a86-41a3-bc10-2a35205cf7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:01 np0005634017 NetworkManager[49805]: <info>  [1772273041.7517] manager: (tapb9c5316d-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.750 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.759 243456 INFO os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f')#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.821 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.822 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.822 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No VIF found with MAC fa:16:3e:b1:5c:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.822 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Using config drive#033[00m
Feb 28 05:04:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 259 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.4 MiB/s wr, 90 op/s
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.846 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:01 np0005634017 nova_compute[243452]: 2026-02-28 10:04:01.987 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updating instance_info_cache with network_info: [{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.012 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Releasing lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.012 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance network_info: |[{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.013 243456 DEBUG oslo_concurrency.lockutils [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.013 243456 DEBUG nova.network.neutron [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Refreshing network info cache for port 5051af6b-3383-4342-be85-7f3b44b527a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887011915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.017 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start _get_guest_xml network_info=[{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.023 243456 WARNING nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.029 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.031 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.045 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.047 243456 DEBUG nova.network.neutron [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updated VIF entry in instance network info cache for port b9c5316d-0f6b-4a56-8273-692ee1492259. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.047 243456 DEBUG nova.network.neutron [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updating instance_info_cache with network_info: [{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.051 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-2',id=25,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='
tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:57Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=95805a4e-8bc0-47e3-981a-dfe27127a270,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.052 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.053 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.054 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'pci_devices' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.057 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.058 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.058 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.059 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.060 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.060 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.060 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.061 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.061 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.062 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.062 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.063 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.063 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.063 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.068 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.101 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <uuid>95805a4e-8bc0-47e3-981a-dfe27127a270</uuid>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <name>instance-00000019</name>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1851116039-2</nova:name>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:00</nova:creationTime>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:user uuid="7a44c3f76b144221b23743a554a4a839">tempest-ListServersNegativeTestJSON-784056711-project-member</nova:user>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:project uuid="1a4f47b449434c708378388c3e76610e">tempest-ListServersNegativeTestJSON-784056711</nova:project>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <nova:port uuid="dca369b1-9f27-4c57-9e18-4a62a8bd95e9">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <entry name="serial">95805a4e-8bc0-47e3-981a-dfe27127a270</entry>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <entry name="uuid">95805a4e-8bc0-47e3-981a-dfe27127a270</entry>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/95805a4e-8bc0-47e3-981a-dfe27127a270_disk">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:63:2f:18"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <target dev="tapdca369b1-9f"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/console.log" append="off"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:02 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:02 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:02 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:02 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.102 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Preparing to wait for external event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.102 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.102 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.103 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.104 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-2',id=25,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:57Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=95805a4e-8bc0-47e3-981a-dfe27127a270,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.104 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.105 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.105 243456 DEBUG os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.107 243456 DEBUG oslo_concurrency.lockutils [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.108 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.109 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.112 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdca369b1-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.113 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdca369b1-9f, col_values=(('external_ids', {'iface-id': 'dca369b1-9f27-4c57-9e18-4a62a8bd95e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:2f:18', 'vm-uuid': '95805a4e-8bc0-47e3-981a-dfe27127a270'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:02 np0005634017 NetworkManager[49805]: <info>  [1772273042.1161] manager: (tapdca369b1-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.125 243456 INFO os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f')#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.166 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.167 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.167 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No VIF found with MAC fa:16:3e:63:2f:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.168 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Using config drive#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.191 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738159381' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 28 05:04:02 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.633 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.663 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.668 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.931 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Creating config drive at /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config#033[00m
Feb 28 05:04:02 np0005634017 nova_compute[243452]: 2026-02-28 10:04:02.935 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfncl_0wi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.060 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Creating config drive at /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.063 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd281ri9v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.083 243456 DEBUG nova.network.neutron [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updated VIF entry in instance network info cache for port dca369b1-9f27-4c57-9e18-4a62a8bd95e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.085 243456 DEBUG nova.network.neutron [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.088 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfncl_0wi" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.126 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.130 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.157 243456 DEBUG oslo_concurrency.lockutils [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.186 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd281ri9v" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649378117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.216 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.222 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.249 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.251 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-1',id=24,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='
tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:58Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=400869d5-7369-466b-970e-ac7e3f4e2e4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.252 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.253 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.254 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'pci_devices' on Instance uuid 400869d5-7369-466b-970e-ac7e3f4e2e4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.257 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.257 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deleting local config drive /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.282 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <uuid>400869d5-7369-466b-970e-ac7e3f4e2e4c</uuid>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <name>instance-00000018</name>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1851116039-1</nova:name>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:02</nova:creationTime>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:user uuid="7a44c3f76b144221b23743a554a4a839">tempest-ListServersNegativeTestJSON-784056711-project-member</nova:user>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:project uuid="1a4f47b449434c708378388c3e76610e">tempest-ListServersNegativeTestJSON-784056711</nova:project>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <nova:port uuid="5051af6b-3383-4342-be85-7f3b44b527a2">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <entry name="serial">400869d5-7369-466b-970e-ac7e3f4e2e4c</entry>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <entry name="uuid">400869d5-7369-466b-970e-ac7e3f4e2e4c</entry>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/400869d5-7369-466b-970e-ac7e3f4e2e4c_disk">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:04:cc:6f"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <target dev="tap5051af6b-33"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/console.log" append="off"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:03 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:03 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:03 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:03 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.284 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Preparing to wait for external event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.284 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.284 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.285 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.285 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-1',id=24,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_u
ser_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:58Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=400869d5-7369-466b-970e-ac7e3f4e2e4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.286 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.286 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.287 243456 DEBUG os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5051af6b-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5051af6b-33, col_values=(('external_ids', {'iface-id': '5051af6b-3383-4342-be85-7f3b44b527a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:cc:6f', 'vm-uuid': '400869d5-7369-466b-970e-ac7e3f4e2e4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.294 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.2953] manager: (tap5051af6b-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.298 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.304 243456 INFO os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33')#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.3127] manager: (tapdca369b1-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Feb 28 05:04:03 np0005634017 kernel: tapdca369b1-9f: entered promiscuous mode
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00132|binding|INFO|Claiming lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for this chassis.
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00133|binding|INFO|dca369b1-9f27-4c57-9e18-4a62a8bd95e9: Claiming fa:16:3e:63:2f:18 10.100.0.5
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.332 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:2f:18 10.100.0.5'], port_security=['fa:16:3e:63:2f:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95805a4e-8bc0-47e3-981a-dfe27127a270', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=dca369b1-9f27-4c57-9e18-4a62a8bd95e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.333 156681 INFO neutron.agent.ovn.metadata.agent [-] Port dca369b1-9f27-4c57-9e18-4a62a8bd95e9 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 bound to our chassis#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.334 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1#033[00m
Feb 28 05:04:03 np0005634017 systemd-udevd[266638]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:03 np0005634017 systemd-machined[209480]: New machine qemu-28-instance-00000019.
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.347 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf0ce44-1a74-4e54-b4ff-91cf71f7afbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.348 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4a214b7-a1 in ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.350 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4a214b7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68f9a301-b9b3-4a01-a972-d4ea334956e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aac7d99b-f5f4-4bc9-8749-50ce5882ed7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.3597] device (tapdca369b1-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.3608] device (tapdca369b1-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.367 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[590e22cf-079f-4dd6-9346-9399ac17b2d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00134|binding|INFO|Setting lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 ovn-installed in OVS
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00135|binding|INFO|Setting lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 up in Southbound
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.377 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.378 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.378 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No VIF found with MAC fa:16:3e:04:cc:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.378 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Using config drive#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.385 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f43d887d-7ade-4034-87fb-fccb46b4949e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.413 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.416 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[157aabb1-760e-4832-95d3-defeb4844201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.422 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.423 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deleting local config drive /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.4259] manager: (tapc4a214b7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1aac0b48-cd52-4a62-99cd-8eb952c4965d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.461 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9724a2e2-dd10-47aa-8598-7add94c007dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.4634] manager: (tapb9c5316d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Feb 28 05:04:03 np0005634017 systemd-udevd[266689]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:03 np0005634017 kernel: tapb9c5316d-0f: entered promiscuous mode
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00136|binding|INFO|Claiming lport b9c5316d-0f6b-4a56-8273-692ee1492259 for this chassis.
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00137|binding|INFO|b9c5316d-0f6b-4a56-8273-692ee1492259: Claiming fa:16:3e:b1:5c:cb 10.100.0.4
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.473 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[033c45bf-487e-478b-a539-ec19b0d54e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.475 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:5c:cb 10.100.0.4'], port_security=['fa:16:3e:b1:5c:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c38e4584-8a86-41a3-bc10-2a35205cf7c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b9c5316d-0f6b-4a56-8273-692ee1492259) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.4767] device (tapb9c5316d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.4777] device (tapb9c5316d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.481 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00138|binding|INFO|Setting lport b9c5316d-0f6b-4a56-8273-692ee1492259 ovn-installed in OVS
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00139|binding|INFO|Setting lport b9c5316d-0f6b-4a56-8273-692ee1492259 up in Southbound
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.488 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 systemd-machined[209480]: New machine qemu-29-instance-0000001a.
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.5009] device (tapc4a214b7-a0): carrier: link connected
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.505 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[06e20277-a28a-436a-8633-7c39a8f92dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 systemd[1]: Started Virtual Machine qemu-29-instance-0000001a.
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.524 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de1ef5b4-b174-4fb0-8ea5-652d8bca2d29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266709, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.555 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02b578d8-e27e-43f8-b100-761b1759f3ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:1336'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455078, 'tstamp': 455078}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266712, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.580 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8598ac85-dc94-41bb-bc28-1ab91e0ad5b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266717, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.611 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca815b2-ddd9-47c8-bfd6-b339a39457a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.680 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3d8fcd-9d23-4008-a148-8ffd486a6e3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.682 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.686 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 NetworkManager[49805]: <info>  [1772273043.6866] manager: (tapc4a214b7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Feb 28 05:04:03 np0005634017 kernel: tapc4a214b7-a0: entered promiscuous mode
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.691 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:03Z|00140|binding|INFO|Releasing lport 6478ac98-6cab-4170-89f8-76b31581cc8c from this chassis (sb_readonly=0)
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.700 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.703 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.704 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01a04a0e-150d-4f90-99c0-6195ce0a2070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.704 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.pid.haproxy
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:04:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.705 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'env', 'PROCESS_TAG=haproxy-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.772 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273043.7723367, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.773 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.810 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.815 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273043.7736096, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.815 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.834 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.839 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 292 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG nova.compute.manager [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG oslo_concurrency.lockutils [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG oslo_concurrency.lockutils [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG oslo_concurrency.lockutils [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG nova.compute.manager [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Processing event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.845 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.850 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.853 243456 INFO nova.virt.libvirt.driver [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance spawned successfully.#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.853 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.858 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.858 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273043.8511238, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.858 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.861 243456 DEBUG nova.network.neutron [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updated VIF entry in instance network info cache for port 5051af6b-3383-4342-be85-7f3b44b527a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.861 243456 DEBUG nova.network.neutron [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updating instance_info_cache with network_info: [{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.882 243456 DEBUG oslo_concurrency.lockutils [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.884 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.887 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.887 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.923 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.953 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 6.21 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:03 np0005634017 nova_compute[243452]: 2026-02-28 10:04:03.953 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.017 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 8.46 seconds to build instance.#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.041 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Creating config drive at /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.046 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpm13phxm3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.077 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:04 np0005634017 podman[266836]: 2026-02-28 10:04:04.096484514 +0000 UTC m=+0.049997907 container create f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.113 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.113597, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.114 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:04 np0005634017 systemd[1]: Started libpod-conmon-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242.scope.
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.146 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.150 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.1136684, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.151 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:04 np0005634017 podman[266836]: 2026-02-28 10:04:04.070048941 +0000 UTC m=+0.023562354 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.168 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.172 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0957a7b6eff2b2a82eae4443a3c42b23275f1b3848fb9d1ed3e7a8e33112a7b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.189 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpm13phxm3" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:04 np0005634017 podman[266836]: 2026-02-28 10:04:04.191726252 +0000 UTC m=+0.145239735 container init f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 05:04:04 np0005634017 podman[266836]: 2026-02-28 10:04:04.198640666 +0000 UTC m=+0.152154059 container start f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.221 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.225 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:04 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : New worker (266878) forked
Feb 28 05:04:04 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : Loading success.
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.253 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.257 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.257 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.271 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.289 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b9c5316d-0f6b-4a56-8273-692ee1492259 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.292 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9abb9d3a-2a2d-4cf4-a5c3-fa36ce40f696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.343 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f019c1a-fc73-4402-bed3-73b07b92befb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[09da118d-16b4-4a9c-b06a-9225b0991a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.350 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.351 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.362 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.362 243456 INFO nova.compute.claims [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.366 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.367 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deleting local config drive /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.380 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b778da09-c987-4b3b-8b67-7448e28d8b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7fea37-f86d-41f4-ac7a-007fb362821d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266914, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 kernel: tap5051af6b-33: entered promiscuous mode
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e90c19c2-6c94-4ab6-9788-4140a6213860]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266919, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266919, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 NetworkManager[49805]: <info>  [1772273044.4399] manager: (tap5051af6b-33): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.437 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:04Z|00141|binding|INFO|Claiming lport 5051af6b-3383-4342-be85-7f3b44b527a2 for this chassis.
Feb 28 05:04:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:04Z|00142|binding|INFO|5051af6b-3383-4342-be85-7f3b44b527a2: Claiming fa:16:3e:04:cc:6f 10.100.0.9
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:04 np0005634017 NetworkManager[49805]: <info>  [1772273044.4511] device (tap5051af6b-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:04 np0005634017 NetworkManager[49805]: <info>  [1772273044.4520] device (tap5051af6b-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.455 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:cc:6f 10.100.0.9'], port_security=['fa:16:3e:04:cc:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '400869d5-7369-466b-970e-ac7e3f4e2e4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5051af6b-3383-4342-be85-7f3b44b527a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:04Z|00143|binding|INFO|Setting lport 5051af6b-3383-4342-be85-7f3b44b527a2 ovn-installed in OVS
Feb 28 05:04:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:04Z|00144|binding|INFO|Setting lport 5051af6b-3383-4342-be85-7f3b44b527a2 up in Southbound
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.464 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.464 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.466 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.467 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.468 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5051af6b-3383-4342-be85-7f3b44b527a2 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 bound to our chassis#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.469 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1#033[00m
Feb 28 05:04:04 np0005634017 systemd-machined[209480]: New machine qemu-30-instance-00000018.
Feb 28 05:04:04 np0005634017 systemd[1]: Started Virtual Machine qemu-30-instance-00000018.
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c45aac9-a510-4039-9db3-45d90f697094]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.518 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8ed719-923a-441b-a6dd-572e407e6028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.533 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e4ed5e-2a70-4d36-9b91-b01cc37ad000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.559 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.564 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[621b52cb-b216-4feb-8071-a29d4ae2fe5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be133535-30b3-4e58-b805-5f0832c62ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266938, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[313cdf29-5cd3-4ff7-8216-6ceff8951f8c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266940, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266940, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.610 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.929 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.9285486, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.929 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.955 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.962 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.9287317, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.962 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Paused (Lifecycle Event)
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.982 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:04:04 np0005634017 nova_compute[243452]: 2026-02-28 10:04:04.985 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.014 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.026 243456 DEBUG nova.compute.manager [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.027 243456 DEBUG oslo_concurrency.lockutils [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.027 243456 DEBUG oslo_concurrency.lockutils [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.027 243456 DEBUG oslo_concurrency.lockutils [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.028 243456 DEBUG nova.compute.manager [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Processing event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.029 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.038 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273045.037757, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.039 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Resumed (Lifecycle Event)
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.041 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.044 243456 INFO nova.virt.libvirt.driver [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance spawned successfully.
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.045 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.087 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.093 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.098 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.098 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.099 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.099 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.100 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.100 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:04:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439360762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.145 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.155 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.161 243456 DEBUG nova.compute.provider_tree [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.168 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 6.63 seconds to spawn the instance on the hypervisor.
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.169 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.175 243456 DEBUG nova.scheduler.client.report [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.199 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.200 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.231 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 9.68 seconds to build instance.
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.256 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.257 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.260 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.274 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.294 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.377 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.378 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.378 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Creating image(s)
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.407 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.429 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.452 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.457 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.528 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.529 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.530 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.530 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.554 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.559 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f1026535-7729-43d0-8027-dd71ef14dfbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:04:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 301 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.0 MiB/s wr, 205 op/s
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.859 243456 DEBUG nova.policy [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:04:05 np0005634017 nova_compute[243452]: 2026-02-28 10:04:05.968 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f1026535-7729-43d0-8027-dd71ef14dfbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.007 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.007 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] No waiting events found dispatching network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 WARNING nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received unexpected event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for instance with vm_state active and task_state None.
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Processing event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] No waiting events found dispatching network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.011 243456 WARNING nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received unexpected event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 for instance with vm_state building and task_state spawning.
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.012 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.055 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273046.0170944, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.056 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Resumed (Lifecycle Event)
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.058 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.066 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] resizing rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.100 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.105 243456 INFO nova.virt.libvirt.driver [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance spawned successfully.
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.106 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.109 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.158 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.166 243456 DEBUG nova.objects.instance [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'migration_context' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.170 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.170 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.170 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.171 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.171 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.171 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.199 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.200 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Ensure instance console log exists: /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.200 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.201 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.201 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.270 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 9.66 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.270 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.359 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 10.82 seconds to build instance.#033[00m
Feb 28 05:04:06 np0005634017 nova_compute[243452]: 2026-02-28 10:04:06.379 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.310 243456 DEBUG nova.compute.manager [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.310 243456 DEBUG oslo_concurrency.lockutils [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 DEBUG oslo_concurrency.lockutils [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 DEBUG oslo_concurrency.lockutils [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 DEBUG nova.compute.manager [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] No waiting events found dispatching network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 WARNING nova.compute.manager [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received unexpected event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:04:07 np0005634017 nova_compute[243452]: 2026-02-28 10:04:07.316 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully created port: 76d5199d-5d1e-4198-8780-c2537175a2be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:04:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 316 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.3 MiB/s wr, 305 op/s
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.296 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.571 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully updated port: 76d5199d-5d1e-4198-8780-c2537175a2be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.587 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.587 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.588 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.727 243456 DEBUG nova.compute.manager [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.728 243456 DEBUG nova.compute.manager [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing instance network info cache due to event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.729 243456 DEBUG oslo_concurrency.lockutils [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:08 np0005634017 nova_compute[243452]: 2026-02-28 10:04:08.826 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.346 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.476 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.479 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.480 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.480 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.481 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.483 243456 INFO nova.compute.manager [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Terminating instance#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.485 243456 DEBUG nova.compute.manager [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:04:09 np0005634017 kernel: tap5051af6b-33 (unregistering): left promiscuous mode
Feb 28 05:04:09 np0005634017 NetworkManager[49805]: <info>  [1772273049.5257] device (tap5051af6b-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:09Z|00145|binding|INFO|Releasing lport 5051af6b-3383-4342-be85-7f3b44b527a2 from this chassis (sb_readonly=0)
Feb 28 05:04:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:09Z|00146|binding|INFO|Setting lport 5051af6b-3383-4342-be85-7f3b44b527a2 down in Southbound
Feb 28 05:04:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:09Z|00147|binding|INFO|Removing iface tap5051af6b-33 ovn-installed in OVS
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.540 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:cc:6f 10.100.0.9'], port_security=['fa:16:3e:04:cc:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '400869d5-7369-466b-970e-ac7e3f4e2e4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5051af6b-3383-4342-be85-7f3b44b527a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.542 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5051af6b-3383-4342-be85-7f3b44b527a2 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.544 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc5e764-cf1e-4681-875f-4681b943dd10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:09 np0005634017 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb 28 05:04:09 np0005634017 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Consumed 4.946s CPU time.
Feb 28 05:04:09 np0005634017 systemd-machined[209480]: Machine qemu-30-instance-00000018 terminated.
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.601 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9404f4ca-5a55-497f-8e22-75794572af20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.606 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[67599f2b-5342-40cd-a2e1-f4cd696ac314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.634 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5df59501-160c-4d64-bcc8-874f22d596c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6c1108-4a5c-479f-a42e-06c8b1c32d2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267201, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.674 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2259dd61-fe72-4b0d-8d8f-0e7d777a812d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267202, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267202, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.676 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.678 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.684 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.728 243456 INFO nova.virt.libvirt.driver [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance destroyed successfully.#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.730 243456 DEBUG nova.objects.instance [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'resources' on Instance uuid 400869d5-7369-466b-970e-ac7e3f4e2e4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.747 243456 DEBUG nova.virt.libvirt.vif [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-1',id=24,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:05Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=400869d5-7369-466b-970e-ac7e3f4e2e4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.748 243456 DEBUG nova.network.os_vif_util [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.749 243456 DEBUG nova.network.os_vif_util [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.749 243456 DEBUG os_vif [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.755 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5051af6b-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.763 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.765 243456 INFO os_vif [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33')#033[00m
Feb 28 05:04:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 316 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.3 MiB/s wr, 305 op/s
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.852 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.882 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.883 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance network_info: |[{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.883 243456 DEBUG oslo_concurrency.lockutils [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.884 243456 DEBUG nova.network.neutron [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595329839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.888 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start _get_guest_xml network_info=[{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.896 243456 WARNING nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.901 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.903 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.915 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.916 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.919 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.920 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.920 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.922 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.922 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.924 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.924 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.924 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.925 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:09 np0005634017 nova_compute[243452]: 2026-02-28 10:04:09.928 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.080 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.082 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.097 243456 INFO nova.virt.libvirt.driver [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deleting instance files /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c_del#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.104 243456 INFO nova.virt.libvirt.driver [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deletion of /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c_del complete#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.115 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.116 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.122 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.123 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.203 243456 INFO nova.compute.manager [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.204 243456 DEBUG oslo.service.loopingcall [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.204 243456 DEBUG nova.compute.manager [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.205 243456 DEBUG nova.network.neutron [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.322 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.324 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4132MB free_disk=59.91480299457908GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.324 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.325 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.405 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 400869d5-7369-466b-970e-ac7e3f4e2e4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.406 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 95805a4e-8bc0-47e3-981a-dfe27127a270 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.406 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c38e4584-8a86-41a3-bc10-2a35205cf7c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f1026535-7729-43d0-8027-dd71ef14dfbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:04:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332031427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.494 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.518 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.543 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:10 np0005634017 nova_compute[243452]: 2026-02-28 10:04:10.548 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183075377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.051 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.058 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.088 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.119 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.120 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.120 243456 DEBUG nova.network.neutron [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096698919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.141 243456 INFO nova.compute.manager [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 0.94 seconds to deallocate network for instance.#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.153 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.155 243456 DEBUG nova.virt.libvirt.vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.156 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.157 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.159 243456 DEBUG nova.objects.instance [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_devices' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.181 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <name>instance-0000001b</name>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:09</nova:creationTime>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <entry name="serial">f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <entry name="uuid">f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7a:bd:5b"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <target dev="tap76d5199d-5d"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log" append="off"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:11 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:11 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:11 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:11 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.189 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Preparing to wait for external event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.189 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.190 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.190 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.192 243456 DEBUG nova.virt.libvirt.vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.192 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.194 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.195 243456 DEBUG os_vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.197 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.201 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.202 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.205 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76d5199d-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.208 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76d5199d-5d, col_values=(('external_ids', {'iface-id': '76d5199d-5d1e-4198-8780-c2537175a2be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:bd:5b', 'vm-uuid': 'f1026535-7729-43d0-8027-dd71ef14dfbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:11 np0005634017 NetworkManager[49805]: <info>  [1772273051.2116] manager: (tap76d5199d-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.222 243456 INFO os_vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d')#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.308 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.309 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.310 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:7a:bd:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.310 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Using config drive#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.337 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.345 243456 DEBUG oslo_concurrency.processutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.829 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Creating config drive at /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.840 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp631urk_j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 315 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.2 MiB/s wr, 340 op/s
Feb 28 05:04:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3373037427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.874 243456 DEBUG oslo_concurrency.processutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.881 243456 DEBUG nova.compute.provider_tree [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.887 243456 DEBUG nova.network.neutron [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updated VIF entry in instance network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.888 243456 DEBUG nova.network.neutron [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.915 243456 DEBUG nova.scheduler.client.report [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.923 243456 DEBUG oslo_concurrency.lockutils [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.945 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:11 np0005634017 nova_compute[243452]: 2026-02-28 10:04:11.973 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp631urk_j" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.005 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.010 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.026 243456 INFO nova.scheduler.client.report [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Deleted allocations for instance 400869d5-7369-466b-970e-ac7e3f4e2e4c#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.092 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.122 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.123 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.123 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.134 243456 DEBUG nova.compute.manager [req-c621f763-9b68-407a-aa6a-383ef2385eda req-00659073-e0da-4d7b-b26a-11178ec0c500 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-vif-deleted-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.135 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.136 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deleting local config drive /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.145 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:04:12 np0005634017 kernel: tap76d5199d-5d: entered promiscuous mode
Feb 28 05:04:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:12Z|00148|binding|INFO|Claiming lport 76d5199d-5d1e-4198-8780-c2537175a2be for this chassis.
Feb 28 05:04:12 np0005634017 systemd-udevd[267192]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:12Z|00149|binding|INFO|76d5199d-5d1e-4198-8780-c2537175a2be: Claiming fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 05:04:12 np0005634017 NetworkManager[49805]: <info>  [1772273052.1805] manager: (tap76d5199d-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:12 np0005634017 NetworkManager[49805]: <info>  [1772273052.1938] device (tap76d5199d-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:12 np0005634017 NetworkManager[49805]: <info>  [1772273052.1943] device (tap76d5199d-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.193 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.194 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.196 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.196 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.197 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[13a72c46-6f21-475c-9e9b-51bca3a5e037]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.207 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60dcefc3-91 in ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.209 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60dcefc3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d275725b-6cfb-4a36-a967-7fdb8d8d0954]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 systemd-machined[209480]: New machine qemu-31-instance-0000001b.
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.211 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32db3bc1-d678-4ff7-9b62-c2fe579d013a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.211 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.219 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c8605f-bc97-40c5-989a-6fc534f6d671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:12Z|00150|binding|INFO|Setting lport 76d5199d-5d1e-4198-8780-c2537175a2be ovn-installed in OVS
Feb 28 05:04:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:12Z|00151|binding|INFO|Setting lport 76d5199d-5d1e-4198-8780-c2537175a2be up in Southbound
Feb 28 05:04:12 np0005634017 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a656fea-b2b9-446d-97ce-6b3a8095e9b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.265 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c868daf0-ab9b-4b8f-a551-081fb75fbb0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 NetworkManager[49805]: <info>  [1772273052.2720] manager: (tap60dcefc3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.270 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[499cc033-54ef-4e05-aca2-52514800dd01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.282 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.283 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.295 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da4a3b-75f5-471c-9ef4-58bc1dff0aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.297 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.298 243456 INFO nova.compute.claims [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9521226c-60d3-4e20-b3ab-642cdbddcd0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 NetworkManager[49805]: <info>  [1772273052.3167] device (tap60dcefc3-90): carrier: link connected
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.320 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.321 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.321 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.321 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.321 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[705ee4c8-a51d-42d7-8da0-703cebc3efd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.337 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5075d66-3561-43db-bfcc-faca94d46d45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267446, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f023fc8-9854-4b88-8cdb-dd2096938868]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:227a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455960, 'tstamp': 455960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267447, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9afaa222-f95c-4158-8d2b-675b6c0efa51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267448, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e7a01f-0d87-488a-866d-9067bd1e407f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.438 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37e891bd-3bc6-4052-88ae-a4c3afea3490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:12 np0005634017 NetworkManager[49805]: <info>  [1772273052.4426] manager: (tap60dcefc3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:12 np0005634017 kernel: tap60dcefc3-90: entered promiscuous mode
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.445 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:12Z|00152|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.471 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[095ee223-9216-4b09-b77c-825e72a9d90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.473 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:04:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.473 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'env', 'PROCESS_TAG=haproxy-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60dcefc3-95e1-437e-9c00-e51656c39b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.612 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.612 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.627 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.680 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:12 np0005634017 podman[267499]: 2026-02-28 10:04:12.8847539 +0000 UTC m=+0.078219710 container create 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 05:04:12 np0005634017 systemd[1]: Started libpod-conmon-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543.scope.
Feb 28 05:04:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138718886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:12 np0005634017 podman[267499]: 2026-02-28 10:04:12.838193292 +0000 UTC m=+0.031659152 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:04:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56ba081050cdf68ac2830d8fd1d550ce7d433e93c5d67b3c7be66a947248126/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.960 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:12 np0005634017 podman[267499]: 2026-02-28 10:04:12.964855692 +0000 UTC m=+0.158321502 container init 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.966 243456 DEBUG nova.compute.provider_tree [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:12 np0005634017 podman[267499]: 2026-02-28 10:04:12.970000447 +0000 UTC m=+0.163466257 container start 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:04:12 np0005634017 nova_compute[243452]: 2026-02-28 10:04:12.983 243456 DEBUG nova.scheduler.client.report [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:12 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : New worker (267521) forked
Feb 28 05:04:12 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : Loading success.
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.004 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.004 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.007 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.012 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.013 243456 INFO nova.compute.claims [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.166 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.167 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.186 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.204 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.230 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273053.229568, f1026535-7729-43d0-8027-dd71ef14dfbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.230 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.262 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.267 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273053.2304146, f1026535-7729-43d0-8027-dd71ef14dfbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.267 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.297 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.301 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.321 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.323 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.323 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Creating image(s)#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.348 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.375 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.402 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.405 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.425 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.438 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.471 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.472 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.473 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.474 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.497 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.500 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.516 243456 DEBUG nova.policy [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2737540b5d9a437cac0ea91b25f0c5d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.522 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.546 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.547 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.548 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.549 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.549 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.549 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.736 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.794 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] resizing rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:04:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 299 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.9 MiB/s wr, 282 op/s
Feb 28 05:04:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222971367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.983 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:13 np0005634017 nova_compute[243452]: 2026-02-28 10:04:13.990 243456 DEBUG nova.compute.provider_tree [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.016 243456 DEBUG nova.scheduler.client.report [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.055 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.056 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.104 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.105 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.120 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.135 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.157 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Successfully created port: 3c6e6f23-d681-47b4-a8e5-474ce94e984d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.215 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.217 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.218 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Creating image(s)#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.433 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.454 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.477 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.485 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.501 243456 DEBUG nova.policy [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.545 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.546 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.546 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.547 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.566 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.572 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.882 243456 DEBUG nova.objects.instance [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d74f9b9-edf7-4d81-b139-cb664b2ab68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.901 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.901 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Ensure instance console log exists: /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.902 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.902 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.903 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:14 np0005634017 nova_compute[243452]: 2026-02-28 10:04:14.991 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.041 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.118 243456 DEBUG nova.objects.instance [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.156 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.157 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Ensure instance console log exists: /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.157 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.158 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.158 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.280 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Successfully updated port: 3c6e6f23-d681-47b4-a8e5-474ce94e984d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.294 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.295 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquired lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.295 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.420 243456 DEBUG nova.compute.manager [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-changed-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.421 243456 DEBUG nova.compute.manager [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Refreshing instance network info cache due to event network-changed-3c6e6f23-d681-47b4-a8e5-474ce94e984d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.421 243456 DEBUG oslo_concurrency.lockutils [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.482 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Successfully created port: 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.507 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.661 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.662 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.662 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.663 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.664 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.666 243456 INFO nova.compute.manager [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Terminating instance#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.668 243456 DEBUG nova.compute.manager [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:04:15 np0005634017 kernel: tapdca369b1-9f (unregistering): left promiscuous mode
Feb 28 05:04:15 np0005634017 NetworkManager[49805]: <info>  [1772273055.7158] device (tapdca369b1-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:15Z|00153|binding|INFO|Releasing lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 from this chassis (sb_readonly=0)
Feb 28 05:04:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:15Z|00154|binding|INFO|Setting lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 down in Southbound
Feb 28 05:04:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:15Z|00155|binding|INFO|Removing iface tapdca369b1-9f ovn-installed in OVS
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.764 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:2f:18 10.100.0.5'], port_security=['fa:16:3e:63:2f:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95805a4e-8bc0-47e3-981a-dfe27127a270', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=dca369b1-9f27-4c57-9e18-4a62a8bd95e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.765 156681 INFO neutron.agent.ovn.metadata.agent [-] Port dca369b1-9f27-4c57-9e18-4a62a8bd95e9 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis#033[00m
Feb 28 05:04:15 np0005634017 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb 28 05:04:15 np0005634017 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 11.289s CPU time.
Feb 28 05:04:15 np0005634017 systemd-machined[209480]: Machine qemu-28-instance-00000019 terminated.
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.772 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.787 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5d9dc1-cc30-4edf-93c8-01e79d064443]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.803 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[15267a39-803f-4efc-8080-feda80500290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.805 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c6eb26-e165-4505-bab8-f823933e9d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.829 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4ceb3943-d93f-44fd-b1d0-9ba44a60fe7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 351 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 302 op/s
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b463abcd-1711-4b79-96d9-801886eba87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267938, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1e6c6f-3a78-4e01-83e2-f8dd7b940908]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267941, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267941, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.866 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:15 np0005634017 NetworkManager[49805]: <info>  [1772273055.8826] manager: (tapdca369b1-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.897 243456 INFO nova.virt.libvirt.driver [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance destroyed successfully.#033[00m
Feb 28 05:04:15 np0005634017 nova_compute[243452]: 2026-02-28 10:04:15.898 243456 DEBUG nova.objects.instance [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'resources' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.009 243456 DEBUG nova.virt.libvirt.vif [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-2',id=25,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-28T10:04:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:03Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=95805a4e-8bc0-47e3-981a-dfe27127a270,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.011 243456 DEBUG nova.network.os_vif_util [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.012 243456 DEBUG nova.network.os_vif_util [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.013 243456 DEBUG os_vif [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.016 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdca369b1-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.018 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.018 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.021 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.021 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.022 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.023 243456 INFO nova.compute.manager [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Terminating instance#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.025 243456 DEBUG nova.compute.manager [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.028 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.031 243456 INFO os_vif [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f')#033[00m
Feb 28 05:04:16 np0005634017 kernel: tapb9c5316d-0f (unregistering): left promiscuous mode
Feb 28 05:04:16 np0005634017 NetworkManager[49805]: <info>  [1772273056.0723] device (tapb9c5316d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:16Z|00156|binding|INFO|Releasing lport b9c5316d-0f6b-4a56-8273-692ee1492259 from this chassis (sb_readonly=0)
Feb 28 05:04:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:16Z|00157|binding|INFO|Setting lport b9c5316d-0f6b-4a56-8273-692ee1492259 down in Southbound
Feb 28 05:04:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:16Z|00158|binding|INFO|Removing iface tapb9c5316d-0f ovn-installed in OVS
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.092 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:5c:cb 10.100.0.4'], port_security=['fa:16:3e:b1:5c:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c38e4584-8a86-41a3-bc10-2a35205cf7c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b9c5316d-0f6b-4a56-8273-692ee1492259) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.094 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b9c5316d-0f6b-4a56-8273-692ee1492259 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.096 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.097 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5c799d-073c-480f-892b-bcb47a46b39e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.098 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 namespace which is not needed anymore#033[00m
Feb 28 05:04:16 np0005634017 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb 28 05:04:16 np0005634017 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Consumed 10.516s CPU time.
Feb 28 05:04:16 np0005634017 systemd-machined[209480]: Machine qemu-29-instance-0000001a terminated.
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : haproxy version is 2.8.14-c23fe91
Feb 28 05:04:16 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : path to executable is /usr/sbin/haproxy
Feb 28 05:04:16 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [WARNING]  (266865) : Exiting Master process...
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [ALERT]    (266865) : Current worker (266878) exited with code 143 (Terminated)
Feb 28 05:04:16 np0005634017 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [WARNING]  (266865) : All workers exited. Exiting... (0)
Feb 28 05:04:16 np0005634017 systemd[1]: libpod-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242.scope: Deactivated successfully.
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.267 243456 INFO nova.virt.libvirt.driver [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance destroyed successfully.#033[00m
Feb 28 05:04:16 np0005634017 podman[267991]: 2026-02-28 10:04:16.268683833 +0000 UTC m=+0.062284142 container died f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.268 243456 DEBUG nova.objects.instance [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'resources' on Instance uuid c38e4584-8a86-41a3-bc10-2a35205cf7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.282 243456 DEBUG nova.virt.libvirt.vif [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-3',id=26,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-02-28T10:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:06Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=c38e4584-8a86-41a3-bc10-2a35205cf7c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.283 243456 DEBUG nova.network.os_vif_util [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.284 243456 DEBUG nova.network.os_vif_util [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.284 243456 DEBUG os_vif [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.287 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9c5316d-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.295 243456 INFO os_vif [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f')#033[00m
Feb 28 05:04:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242-userdata-shm.mount: Deactivated successfully.
Feb 28 05:04:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0957a7b6eff2b2a82eae4443a3c42b23275f1b3848fb9d1ed3e7a8e33112a7b0-merged.mount: Deactivated successfully.
Feb 28 05:04:16 np0005634017 podman[267991]: 2026-02-28 10:04:16.320218582 +0000 UTC m=+0.113818891 container cleanup f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:16 np0005634017 systemd[1]: libpod-conmon-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242.scope: Deactivated successfully.
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.347 243456 INFO nova.virt.libvirt.driver [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deleting instance files /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270_del#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.348 243456 INFO nova.virt.libvirt.driver [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deletion of /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270_del complete#033[00m
Feb 28 05:04:16 np0005634017 podman[268047]: 2026-02-28 10:04:16.382708439 +0000 UTC m=+0.045045387 container remove f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.388 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8019c9-ac93-4747-8d30-64010010c445]: (4, ('Sat Feb 28 10:04:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 (f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242)\nf03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242\nSat Feb 28 10:04:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 (f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242)\nf03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.389 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88caa938-acac-4765-af22-30f2f1a9129f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.390 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 kernel: tapc4a214b7-a0: left promiscuous mode
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.397 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a212a6d-2960-4a7f-8278-20ee151a7c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.403 243456 INFO nova.compute.manager [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.403 243456 DEBUG oslo.service.loopingcall [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.404 243456 DEBUG nova.compute.manager [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.404 243456 DEBUG nova.network.neutron [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.415 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2231474-848b-4b09-b3c5-d6d05efc9a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff7f827-d1dd-4433-a8e1-446acb8e1cb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea07fef-61b0-4a98-8934-1feea67ffe7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455069, 'reachable_time': 28078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268065, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 systemd[1]: run-netns-ovnmeta\x2dc4a214b7\x2da9ab\x2d4548\x2d82a3\x2def624bc8e6a1.mount: Deactivated successfully.
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.437 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:04:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.438 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8eeb50cd-6961-48c4-bf82-8a73a0b3a519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.537 243456 INFO nova.virt.libvirt.driver [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deleting instance files /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7_del#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.538 243456 INFO nova.virt.libvirt.driver [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deletion of /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7_del complete#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.596 243456 INFO nova.compute.manager [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.597 243456 DEBUG oslo.service.loopingcall [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.597 243456 DEBUG nova.compute.manager [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.598 243456 DEBUG nova.network.neutron [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.656 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updating instance_info_cache with network_info: [{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:04:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 12K writes, 51K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
Cumulative WAL: 12K writes, 3648 syncs, 3.44 writes per sync, written: 0.05 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6674 writes, 27K keys, 6674 commit groups, 1.0 writes per commit group, ingest: 29.85 MB, 0.05 MB/s
Interval WAL: 6674 writes, 2630 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.677 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Releasing lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.678 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance network_info: |[{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.679 243456 DEBUG oslo_concurrency.lockutils [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.679 243456 DEBUG nova.network.neutron [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Refreshing network info cache for port 3c6e6f23-d681-47b4-a8e5-474ce94e984d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.684 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start _get_guest_xml network_info=[{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.690 243456 WARNING nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.697 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.698 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.706 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.707 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.707 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.708 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.709 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.709 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.710 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.710 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.710 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.711 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.711 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.712 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.712 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.713 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:16 np0005634017 nova_compute[243452]: 2026-02-28 10:04:16.717 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719924605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.253 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.289 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.295 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.458 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Successfully updated port: 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.489 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.489 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.490 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.512 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.512 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.512 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Processing event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 WARNING nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-unplugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] No waiting events found dispatching network-vif-unplugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-unplugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] No waiting events found dispatching network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 WARNING nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received unexpected event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-unplugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] No waiting events found dispatching network-vif-unplugged-b9c5316d-0f6b-4a56-8273-692ee1492259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-unplugged-b9c5316d-0f6b-4a56-8273-692ee1492259 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.518 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.523 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273057.52318, f1026535-7729-43d0-8027-dd71ef14dfbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.523 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.525 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.532 243456 INFO nova.virt.libvirt.driver [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance spawned successfully.#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.533 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.551 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.555 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.562 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.563 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.563 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.564 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.564 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.564 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.627 243456 INFO nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 12.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.628 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.683 243456 INFO nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 13.37 seconds to build instance.#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.702 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.810 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:04:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705202442' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.840 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.843 243456 DEBUG nova.virt.libvirt.vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-279405115',display_name='tempest-ImagesOneServerNegativeTestJSON-server-279405115',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-279405115',id=28,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-4l4iiwh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:13Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=6d74f9b9-edf7-4d81-b139-cb664b2ab68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.844 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.846 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.848 243456 DEBUG nova.objects.instance [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d74f9b9-edf7-4d81-b139-cb664b2ab68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 355 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.7 MiB/s wr, 332 op/s
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.868 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <uuid>6d74f9b9-edf7-4d81-b139-cb664b2ab68c</uuid>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <name>instance-0000001c</name>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-279405115</nova:name>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:16</nova:creationTime>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:user uuid="2737540b5d9a437cac0ea91b25f0c5d8">tempest-ImagesOneServerNegativeTestJSON-356581433-project-member</nova:user>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:project uuid="da458887a8634c5a8b9a38fcbcc44e07">tempest-ImagesOneServerNegativeTestJSON-356581433</nova:project>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <nova:port uuid="3c6e6f23-d681-47b4-a8e5-474ce94e984d">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <entry name="serial">6d74f9b9-edf7-4d81-b139-cb664b2ab68c</entry>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <entry name="uuid">6d74f9b9-edf7-4d81-b139-cb664b2ab68c</entry>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:32:58:99"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <target dev="tap3c6e6f23-d6"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/console.log" append="off"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:17 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:17 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:17 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:17 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Preparing to wait for external event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.871 243456 DEBUG nova.virt.libvirt.vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-279405115',display_name='tempest-ImagesOneServerNegativeTestJSON-server-279405115',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-279405115',id=28,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-4l4iiwh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:13Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=6d74f9b9-edf7-4d81-b139-cb664b2ab68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.871 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.872 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.872 243456 DEBUG os_vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.873 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.873 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.874 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.876 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.876 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c6e6f23-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.877 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c6e6f23-d6, col_values=(('external_ids', {'iface-id': '3c6e6f23-d681-47b4-a8e5-474ce94e984d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:58:99', 'vm-uuid': '6d74f9b9-edf7-4d81-b139-cb664b2ab68c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:17 np0005634017 NetworkManager[49805]: <info>  [1772273057.8799] manager: (tap3c6e6f23-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.887 243456 INFO os_vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6')#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.945 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.946 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.946 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No VIF found with MAC fa:16:3e:32:58:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.946 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Using config drive#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.972 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:17 np0005634017 nova_compute[243452]: 2026-02-28 10:04:17.980 243456 DEBUG nova.network.neutron [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.003 243456 DEBUG nova.network.neutron [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.013 243456 INFO nova.compute.manager [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 1.41 seconds to deallocate network for instance.#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.019 243456 INFO nova.compute.manager [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 1.61 seconds to deallocate network for instance.#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.076 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.077 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.079 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.194 243456 DEBUG oslo_concurrency.processutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.457 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Creating config drive at /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.463 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcy_ql8hm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.482 243456 DEBUG nova.network.neutron [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updated VIF entry in instance network info cache for port 3c6e6f23-d681-47b4-a8e5-474ce94e984d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.483 243456 DEBUG nova.network.neutron [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updating instance_info_cache with network_info: [{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.498 243456 DEBUG oslo_concurrency.lockutils [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.589 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcy_ql8hm" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.627 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.633 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.663 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updating instance_info_cache with network_info: [{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.689 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.690 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance network_info: |[{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.694 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start _get_guest_xml network_info=[{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.700 243456 WARNING nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.705 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150524428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.706 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.710 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.711 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.711 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.712 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.713 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.713 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.714 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.714 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.715 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.716 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.716 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.717 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.717 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.717 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.722 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.739 243456 DEBUG oslo_concurrency.processutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.745 243456 DEBUG nova.compute.provider_tree [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.781 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.782 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deleting local config drive /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:18 np0005634017 kernel: tap3c6e6f23-d6: entered promiscuous mode
Feb 28 05:04:18 np0005634017 NetworkManager[49805]: <info>  [1772273058.8146] manager: (tap3c6e6f23-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Feb 28 05:04:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:18Z|00159|binding|INFO|Claiming lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d for this chassis.
Feb 28 05:04:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:18Z|00160|binding|INFO|3c6e6f23-d681-47b4-a8e5-474ce94e984d: Claiming fa:16:3e:32:58:99 10.100.0.8
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.816 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:18 np0005634017 systemd-udevd[268016]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:18 np0005634017 NetworkManager[49805]: <info>  [1772273058.8320] device (tap3c6e6f23-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:18 np0005634017 NetworkManager[49805]: <info>  [1772273058.8325] device (tap3c6e6f23-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:18Z|00161|binding|INFO|Setting lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d ovn-installed in OVS
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.863 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:18 np0005634017 systemd-machined[209480]: New machine qemu-32-instance-0000001c.
Feb 28 05:04:18 np0005634017 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Feb 28 05:04:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:18Z|00162|binding|INFO|Setting lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d up in Southbound
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.914 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:58:99 10.100.0.8'], port_security=['fa:16:3e:32:58:99 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6d74f9b9-edf7-4d81-b139-cb664b2ab68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3c6e6f23-d681-47b4-a8e5-474ce94e984d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.916 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6e6f23-d681-47b4-a8e5-474ce94e984d in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf bound to our chassis#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.919 243456 DEBUG nova.scheduler.client.report [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.919 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d94869fd-98eb-4452-9038-076f674c9e72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.935 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2eebd3ec-f1 in ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.937 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2eebd3ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.938 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.938 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0523a2-8276-406d-94db-983f58886d8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.939 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df5d8098-11c0-423f-99a4-5b8b16ff6dd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.940 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.955 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd3f93a-6a65-4f3b-af27-e6163cca8af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:18 np0005634017 nova_compute[243452]: 2026-02-28 10:04:18.958 243456 INFO nova.scheduler.client.report [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Deleted allocations for instance c38e4584-8a86-41a3-bc10-2a35205cf7c7#033[00m
Feb 28 05:04:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d262d52-dd10-4e4e-ba32-fb7088097f9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.007 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e280dcf-1c86-41fa-8534-096aebdda992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 NetworkManager[49805]: <info>  [1772273059.0161] manager: (tap2eebd3ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[427e766f-5afd-412d-acad-2704c6024649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.046 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00bf3894-15e2-419c-992d-8ac0cd8c955b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.049 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1339a6af-111c-4371-9ef3-c8f534d6f33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.062 243456 DEBUG oslo_concurrency.processutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:19 np0005634017 NetworkManager[49805]: <info>  [1772273059.0677] device (tap2eebd3ec-f0): carrier: link connected
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.071 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9fc6c1-42d5-4009-85a8-2c98d1fa3a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.082 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.088 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6daa29-4bd1-41ec-93c7-1dc0d8f8e7ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456635, 'reachable_time': 25550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268276, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d287b949-450e-4e1e-bd4e-12efb3387424]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:321'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456635, 'tstamp': 456635}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268277, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.116 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea15570-3d47-4474-ad54-5bbed22d3f2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456635, 'reachable_time': 25550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268278, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.143 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6202e8-7042-4b74-adae-8415685f01e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.192 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a37b209-4ed7-46d4-9503-32b60c4bdd91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.195 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eebd3ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:19 np0005634017 NetworkManager[49805]: <info>  [1772273059.1976] manager: (tap2eebd3ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 kernel: tap2eebd3ec-f0: entered promiscuous mode
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.205 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2eebd3ec-f0, col_values=(('external_ids', {'iface-id': '6b6cc396-2618-4c5f-8702-0c03569c876b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:19Z|00163|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.216 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.218 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd29cc9a-60f6-4ffb-a9a5-353b2e905561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.219 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:04:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.220 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'env', 'PROCESS_TAG=haproxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:04:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297271897' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.301 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.331 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.335 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1214607614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.554 243456 DEBUG oslo_concurrency.processutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.566 243456 DEBUG nova.compute.provider_tree [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:19 np0005634017 podman[268368]: 2026-02-28 10:04:19.579668125 +0000 UTC m=+0.068659821 container create daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.603 243456 DEBUG nova.scheduler.client.report [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.620 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-changed-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.620 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Refreshing instance network info cache due to event network-changed-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.621 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.621 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.622 243456 DEBUG nova.network.neutron [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Refreshing network info cache for port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:19 np0005634017 podman[268368]: 2026-02-28 10:04:19.5343208 +0000 UTC m=+0.023312476 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:04:19 np0005634017 systemd[1]: Started libpod-conmon-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f.scope.
Feb 28 05:04:19 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.646 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:19 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd5eb1ee2da379ebf68d6e75d71e29d19c55cfc0b19947902e32f6aaa6684a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:19 np0005634017 podman[268368]: 2026-02-28 10:04:19.665777736 +0000 UTC m=+0.154769442 container init daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.667 243456 INFO nova.scheduler.client.report [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Deleted allocations for instance 95805a4e-8bc0-47e3-981a-dfe27127a270#033[00m
Feb 28 05:04:19 np0005634017 podman[268368]: 2026-02-28 10:04:19.672280719 +0000 UTC m=+0.161272395 container start daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:04:19 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : New worker (268391) forked
Feb 28 05:04:19 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : Loading success.
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.741 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 355 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 226 op/s
Feb 28 05:04:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495951900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.879 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.881 243456 DEBUG nova.virt.libvirt.vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1083426510',display_name='tempest-ImagesTestJSON-server-1083426510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1083426510',id=29,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-dl71n395',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:14Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.882 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.882 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.883 243456 DEBUG nova.objects.instance [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.903 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <uuid>9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab</uuid>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <name>instance-0000001d</name>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-ImagesTestJSON-server-1083426510</nova:name>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:18</nova:creationTime>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <nova:port uuid="2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <entry name="serial">9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab</entry>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <entry name="uuid">9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab</entry>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:3a:a0:83"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <target dev="tap2d6bdb4d-0d"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/console.log" append="off"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.907 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Preparing to wait for external event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.908 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.908 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.908 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.909 243456 DEBUG nova.virt.libvirt.vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1083426510',display_name='tempest-ImagesTestJSON-server-1083426510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1083426510',id=29,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-dl71n395',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:14Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.909 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.910 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.910 243456 DEBUG os_vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.911 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.911 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.914 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d6bdb4d-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.914 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d6bdb4d-0d, col_values=(('external_ids', {'iface-id': '2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:a0:83', 'vm-uuid': '9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 NetworkManager[49805]: <info>  [1772273059.9171] manager: (tap2d6bdb4d-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.921 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.922 243456 INFO os_vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d')#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.975 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.976 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.976 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:3a:a0:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.977 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Using config drive#033[00m
Feb 28 05:04:19 np0005634017 nova_compute[243452]: 2026-02-28 10:04:19.997 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.123 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273060.1229281, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.123 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.162 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.167 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273060.1249266, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.167 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.192 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.195 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:20 np0005634017 nova_compute[243452]: 2026-02-28 10:04:20.222 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.465 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Creating config drive at /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.473 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa8um2us5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.603 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa8um2us5" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.626 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.629 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:21 np0005634017 NetworkManager[49805]: <info>  [1772273061.6462] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Feb 28 05:04:21 np0005634017 NetworkManager[49805]: <info>  [1772273061.6471] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:21Z|00164|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:04:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:21Z|00165|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.763 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.764 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deleting local config drive /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:21 np0005634017 kernel: tap2d6bdb4d-0d: entered promiscuous mode
Feb 28 05:04:21 np0005634017 NetworkManager[49805]: <info>  [1772273061.8079] manager: (tap2d6bdb4d-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Feb 28 05:04:21 np0005634017 systemd-udevd[268464]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:21Z|00166|binding|INFO|Claiming lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for this chassis.
Feb 28 05:04:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:21Z|00167|binding|INFO|2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd: Claiming fa:16:3e:3a:a0:83 10.100.0.10
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:21 np0005634017 NetworkManager[49805]: <info>  [1772273061.8214] device (tap2d6bdb4d-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:21 np0005634017 NetworkManager[49805]: <info>  [1772273061.8229] device (tap2d6bdb4d-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.821 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:a0:83 10.100.0.10'], port_security=['fa:16:3e:3a:a0:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.823 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61#033[00m
Feb 28 05:04:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:21Z|00168|binding|INFO|Setting lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd ovn-installed in OVS
Feb 28 05:04:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:21Z|00169|binding|INFO|Setting lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd up in Southbound
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.835 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89ad5daa-984a-41f8-af88-e451602a85e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.835 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.837 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3833c839-952e-43f9-87ec-0d911af811f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdc0972-f51a-421e-9126-77b5cb7616d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.848 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b95e0303-301d-4725-a9f8-139735686b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 292 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.6 MiB/s wr, 314 op/s
Feb 28 05:04:21 np0005634017 systemd-machined[209480]: New machine qemu-33-instance-0000001d.
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4935da9e-5595-4672-9f65-6cdc821cff9e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.897 243456 DEBUG nova.network.neutron [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updated VIF entry in instance network info cache for port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.901 243456 DEBUG nova.network.neutron [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updating instance_info_cache with network_info: [{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.901 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb33171-ba62-4d95-aae4-f20e9ffc3225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 NetworkManager[49805]: <info>  [1772273061.9118] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.914 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[52f55688-a93d-4844-ac3c-8d34b8119207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 systemd-udevd[268519]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.926 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.927 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.927 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.927 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.928 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.928 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] No waiting events found dispatching network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.928 243456 WARNING nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received unexpected event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.929 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-deleted-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.929 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-deleted-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.929 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Processing event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.931 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.932 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.932 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.932 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.933 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] No waiting events found dispatching network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.933 243456 WARNING nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received unexpected event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.934 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.948 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273061.9481177, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.949 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.951 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.957 243456 INFO nova.virt.libvirt.driver [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance spawned successfully.#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.957 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[339f27ee-68fb-4f49-ab1a-b964d9c671f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.968 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[90504454-a43c-482e-8203-51a38208d360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.979 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.987 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.993 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.994 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.995 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.996 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.997 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:21 np0005634017 nova_compute[243452]: 2026-02-28 10:04:21.997 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:22 np0005634017 NetworkManager[49805]: <info>  [1772273062.0026] device (tap3a8395bc-d0): carrier: link connected
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.008 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.007 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4b173214-822d-4789-bb40-fa33ca6e0861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb927f5-585e-44f3-b6b1-22728a73cf17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456928, 'reachable_time': 33733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268555, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bc4ab5-9496-4e46-a4cc-1a2eb6bfede4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456928, 'tstamp': 456928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268556, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.049 243456 INFO nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 8.73 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.051 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.056 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e813fd3-64ba-46b9-a16b-e015559e130f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456928, 'reachable_time': 33733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268557, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.082 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[864a74f2-4a6c-4712-a9f2-381164eb001d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.115 243456 INFO nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 9.85 seconds to build instance.#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.131 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.135 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[117c70fe-8ddb-4fcc-a1f0-4a7f9a0e427c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.137 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.137 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.138 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:22 np0005634017 NetworkManager[49805]: <info>  [1772273062.1403] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:22 np0005634017 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.143 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:22Z|00170|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.145 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.155 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.155 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49116c3b-6dcd-4fff-b08f-2d9392bd1b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.156 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:04:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.157 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.270 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273062.2703502, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.271 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.293 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.296 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273062.2705956, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.296 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.315 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.318 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.338 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:22 np0005634017 podman[268631]: 2026-02-28 10:04:22.566845214 +0000 UTC m=+0.089054975 container create 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 05:04:22 np0005634017 podman[268631]: 2026-02-28 10:04:22.517370063 +0000 UTC m=+0.039579854 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:04:22 np0005634017 systemd[1]: Started libpod-conmon-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9.scope.
Feb 28 05:04:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.631 243456 DEBUG nova.compute.manager [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.632 243456 DEBUG nova.compute.manager [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing instance network info cache due to event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.632 243456 DEBUG oslo_concurrency.lockutils [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.633 243456 DEBUG oslo_concurrency.lockutils [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:22 np0005634017 nova_compute[243452]: 2026-02-28 10:04:22.633 243456 DEBUG nova.network.neutron [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f3bfc33757300e8d509d5547a7828aa30ca0c8b5e0dd5d94a856da1a162672/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:22 np0005634017 podman[268631]: 2026-02-28 10:04:22.66451952 +0000 UTC m=+0.186729341 container init 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:04:22 np0005634017 podman[268631]: 2026-02-28 10:04:22.669325255 +0000 UTC m=+0.191535036 container start 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true)
Feb 28 05:04:22 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : New worker (268652) forked
Feb 28 05:04:22 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : Loading success.
Feb 28 05:04:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:04:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.6 total, 600.0 interval#012Cumulative writes: 14K writes, 55K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 4217 syncs, 3.33 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6856 writes, 25K keys, 6856 commit groups, 1.0 writes per commit group, ingest: 26.31 MB, 0.04 MB/s#012Interval WAL: 6856 writes, 2771 syncs, 2.47 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:04:23 np0005634017 nova_compute[243452]: 2026-02-28 10:04:23.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 257 op/s
Feb 28 05:04:24 np0005634017 podman[268662]: 2026-02-28 10:04:24.126187072 +0000 UTC m=+0.067972102 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:04:24 np0005634017 podman[268661]: 2026-02-28 10:04:24.15991088 +0000 UTC m=+0.098803958 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.725 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273049.7249467, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.726 243456 INFO nova.compute.manager [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.755 243456 DEBUG nova.compute.manager [None req-a19e70e5-232c-4399-80d0-abc469478475 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.917 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.966 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.966 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.966 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Processing event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] No waiting events found dispatching network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 WARNING nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received unexpected event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.969 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.974 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273064.9746413, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.975 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.977 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.983 243456 INFO nova.virt.libvirt.driver [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance spawned successfully.#033[00m
Feb 28 05:04:24 np0005634017 nova_compute[243452]: 2026-02-28 10:04:24.983 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.008 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.014 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.018 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.019 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.019 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.020 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.020 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.021 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.056 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.094 243456 INFO nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 10.88 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.094 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.183 243456 INFO nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 12.52 seconds to build instance.#033[00m
Feb 28 05:04:25 np0005634017 nova_compute[243452]: 2026-02-28 10:04:25.200 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 310 op/s
Feb 28 05:04:26 np0005634017 nova_compute[243452]: 2026-02-28 10:04:26.077 243456 DEBUG nova.network.neutron [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updated VIF entry in instance network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:26 np0005634017 nova_compute[243452]: 2026-02-28 10:04:26.078 243456 DEBUG nova.network.neutron [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:26 np0005634017 nova_compute[243452]: 2026-02-28 10:04:26.099 243456 DEBUG oslo_concurrency.lockutils [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:27Z|00171|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:04:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:27Z|00172|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 05:04:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:27Z|00173|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.260 243456 INFO nova.compute.manager [None req-20483f1a-0a6b-49b9-b66f-5564bed44b79 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Pausing#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.261 243456 DEBUG nova.objects.instance [None req-20483f1a-0a6b-49b9-b66f-5564bed44b79 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'flavor' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.315 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273067.3156853, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.316 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.318 243456 DEBUG nova.compute.manager [None req-20483f1a-0a6b-49b9-b66f-5564bed44b79 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.396 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:27 np0005634017 nova_compute[243452]: 2026-02-28 10:04:27.400 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 330 op/s
Feb 28 05:04:28 np0005634017 nova_compute[243452]: 2026-02-28 10:04:28.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:04:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 10K writes, 43K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2947 syncs, 3.61 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4966 writes, 19K keys, 4966 commit groups, 1.0 writes per commit group, ingest: 20.73 MB, 0.03 MB/s#012Interval WAL: 4966 writes, 2044 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:04:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:04:29
Feb 28 05:04:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:04:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:04:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log']
Feb 28 05:04:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:04:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 302 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.2 MiB/s wr, 278 op/s
Feb 28 05:04:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:29Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 05:04:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:29Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 05:04:29 np0005634017 nova_compute[243452]: 2026-02-28 10:04:29.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.375 243456 DEBUG nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.423 243456 INFO nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] instance snapshotting#033[00m
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.423 243456 WARNING nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:04:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.829 243456 INFO nova.virt.libvirt.driver [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Beginning live snapshot process#033[00m
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.897 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273055.895925, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.898 243456 INFO nova.compute.manager [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.975 243456 DEBUG nova.compute.manager [None req-eadb4ed0-4faf-4d96-8ae9-87afac8aff3a - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:30 np0005634017 nova_compute[243452]: 2026-02-28 10:04:30.981 243456 DEBUG nova.virt.libvirt.imagebackend [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:04:31 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 05:04:31 np0005634017 nova_compute[243452]: 2026-02-28 10:04:31.264 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273056.2627988, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:31 np0005634017 nova_compute[243452]: 2026-02-28 10:04:31.265 243456 INFO nova.compute.manager [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:04:31 np0005634017 nova_compute[243452]: 2026-02-28 10:04:31.293 243456 DEBUG nova.compute.manager [None req-f4bc5179-8b06-4ca8-a484-e00586cc4c1b - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:31 np0005634017 nova_compute[243452]: 2026-02-28 10:04:31.388 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(c3430a45051b47d7a8bcacb9cd6239b0) on rbd image(9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:04:31 np0005634017 nova_compute[243452]: 2026-02-28 10:04:31.622 243456 DEBUG nova.compute.manager [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:31 np0005634017 nova_compute[243452]: 2026-02-28 10:04:31.671 243456 INFO nova.compute.manager [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] instance snapshotting#033[00m
Feb 28 05:04:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 307 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.8 MiB/s wr, 304 op/s
Feb 28 05:04:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 28 05:04:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 28 05:04:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 28 05:04:32 np0005634017 nova_compute[243452]: 2026-02-28 10:04:32.013 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk@c3430a45051b47d7a8bcacb9cd6239b0 to images/15834e2d-0d53-4b8a-8f0b-345fe662dcbf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:04:32 np0005634017 nova_compute[243452]: 2026-02-28 10:04:32.058 243456 INFO nova.virt.libvirt.driver [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Beginning live snapshot process#033[00m
Feb 28 05:04:32 np0005634017 nova_compute[243452]: 2026-02-28 10:04:32.137 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/15834e2d-0d53-4b8a-8f0b-345fe662dcbf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:04:32 np0005634017 nova_compute[243452]: 2026-02-28 10:04:32.265 243456 DEBUG nova.virt.libvirt.imagebackend [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:04:32 np0005634017 nova_compute[243452]: 2026-02-28 10:04:32.367 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(c3430a45051b47d7a8bcacb9cd6239b0) on rbd image(9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:04:32 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 28 05:04:32 np0005634017 nova_compute[243452]: 2026-02-28 10:04:32.451 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] creating snapshot(719713618c2a4a64a9cf133ff544640b) on rbd image(6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:04:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 28 05:04:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 28 05:04:32 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 28 05:04:33 np0005634017 nova_compute[243452]: 2026-02-28 10:04:33.022 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(15834e2d-0d53-4b8a-8f0b-345fe662dcbf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:04:33 np0005634017 nova_compute[243452]: 2026-02-28 10:04:33.095 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] cloning vms/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk@719713618c2a4a64a9cf133ff544640b to images/368a95dc-f2bf-43b2-80e0-562f69f20423 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:04:33 np0005634017 nova_compute[243452]: 2026-02-28 10:04:33.220 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] flattening images/368a95dc-f2bf-43b2-80e0-562f69f20423 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:04:33 np0005634017 nova_compute[243452]: 2026-02-28 10:04:33.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:33 np0005634017 nova_compute[243452]: 2026-02-28 10:04:33.543 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] removing snapshot(719713618c2a4a64a9cf133ff544640b) on rbd image(6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:04:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 339 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Feb 28 05:04:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 28 05:04:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 28 05:04:34 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 28 05:04:34 np0005634017 nova_compute[243452]: 2026-02-28 10:04:34.022 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] creating snapshot(snap) on rbd image(368a95dc-f2bf-43b2-80e0-562f69f20423) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:04:34 np0005634017 nova_compute[243452]: 2026-02-28 10:04:34.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 28 05:04:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 28 05:04:35 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 28 05:04:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:35Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:58:99 10.100.0.8
Feb 28 05:04:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:35Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:58:99 10.100.0.8
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 368a95dc-f2bf-43b2-80e0-562f69f20423 could not be found.
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 368a95dc-f2bf-43b2-80e0-562f69f20423
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver 
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver 
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 368a95dc-f2bf-43b2-80e0-562f69f20423 could not be found.
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver #033[00m
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.258 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] removing snapshot(snap) on rbd image(368a95dc-f2bf-43b2-80e0-562f69f20423) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.453 243456 INFO nova.virt.libvirt.driver [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Snapshot image upload complete#033[00m
Feb 28 05:04:35 np0005634017 nova_compute[243452]: 2026-02-28 10:04:35.453 243456 INFO nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 5.03 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:04:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 440 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 21 MiB/s wr, 441 op/s
Feb 28 05:04:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 28 05:04:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 28 05:04:36 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 28 05:04:36 np0005634017 nova_compute[243452]: 2026-02-28 10:04:36.323 243456 WARNING nova.compute.manager [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Image not found during snapshot: nova.exception.ImageNotFound: Image 368a95dc-f2bf-43b2-80e0-562f69f20423 could not be found.#033[00m
Feb 28 05:04:36 np0005634017 nova_compute[243452]: 2026-02-28 10:04:36.542 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 28 05:04:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 28 05:04:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.610 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.611 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.612 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.612 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.613 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.615 243456 INFO nova.compute.manager [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Terminating instance#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.617 243456 DEBUG nova.compute.manager [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:04:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:37 np0005634017 kernel: tap2d6bdb4d-0d (unregistering): left promiscuous mode
Feb 28 05:04:37 np0005634017 NetworkManager[49805]: <info>  [1772273077.6661] device (tap2d6bdb4d-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:37Z|00174|binding|INFO|Releasing lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd from this chassis (sb_readonly=0)
Feb 28 05:04:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:37Z|00175|binding|INFO|Setting lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd down in Southbound
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:37Z|00176|binding|INFO|Removing iface tap2d6bdb4d-0d ovn-installed in OVS
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.687 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.683 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:a0:83 10.100.0.10'], port_security=['fa:16:3e:3a:a0:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.685 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.688 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.690 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[142597f6-a84c-4121-b6ba-6b67ad8d9d19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.690 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore#033[00m
Feb 28 05:04:37 np0005634017 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb 28 05:04:37 np0005634017 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 2.807s CPU time.
Feb 28 05:04:37 np0005634017 systemd-machined[209480]: Machine qemu-33-instance-0000001d terminated.
Feb 28 05:04:37 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : haproxy version is 2.8.14-c23fe91
Feb 28 05:04:37 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : path to executable is /usr/sbin/haproxy
Feb 28 05:04:37 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [WARNING]  (268650) : Exiting Master process...
Feb 28 05:04:37 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [WARNING]  (268650) : Exiting Master process...
Feb 28 05:04:37 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [ALERT]    (268650) : Current worker (268652) exited with code 143 (Terminated)
Feb 28 05:04:37 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [WARNING]  (268650) : All workers exited. Exiting... (0)
Feb 28 05:04:37 np0005634017 systemd[1]: libpod-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9.scope: Deactivated successfully.
Feb 28 05:04:37 np0005634017 podman[269048]: 2026-02-28 10:04:37.838793706 +0000 UTC m=+0.047527867 container died 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.859 243456 INFO nova.virt.libvirt.driver [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance destroyed successfully.#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.859 243456 DEBUG nova.objects.instance [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 434 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 21 MiB/s wr, 573 op/s
Feb 28 05:04:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9-userdata-shm.mount: Deactivated successfully.
Feb 28 05:04:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b2f3bfc33757300e8d509d5547a7828aa30ca0c8b5e0dd5d94a856da1a162672-merged.mount: Deactivated successfully.
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.874 243456 DEBUG nova.virt.libvirt.vif [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1083426510',display_name='tempest-ImagesTestJSON-server-1083426510',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1083426510',id=29,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-dl71n395',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:35Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.875 243456 DEBUG nova.network.os_vif_util [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.876 243456 DEBUG nova.network.os_vif_util [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.876 243456 DEBUG os_vif [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.878 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d6bdb4d-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.888 243456 INFO os_vif [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d')#033[00m
Feb 28 05:04:37 np0005634017 podman[269048]: 2026-02-28 10:04:37.889685127 +0000 UTC m=+0.098419278 container cleanup 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.905 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.909 243456 INFO nova.compute.manager [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Terminating instance#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.910 243456 DEBUG nova.compute.manager [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:04:37 np0005634017 systemd[1]: libpod-conmon-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9.scope: Deactivated successfully.
Feb 28 05:04:37 np0005634017 kernel: tap3c6e6f23-d6 (unregistering): left promiscuous mode
Feb 28 05:04:37 np0005634017 podman[269094]: 2026-02-28 10:04:37.960730804 +0000 UTC m=+0.048553196 container remove 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 05:04:37 np0005634017 NetworkManager[49805]: <info>  [1772273077.9626] device (tap3c6e6f23-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.970 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb47c55-c025-4ade-9d18-bb9bc73a1622]: (4, ('Sat Feb 28 10:04:37 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9)\n8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9\nSat Feb 28 10:04:37 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9)\n8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b89bd1-5afb-469e-85a7-35a8dc7e8134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:37Z|00177|binding|INFO|Releasing lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d from this chassis (sb_readonly=0)
Feb 28 05:04:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:37Z|00178|binding|INFO|Setting lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d down in Southbound
Feb 28 05:04:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:37Z|00179|binding|INFO|Removing iface tap3c6e6f23-d6 ovn-installed in OVS
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.974 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.981 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:58:99 10.100.0.8'], port_security=['fa:16:3e:32:58:99 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6d74f9b9-edf7-4d81-b139-cb664b2ab68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3c6e6f23-d681-47b4-a8e5-474ce94e984d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:37 np0005634017 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 nova_compute[243452]: 2026-02-28 10:04:37.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7637c9-9be2-4519-b8f3-9d3be9802373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb 28 05:04:38 np0005634017 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 12.863s CPU time.
Feb 28 05:04:38 np0005634017 systemd-machined[209480]: Machine qemu-32-instance-0000001c terminated.
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.018 243456 DEBUG nova.compute.manager [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-unplugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG oslo_concurrency.lockutils [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG oslo_concurrency.lockutils [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG oslo_concurrency.lockutils [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG nova.compute.manager [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] No waiting events found dispatching network-vif-unplugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG nova.compute.manager [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-unplugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[03980b9d-0294-45e8-ad52-6873d23aed4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c90f60b6-f185-4871-bc8d-8494787177a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.044 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcbfb4d-9af9-420f-8dde-86500f999ddb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456918, 'reachable_time': 38734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269127, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.049 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.049 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ade54b44-1da6-4b3a-aaa3-5fb4c6b44d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.051 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6e6f23-d681-47b4-a8e5-474ce94e984d in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf unbound from our chassis#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.052 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.053 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f02bc157-300b-4420-8392-3ddfabf91b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.053 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace which is not needed anymore#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.150 243456 INFO nova.virt.libvirt.driver [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance destroyed successfully.#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.151 243456 DEBUG nova.objects.instance [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'resources' on Instance uuid 6d74f9b9-edf7-4d81-b139-cb664b2ab68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.170 243456 INFO nova.virt.libvirt.driver [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deleting instance files /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_del#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.171 243456 INFO nova.virt.libvirt.driver [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deletion of /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_del complete#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.176 243456 DEBUG nova.virt.libvirt.vif [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-279405115',display_name='tempest-ImagesOneServerNegativeTestJSON-server-279405115',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-279405115',id=28,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-4l4iiwh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:36Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=6d74f9b9-edf7-4d81-b139-cb664b2ab68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.176 243456 DEBUG nova.network.os_vif_util [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.177 243456 DEBUG nova.network.os_vif_util [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.177 243456 DEBUG os_vif [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.179 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c6e6f23-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.181 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.185 243456 INFO os_vif [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6')#033[00m
Feb 28 05:04:38 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : haproxy version is 2.8.14-c23fe91
Feb 28 05:04:38 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : path to executable is /usr/sbin/haproxy
Feb 28 05:04:38 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [WARNING]  (268389) : Exiting Master process...
Feb 28 05:04:38 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [WARNING]  (268389) : Exiting Master process...
Feb 28 05:04:38 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [ALERT]    (268389) : Current worker (268391) exited with code 143 (Terminated)
Feb 28 05:04:38 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [WARNING]  (268389) : All workers exited. Exiting... (0)
Feb 28 05:04:38 np0005634017 systemd[1]: libpod-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f.scope: Deactivated successfully.
Feb 28 05:04:38 np0005634017 podman[269155]: 2026-02-28 10:04:38.215525427 +0000 UTC m=+0.043803762 container died daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.245 243456 INFO nova.compute.manager [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.246 243456 DEBUG oslo.service.loopingcall [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.246 243456 DEBUG nova.compute.manager [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.247 243456 DEBUG nova.network.neutron [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:04:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f-userdata-shm.mount: Deactivated successfully.
Feb 28 05:04:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7cd5eb1ee2da379ebf68d6e75d71e29d19c55cfc0b19947902e32f6aaa6684a7-merged.mount: Deactivated successfully.
Feb 28 05:04:38 np0005634017 podman[269155]: 2026-02-28 10:04:38.257081206 +0000 UTC m=+0.085359551 container cleanup daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:04:38 np0005634017 systemd[1]: libpod-conmon-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f.scope: Deactivated successfully.
Feb 28 05:04:38 np0005634017 podman[269205]: 2026-02-28 10:04:38.326441546 +0000 UTC m=+0.047222659 container remove daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f19fd591-e433-4ed6-85cf-31c5c3992131]: (4, ('Sat Feb 28 10:04:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f)\ndaf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f\nSat Feb 28 10:04:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f)\ndaf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.334 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[316307ee-3ba8-432b-b087-6b730f97e0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.335 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 kernel: tap2eebd3ec-f0: left promiscuous mode
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[70de7f51-6769-477b-a17c-03057d6c76ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.363 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d68587b-fd1a-4d47-bb47-8888b17152e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10384ab0-e08a-4384-a6a4-5dc5353a9bc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.386 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b9fda02b-75cf-4883-b68d-dabf4dd96e7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456629, 'reachable_time': 39722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269221, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.389 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:04:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.390 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f67a4fd3-7159-4355-8956-cd50123361ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.437 243456 INFO nova.virt.libvirt.driver [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deleting instance files /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_del#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.438 243456 INFO nova.virt.libvirt.driver [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deletion of /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_del complete#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.515 243456 INFO nova.compute.manager [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.515 243456 DEBUG oslo.service.loopingcall [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.516 243456 DEBUG nova.compute.manager [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:04:38 np0005634017 nova_compute[243452]: 2026-02-28 10:04:38.516 243456 DEBUG nova.network.neutron [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:04:38 np0005634017 systemd[1]: run-netns-ovnmeta\x2d2eebd3ec\x2df7d4\x2d4881\x2d813e\x2d8d884cdcadaf.mount: Deactivated successfully.
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.393 243456 DEBUG nova.network.neutron [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.414 243456 INFO nova.compute.manager [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 1.17 seconds to deallocate network for instance.#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.463 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.464 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.540 243456 DEBUG nova.compute.manager [req-351c88b1-d05e-401f-b16a-b4c2f18e2a8f req-554643f3-228b-4035-9789-9a01987c9f3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-deleted-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.543 243456 DEBUG nova.network.neutron [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.557 243456 INFO nova.compute.manager [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 1.04 seconds to deallocate network for instance.#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.607 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.636 243456 DEBUG oslo_concurrency.processutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 362 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 15 MiB/s wr, 448 op/s
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.939 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.939 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:39 np0005634017 nova_compute[243452]: 2026-02-28 10:04:39.940 243456 DEBUG nova.objects.instance [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277498819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.213 243456 DEBUG oslo_concurrency.processutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.219 243456 DEBUG nova.compute.provider_tree [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.234 243456 DEBUG nova.scheduler.client.report [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.255 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.258 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.277 243456 INFO nova.scheduler.client.report [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.316 243456 DEBUG oslo_concurrency.processutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.346 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.379 243456 DEBUG nova.compute.manager [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.380 243456 DEBUG oslo_concurrency.lockutils [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.380 243456 DEBUG oslo_concurrency.lockutils [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.381 243456 DEBUG oslo_concurrency.lockutils [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.381 243456 DEBUG nova.compute.manager [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] No waiting events found dispatching network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.381 243456 WARNING nova.compute.manager [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received unexpected event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.551 243456 DEBUG nova.objects.instance [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.570 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.607 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.607 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.626 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015381725767343905 of space, bias 1.0, pg target 0.46145177302031715 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002975076423412019 of space, bias 1.0, pg target 0.8925229270236057 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.584590305021096e-07 of space, bias 4.0, pg target 0.0010301508366025315 quantized to 16 (current 16)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:04:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.707 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.743 243456 DEBUG nova.policy [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:04:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634590387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.934 243456 DEBUG oslo_concurrency.processutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.949 243456 DEBUG nova.compute.provider_tree [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:40 np0005634017 nova_compute[243452]: 2026-02-28 10:04:40.967 243456 DEBUG nova.scheduler.client.report [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.009 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.013 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.022 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.022 243456 INFO nova.compute.claims [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.035 243456 INFO nova.scheduler.client.report [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Deleted allocations for instance 6d74f9b9-edf7-4d81-b139-cb664b2ab68c#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.114 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.166 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.324 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully created port: 11bba824-bf60-4206-b70d-fd5035009fbf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:04:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256323313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.716 243456 DEBUG nova.compute.manager [req-aa64e975-95d4-4bfe-a8b5-e1a3017d6f4f req-6af31a91-0886-44d6-85db-f4bc94701733 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-vif-deleted-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.738 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.747 243456 DEBUG nova.compute.provider_tree [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.765 243456 DEBUG nova.scheduler.client.report [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.794 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.795 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.843 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.844 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:04:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 279 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.9 MiB/s wr, 356 op/s
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.864 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.885 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.970 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.971 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:04:41 np0005634017 nova_compute[243452]: 2026-02-28 10:04:41.971 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Creating image(s)#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.002 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.039 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.074 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.079 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.105 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully updated port: 11bba824-bf60-4206-b70d-fd5035009fbf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.110 243456 DEBUG nova.policy [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.134 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.136 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.136 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.164 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.165 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.166 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.166 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.192 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.196 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.422 243456 WARNING nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.432 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.494 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.577 243456 DEBUG nova.objects.instance [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.590 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.590 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Ensure instance console log exists: /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.591 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.591 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:42 np0005634017 nova_compute[243452]: 2026-02-28 10:04:42.591 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 28 05:04:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 28 05:04:42 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 28 05:04:43 np0005634017 nova_compute[243452]: 2026-02-28 10:04:43.086 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Successfully created port: 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:04:43 np0005634017 nova_compute[243452]: 2026-02-28 10:04:43.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:43 np0005634017 nova_compute[243452]: 2026-02-28 10:04:43.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:43 np0005634017 nova_compute[243452]: 2026-02-28 10:04:43.836 243456 DEBUG nova.compute.manager [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-changed-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:43 np0005634017 nova_compute[243452]: 2026-02-28 10:04:43.837 243456 DEBUG nova.compute.manager [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing instance network info cache due to event network-changed-11bba824-bf60-4206-b70d-fd5035009fbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:43 np0005634017 nova_compute[243452]: 2026-02-28 10:04:43.837 243456 DEBUG oslo_concurrency.lockutils [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 244 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 267 op/s
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.141 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Successfully updated port: 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.157 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.158 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.158 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.801 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.841 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.872 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.873 243456 DEBUG oslo_concurrency.lockutils [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.873 243456 DEBUG nova.network.neutron [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing network info cache for port 11bba824-bf60-4206-b70d-fd5035009fbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.876 243456 DEBUG nova.virt.libvirt.vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.877 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.877 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.878 243456 DEBUG os_vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.878 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.879 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.881 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11bba824-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.882 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11bba824-bf, col_values=(('external_ids', {'iface-id': '11bba824-bf60-4206-b70d-fd5035009fbf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:32:9d', 'vm-uuid': 'f1026535-7729-43d0-8027-dd71ef14dfbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:44 np0005634017 NetworkManager[49805]: <info>  [1772273084.8845] manager: (tap11bba824-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.891 243456 INFO os_vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf')#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.892 243456 DEBUG nova.virt.libvirt.vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.892 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.893 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.895 243456 DEBUG nova.virt.libvirt.guest [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 05:04:44 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:d6:32:9d"/>
Feb 28 05:04:44 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:04:44 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:44 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:04:44 np0005634017 nova_compute[243452]:  <target dev="tap11bba824-bf"/>
Feb 28 05:04:44 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:04:44 np0005634017 nova_compute[243452]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 28 05:04:44 np0005634017 kernel: tap11bba824-bf: entered promiscuous mode
Feb 28 05:04:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:44Z|00180|binding|INFO|Claiming lport 11bba824-bf60-4206-b70d-fd5035009fbf for this chassis.
Feb 28 05:04:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:44Z|00181|binding|INFO|11bba824-bf60-4206-b70d-fd5035009fbf: Claiming fa:16:3e:d6:32:9d 10.100.0.14
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 NetworkManager[49805]: <info>  [1772273084.9095] manager: (tap11bba824-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Feb 28 05:04:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.921 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:32:9d 10.100.0.14'], port_security=['fa:16:3e:d6:32:9d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=11bba824-bf60-4206-b70d-fd5035009fbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:44Z|00182|binding|INFO|Setting lport 11bba824-bf60-4206-b70d-fd5035009fbf ovn-installed in OVS
Feb 28 05:04:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:44Z|00183|binding|INFO|Setting lport 11bba824-bf60-4206-b70d-fd5035009fbf up in Southbound
Feb 28 05:04:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.923 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 11bba824-bf60-4206-b70d-fd5035009fbf in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.926 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:44 np0005634017 systemd-udevd[269461]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.947 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48fd7a0a-7658-4227-b06c-19cd2aada1a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:44 np0005634017 NetworkManager[49805]: <info>  [1772273084.9545] device (tap11bba824-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:44 np0005634017 NetworkManager[49805]: <info>  [1772273084.9561] device (tap11bba824-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:7a:bd:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:44 np0005634017 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:d6:32:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.984 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b48b13db-96d5-43c6-ae29-0451677bfb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.988 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53790528-4225-4e2c-81d6-6c5b1670f4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.015 243456 DEBUG nova.virt.libvirt.guest [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:45</nova:creationTime>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:45 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    <nova:port uuid="11bba824-bf60-4206-b70d-fd5035009fbf">
Feb 28 05:04:45 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:45 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:45 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:45 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.015 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[aefeda72-d80a-47bf-bbfb-081345ca73f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.029 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78d93d01-13a6-4b66-a6ef-e989d754c185]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269468, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb221bd0-2389-4203-86b5-1ca59f4cb0e3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455970, 'tstamp': 455970}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269469, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455972, 'tstamp': 455972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269469, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.044 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.048 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.054 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.267 243456 DEBUG nova.compute.manager [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.267 243456 DEBUG oslo_concurrency.lockutils [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 DEBUG oslo_concurrency.lockutils [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 DEBUG oslo_concurrency.lockutils [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 DEBUG nova.compute.manager [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 WARNING nova.compute.manager [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.#033[00m
Feb 28 05:04:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:04:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2723943377' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:04:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:04:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2723943377' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:04:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 279 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 273 op/s
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.954 243456 DEBUG nova.compute.manager [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-changed-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.954 243456 DEBUG nova.compute.manager [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Refreshing instance network info cache due to event network-changed-29b5f82a-cfc3-4c87-aac9-8419af0bcf75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:45 np0005634017 nova_compute[243452]: 2026-02-28 10:04:45.955 243456 DEBUG oslo_concurrency.lockutils [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:46Z|00184|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.365 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updating instance_info_cache with network_info: [{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.398 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.399 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance network_info: |[{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.400 243456 DEBUG oslo_concurrency.lockutils [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.401 243456 DEBUG nova.network.neutron [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Refreshing network info cache for port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.405 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start _get_guest_xml network_info=[{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.413 243456 WARNING nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.428 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.429 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.434 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.435 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.436 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.436 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.437 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.438 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.438 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.439 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.439 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.439 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.440 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.440 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.441 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.441 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.446 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.637 243456 DEBUG nova.network.neutron [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updated VIF entry in instance network info cache for port 11bba824-bf60-4206-b70d-fd5035009fbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.638 243456 DEBUG nova.network.neutron [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:46 np0005634017 nova_compute[243452]: 2026-02-28 10:04:46.655 243456 DEBUG oslo_concurrency.lockutils [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569154026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.079 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.101 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.107 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:47Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:32:9d 10.100.0.14
Feb 28 05:04:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:47Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:32:9d 10.100.0.14
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208741869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 28 05:04:47 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.668 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.671 243456 DEBUG nova.virt.libvirt.vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-52310168',display_name='tempest-ImagesTestJSON-server-52310168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-52310168',id=30,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-1llqlg3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagLi
st,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:41Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=ff5bf118-ea06-44c0-81f0-0a229162e1d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.672 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.673 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.675 243456 DEBUG nova.objects.instance [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.696 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <uuid>ff5bf118-ea06-44c0-81f0-0a229162e1d8</uuid>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <name>instance-0000001e</name>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:name>tempest-ImagesTestJSON-server-52310168</nova:name>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:46</nova:creationTime>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <nova:port uuid="29b5f82a-cfc3-4c87-aac9-8419af0bcf75">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <entry name="serial">ff5bf118-ea06-44c0-81f0-0a229162e1d8</entry>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <entry name="uuid">ff5bf118-ea06-44c0-81f0-0a229162e1d8</entry>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:29:2d:d1"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <target dev="tap29b5f82a-cf"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/console.log" append="off"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:47 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:47 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:47 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:47 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.698 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Preparing to wait for external event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.698 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.699 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.699 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.701 243456 DEBUG nova.virt.libvirt.vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-52310168',display_name='tempest-ImagesTestJSON-server-52310168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-52310168',id=30,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-1llqlg3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:41Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=ff5bf118-ea06-44c0-81f0-0a229162e1d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.701 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.702 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.703 243456 DEBUG os_vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.705 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.705 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.711 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29b5f82a-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.712 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29b5f82a-cf, col_values=(('external_ids', {'iface-id': '29b5f82a-cfc3-4c87-aac9-8419af0bcf75', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:2d:d1', 'vm-uuid': 'ff5bf118-ea06-44c0-81f0-0a229162e1d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.714 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:47 np0005634017 NetworkManager[49805]: <info>  [1772273087.7158] manager: (tap29b5f82a-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.723 243456 INFO os_vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf')#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.787 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.787 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.788 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:29:2d:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.789 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Using config drive#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.814 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.839 243456 DEBUG nova.compute.manager [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.839 243456 DEBUG oslo_concurrency.lockutils [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 DEBUG oslo_concurrency.lockutils [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 DEBUG oslo_concurrency.lockutils [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 DEBUG nova.compute.manager [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:47 np0005634017 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 WARNING nova.compute.manager [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.#033[00m
Feb 28 05:04:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 2.7 MiB/s wr, 122 op/s
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.101 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.103 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.121 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Creating config drive at /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.127 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphu52c29o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.255 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphu52c29o" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.290 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.295 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.456 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.456 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deleting local config drive /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.5091] manager: (tap29b5f82a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Feb 28 05:04:48 np0005634017 kernel: tap29b5f82a-cf: entered promiscuous mode
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00185|binding|INFO|Claiming lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for this chassis.
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00186|binding|INFO|29b5f82a-cfc3-4c87-aac9-8419af0bcf75: Claiming fa:16:3e:29:2d:d1 10.100.0.6
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00187|binding|INFO|Setting lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 up in Southbound
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00188|binding|INFO|Setting lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 ovn-installed in OVS
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.521 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:2d:d1 10.100.0.6'], port_security=['fa:16:3e:29:2d:d1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ff5bf118-ea06-44c0-81f0-0a229162e1d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=29b5f82a-cfc3-4c87-aac9-8419af0bcf75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.524 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.527 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.540 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fe706c-ad86-4e8b-8a7f-29296c86ef57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.542 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.544 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.544 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e262ebb-2a7f-423e-bed8-35f5260bcda6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.545 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd41417e-6e1a-4ef2-8d6c-a35460d4733a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 systemd-udevd[269607]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:48 np0005634017 systemd-machined[209480]: New machine qemu-34-instance-0000001e.
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.557 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3547eb77-3235-42bf-8f71-49d4b2f353bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.5672] device (tap29b5f82a-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.5681] device (tap29b5f82a-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.583 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce1b97a-525c-4d64-92a8-3eb785750cb0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.603 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-11bba824-bf60-4206-b70d-fd5035009fbf" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.604 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-11bba824-bf60-4206-b70d-fd5035009fbf" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.614 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a07ae5-1517-4071-9b88-525f78a8f2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 systemd-udevd[269611]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.6207] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.619 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9286996-dd94-43d0-a321-f552d7d99d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.632 243456 DEBUG nova.objects.instance [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.654 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6487ac31-3cd0-4c31-bb4b-e29670acc306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.658 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b46fd1-c028-4d95-8b14-1db6e08a0605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.661 243456 DEBUG nova.virt.libvirt.vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.662 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.663 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.669 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.671 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.676 243456 DEBUG nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Attempting to detach device tap11bba824-bf from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.676 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:d6:32:9d"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <target dev="tap11bba824-bf"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.682 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.6840] device (tap3a8395bc-d0): carrier: link connected
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.686 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <name>instance-0000001b</name>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:45</nova:creationTime>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:port uuid="11bba824-bf60-4206-b70d-fd5035009fbf">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='tap76d5199d-5d'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:d6:32:9d'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='tap11bba824-bf'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='net1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.689 243456 INFO nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap11bba824-bf from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the persistent domain config.#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.690 243456 DEBUG nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] (1/8): Attempting to detach device tap11bba824-bf with device alias net1 from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.690 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:d6:32:9d"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <target dev="tap11bba824-bf"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.693 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2c0f87-8de5-47bc-b551-9334f2c6ff29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aeef6c6e-7bfc-497b-808b-209727ca9b4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459597, 'reachable_time': 37873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269639, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.737 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[739966ac-01ca-492a-8b89-4202792b423e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459597, 'tstamp': 459597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269642, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4219d219-8f99-4b98-8732-8be5096d6a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459597, 'reachable_time': 37873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269661, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.784 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6ff305-8e7f-499a-83cc-6f7ed2a00a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 kernel: tap11bba824-bf (unregistering): left promiscuous mode
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.8175] device (tap11bba824-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00189|binding|INFO|Releasing lport 11bba824-bf60-4206-b70d-fd5035009fbf from this chassis (sb_readonly=0)
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00190|binding|INFO|Setting lport 11bba824-bf60-4206-b70d-fd5035009fbf down in Southbound
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00191|binding|INFO|Removing iface tap11bba824-bf ovn-installed in OVS
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.829 243456 DEBUG nova.network.neutron [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updated VIF entry in instance network info cache for port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.832 243456 DEBUG nova.network.neutron [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updating instance_info_cache with network_info: [{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.839 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772273088.831463, f1026535-7729-43d0-8027-dd71ef14dfbf => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.841 243456 DEBUG nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Start waiting for the detach event from libvirt for device tap11bba824-bf with device alias net1 for instance f1026535-7729-43d0-8027-dd71ef14dfbf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.841 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.849 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <name>instance-0000001b</name>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:45</nova:creationTime>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:port uuid="11bba824-bf60-4206-b70d-fd5035009fbf">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.850 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e864e97a-4056-48c7-b91c-ed24806ec829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target dev='tap76d5199d-5d'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:48 np0005634017 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.851 243456 INFO nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap11bba824-bf from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the live domain config.#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.852 243456 DEBUG nova.virt.libvirt.vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.852 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.853 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.853 243456 DEBUG os_vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:48 np0005634017 NetworkManager[49805]: <info>  [1772273088.8540] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.856 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11bba824-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.856 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.859 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.865 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:32:9d 10.100.0.14'], port_security=['fa:16:3e:d6:32:9d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=11bba824-bf60-4206-b70d-fd5035009fbf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.867 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:48Z|00192|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.870 243456 INFO os_vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf')#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.871 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:48</nova:creationTime>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:48 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:48 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:48 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.875 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.877 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.878 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1299c3-a4af-4244-a655-86089c6780e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.879 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:04:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.879 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:04:48 np0005634017 nova_compute[243452]: 2026-02-28 10:04:48.884 243456 DEBUG oslo_concurrency.lockutils [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.027 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273089.026949, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.027 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.053 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.057 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273089.0289953, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.058 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.075 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.081 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.110 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.111 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.125 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.186 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.186 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.193 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.194 243456 INFO nova.compute.claims [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:04:49 np0005634017 podman[269787]: 2026-02-28 10:04:49.26386457 +0000 UTC m=+0.057974701 container create 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:49 np0005634017 podman[269787]: 2026-02-28 10:04:49.227293762 +0000 UTC m=+0.021403913 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.337 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:49 np0005634017 systemd[1]: Started libpod-conmon-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope.
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.373 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.375 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.375 243456 DEBUG nova.network.neutron [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5e83e1d44ebf105d11322506dc55c483ad72f2af990afcb1e865b656b3d9ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:49 np0005634017 podman[269787]: 2026-02-28 10:04:49.415948616 +0000 UTC m=+0.210058767 container init 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:49 np0005634017 podman[269787]: 2026-02-28 10:04:49.423744115 +0000 UTC m=+0.217854246 container start 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:04:49 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : New worker (269864) forked
Feb 28 05:04:49 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : Loading success.
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.492 243456 DEBUG nova.compute.manager [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-deleted-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.493 243456 INFO nova.compute.manager [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Neutron deleted interface 11bba824-bf60-4206-b70d-fd5035009fbf; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.493 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 11bba824-bf60-4206-b70d-fd5035009fbf in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.495 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.494 243456 DEBUG nova.network.neutron [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdd5cc7-9952-452d-84a9-3fd859129b67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.535 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb814ffb-d419-46d1-9aa0-7d099311170c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.539 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e03c4b23-a336-4cf8-8020-320f3abae949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.539 243456 DEBUG nova.objects.instance [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[44842415-95f1-42fc-8002-d9b0a7c70000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.580 243456 DEBUG nova.objects.instance [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.589 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[52edb19c-9ed0-44a8-b7d5-8d6ec9c2a52a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269903, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02da52d6-dcfe-448c-9201-def4c37b3252]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455970, 'tstamp': 455970}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269904, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455972, 'tstamp': 455972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269904, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.617 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.617 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.618 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.618 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.626 243456 DEBUG nova.virt.libvirt.vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.627 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.629 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.634 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.639 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <name>instance-0000001b</name>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:48</nova:creationTime>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target dev='tap76d5199d-5d'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.653 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.659 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> not found in domain: <domain type='kvm' id='31'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <name>instance-0000001b</name>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:48</nova:creationTime>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target dev='tap76d5199d-5d'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.659 243456 WARNING nova.virt.libvirt.driver [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Detaching interface fa:16:3e:d6:32:9d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap11bba824-bf' not found.#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.660 243456 DEBUG nova.virt.libvirt.vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.661 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.662 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.662 243456 DEBUG os_vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.671 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11bba824-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.671 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.674 243456 INFO os_vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf')#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.675 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:04:49</nova:creationTime>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 05:04:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:04:49 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:04:49 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:04:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:04:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:04:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274799466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.7 MiB/s wr, 95 op/s
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.874 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.884 243456 DEBUG nova.compute.provider_tree [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.906 243456 DEBUG nova.scheduler.client.report [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.941 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.942 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.943 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.943 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.943 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Processing event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.944 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.944 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.945 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.945 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.946 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] No waiting events found dispatching network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.946 243456 WARNING nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received unexpected event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.946 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-unplugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.947 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.947 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.948 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.948 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-unplugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.949 243456 WARNING nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-unplugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.949 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.950 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.950 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.950 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.951 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.951 243456 WARNING nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.953 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.954 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.955 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.961 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.962 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273089.961608, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.962 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.970 243456 INFO nova.virt.libvirt.driver [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance spawned successfully.#033[00m
Feb 28 05:04:49 np0005634017 nova_compute[243452]: 2026-02-28 10:04:49.970 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.007 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.015 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.016 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.017 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.018 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.019 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.020 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.036 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.039 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.050 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.088 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.108 243456 INFO nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 8.14 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.108 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.110 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.180 243456 INFO nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 9.50 seconds to build instance.#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.202 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.212 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.214 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.214 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Creating image(s)#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.239 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.271 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.303 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.316 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.326483653 +0000 UTC m=+0.088814778 container create 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.355 243456 DEBUG nova.policy [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2737540b5d9a437cac0ea91b25f0c5d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:04:50 np0005634017 systemd[1]: Started libpod-conmon-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope.
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.295043429 +0000 UTC m=+0.057374604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.387 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.388 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.388 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.389 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.426 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.428020918 +0000 UTC m=+0.190352073 container init 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.432 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c1e4150a-4695-4464-a271-378970447180_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.437573876 +0000 UTC m=+0.199905001 container start 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.442391952 +0000 UTC m=+0.204723087 container attach 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:04:50 np0005634017 systemd[1]: libpod-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope: Deactivated successfully.
Feb 28 05:04:50 np0005634017 elastic_dubinsky[270061]: 167 167
Feb 28 05:04:50 np0005634017 conmon[270061]: conmon 78a72c1326e041e2e638 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope/container/memory.events
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.446355643 +0000 UTC m=+0.208686768 container died 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:04:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b9e5831d3b494939ceab09553ff4a200fc9c8459853a6c090185ef1097a57dab-merged.mount: Deactivated successfully.
Feb 28 05:04:50 np0005634017 podman[269991]: 2026-02-28 10:04:50.493474698 +0000 UTC m=+0.255805823 container remove 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:04:50 np0005634017 systemd[1]: libpod-conmon-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope: Deactivated successfully.
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.657 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.658 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.658 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.658 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.659 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.660 243456 INFO nova.compute.manager [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Terminating instance#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.661 243456 DEBUG nova.compute.manager [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.684 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c1e4150a-4695-4464-a271-378970447180_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:50 np0005634017 podman[270123]: 2026-02-28 10:04:50.68813867 +0000 UTC m=+0.057551109 container create 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:04:50 np0005634017 kernel: tap76d5199d-5d (unregistering): left promiscuous mode
Feb 28 05:04:50 np0005634017 NetworkManager[49805]: <info>  [1772273090.7176] device (tap76d5199d-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:04:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:50Z|00193|binding|INFO|Releasing lport 76d5199d-5d1e-4198-8780-c2537175a2be from this chassis (sb_readonly=0)
Feb 28 05:04:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:50Z|00194|binding|INFO|Setting lport 76d5199d-5d1e-4198-8780-c2537175a2be down in Southbound
Feb 28 05:04:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:50Z|00195|binding|INFO|Removing iface tap76d5199d-5d ovn-installed in OVS
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:50 np0005634017 systemd[1]: Started libpod-conmon-0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe.scope.
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.739 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis#033[00m
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.742 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.743 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5c4a4d-80ca-4883-8583-b0cca6c4c46f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.743 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace which is not needed anymore#033[00m
Feb 28 05:04:50 np0005634017 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb 28 05:04:50 np0005634017 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 13.120s CPU time.
Feb 28 05:04:50 np0005634017 systemd-machined[209480]: Machine qemu-31-instance-0000001b terminated.
Feb 28 05:04:50 np0005634017 podman[270123]: 2026-02-28 10:04:50.667388707 +0000 UTC m=+0.036801176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:50 np0005634017 podman[270123]: 2026-02-28 10:04:50.789963473 +0000 UTC m=+0.159375952 container init 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:04:50 np0005634017 podman[270123]: 2026-02-28 10:04:50.802709811 +0000 UTC m=+0.172122270 container start 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.806 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] resizing rbd image c1e4150a-4695-4464-a271-378970447180_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:04:50 np0005634017 podman[270123]: 2026-02-28 10:04:50.809146822 +0000 UTC m=+0.178559291 container attach 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:04:50 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : haproxy version is 2.8.14-c23fe91
Feb 28 05:04:50 np0005634017 kernel: tap76d5199d-5d: entered promiscuous mode
Feb 28 05:04:50 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : path to executable is /usr/sbin/haproxy
Feb 28 05:04:50 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [WARNING]  (267519) : Exiting Master process...
Feb 28 05:04:50 np0005634017 NetworkManager[49805]: <info>  [1772273090.8826] manager: (tap76d5199d-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Feb 28 05:04:50 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [ALERT]    (267519) : Current worker (267521) exited with code 143 (Terminated)
Feb 28 05:04:50 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [WARNING]  (267519) : All workers exited. Exiting... (0)
Feb 28 05:04:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:50Z|00196|binding|INFO|Claiming lport 76d5199d-5d1e-4198-8780-c2537175a2be for this chassis.
Feb 28 05:04:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:50Z|00197|binding|INFO|76d5199d-5d1e-4198-8780-c2537175a2be: Claiming fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 05:04:50 np0005634017 kernel: tap76d5199d-5d (unregistering): left promiscuous mode
Feb 28 05:04:50 np0005634017 systemd[1]: libpod-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543.scope: Deactivated successfully.
Feb 28 05:04:50 np0005634017 podman[270221]: 2026-02-28 10:04:50.893503274 +0000 UTC m=+0.044566604 container died 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.893 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:50Z|00198|binding|INFO|Releasing lport 76d5199d-5d1e-4198-8780-c2537175a2be from this chassis (sb_readonly=0)
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.921 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.926 243456 DEBUG nova.objects.instance [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'migration_context' on Instance uuid c1e4150a-4695-4464-a271-378970447180 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543-userdata-shm.mount: Deactivated successfully.
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.929 243456 INFO nova.network.neutron [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Port 11bba824-bf60-4206-b70d-fd5035009fbf from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.930 243456 DEBUG nova.network.neutron [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.932 243456 INFO nova.virt.libvirt.driver [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance destroyed successfully.#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.932 243456 DEBUG nova.objects.instance [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'resources' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e56ba081050cdf68ac2830d8fd1d550ce7d433e93c5d67b3c7be66a947248126-merged.mount: Deactivated successfully.
Feb 28 05:04:50 np0005634017 podman[270221]: 2026-02-28 10:04:50.942850631 +0000 UTC m=+0.093913941 container cleanup 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:04:50 np0005634017 systemd[1]: libpod-conmon-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543.scope: Deactivated successfully.
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.951 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.952 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Ensure instance console log exists: /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.953 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.953 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.953 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.954 243456 DEBUG nova.virt.libvirt.vif [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.954 243456 DEBUG nova.network.os_vif_util [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.955 243456 DEBUG nova.network.os_vif_util [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.956 243456 DEBUG os_vif [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.959 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.959 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76d5199d-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.961 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.967 243456 INFO os_vif [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d')#033[00m
Feb 28 05:04:50 np0005634017 nova_compute[243452]: 2026-02-28 10:04:50.994 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-11bba824-bf60-4206-b70d-fd5035009fbf" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:51 np0005634017 podman[270270]: 2026-02-28 10:04:51.00897569 +0000 UTC m=+0.047682941 container remove 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.018 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3db3a3e5-7652-43c8-be87-8d899205bab8]: (4, ('Sat Feb 28 10:04:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543)\n0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543\nSat Feb 28 10:04:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543)\n0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24bce2a5-73e5-4ba7-acba-a46bc9108a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.021 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:51 np0005634017 kernel: tap60dcefc3-90: left promiscuous mode
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.035 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[300a3f7c-9464-446d-9ef1-4022673b63c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[602887ac-bf66-4892-b318-71fab7e7306a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.051 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[846f6606-e8de-41c8-b5f1-e73089b3b49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e167f73-2a4d-482d-80bc-f74cde829d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455954, 'reachable_time': 24674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270304, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.063 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.063 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ad898bbc-98e0-4bfd-a0eb-f0f40c23c164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.064 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.065 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.066 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7a7ea4-3a95-4f24-805d-09e1d368dcc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.066 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.068 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:04:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.068 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c818b63c-cca1-4069-8a90-0fe9194e90c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.220 243456 INFO nova.virt.libvirt.driver [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deleting instance files /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf_del#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.227 243456 INFO nova.virt.libvirt.driver [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deletion of /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf_del complete#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.279 243456 INFO nova.compute.manager [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.279 243456 DEBUG oslo.service.loopingcall [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.280 243456 DEBUG nova.compute.manager [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.280 243456 DEBUG nova.network.neutron [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]: [
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:    {
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "available": false,
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "being_replaced": false,
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "ceph_device_lvm": false,
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "lsm_data": {},
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "lvs": [],
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "path": "/dev/sr0",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "rejected_reasons": [
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "Has a FileSystem",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "Insufficient space (<5GB)"
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        ],
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        "sys_api": {
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "actuators": null,
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "device_nodes": [
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:                "sr0"
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            ],
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "devname": "sr0",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "human_readable_size": "482.00 KB",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "id_bus": "ata",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "model": "QEMU DVD-ROM",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "nr_requests": "2",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "parent": "/dev/sr0",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "partitions": {},
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "path": "/dev/sr0",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "removable": "1",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "rev": "2.5+",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "ro": "0",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "rotational": "1",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "sas_address": "",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "sas_device_handle": "",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "scheduler_mode": "mq-deadline",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "sectors": 0,
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "sectorsize": "2048",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "size": 493568.0,
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "support_discard": "2048",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "type": "disk",
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:            "vendor": "QEMU"
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:        }
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]:    }
Feb 28 05:04:51 np0005634017 ecstatic_dubinsky[270164]: ]
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.318 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Successfully created port: a1a4b6a4-de37-4bca-9501-0465f18ded83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:04:51 np0005634017 systemd[1]: run-netns-ovnmeta\x2d60dcefc3\x2d95e1\x2d437e\x2d9c00\x2de51656c39b8f.mount: Deactivated successfully.
Feb 28 05:04:51 np0005634017 systemd[1]: libpod-0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe.scope: Deactivated successfully.
Feb 28 05:04:51 np0005634017 podman[270123]: 2026-02-28 10:04:51.343003641 +0000 UTC m=+0.712416080 container died 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:04:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7-merged.mount: Deactivated successfully.
Feb 28 05:04:51 np0005634017 podman[270123]: 2026-02-28 10:04:51.376918194 +0000 UTC m=+0.746330633 container remove 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:04:51 np0005634017 systemd[1]: libpod-conmon-0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe.scope: Deactivated successfully.
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:04:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.629 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-unplugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.630 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.631 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.632 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.632 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-unplugged-76d5199d-5d1e-4198-8780-c2537175a2be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.633 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-unplugged-76d5199d-5d1e-4198-8780-c2537175a2be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.633 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.634 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.634 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.635 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.635 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:04:51 np0005634017 nova_compute[243452]: 2026-02-28 10:04:51.636 243456 WARNING nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.825769863 +0000 UTC m=+0.062334123 container create 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:04:51 np0005634017 systemd[1]: Started libpod-conmon-86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c.scope.
Feb 28 05:04:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.798762384 +0000 UTC m=+0.035326654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.92276797 +0000 UTC m=+0.159332210 container init 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.932222296 +0000 UTC m=+0.168786516 container start 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:04:51 np0005634017 blissful_bhaskara[271139]: 167 167
Feb 28 05:04:51 np0005634017 systemd[1]: libpod-86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c.scope: Deactivated successfully.
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.941126566 +0000 UTC m=+0.177690786 container attach 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.941676522 +0000 UTC m=+0.178240742 container died 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:04:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-22250463dd2144ac0fce8a6e06dffe632c237ed59f04b1dd99212d226bc33158-merged.mount: Deactivated successfully.
Feb 28 05:04:51 np0005634017 podman[271121]: 2026-02-28 10:04:51.977394316 +0000 UTC m=+0.213958536 container remove 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:04:51 np0005634017 systemd[1]: libpod-conmon-86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c.scope: Deactivated successfully.
Feb 28 05:04:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:52.105 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.148059804 +0000 UTC m=+0.049859373 container create 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.172 243456 DEBUG nova.network.neutron [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.186 243456 INFO nova.compute.manager [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 0.91 seconds to deallocate network for instance.#033[00m
Feb 28 05:04:52 np0005634017 systemd[1]: Started libpod-conmon-07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae.scope.
Feb 28 05:04:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.232 243456 DEBUG oslo_concurrency.lockutils [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.233 243456 DEBUG oslo_concurrency.lockutils [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.233 243456 DEBUG nova.compute.manager [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.128620437 +0000 UTC m=+0.030420026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.241 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.242 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.246419409 +0000 UTC m=+0.148219038 container init 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.248 243456 DEBUG nova.compute.manager [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.250 243456 DEBUG nova.objects.instance [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'flavor' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.258036236 +0000 UTC m=+0.159835855 container start 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.2674451 +0000 UTC m=+0.169244749 container attach 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.294 243456 DEBUG nova.virt.libvirt.driver [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.412 243456 DEBUG oslo_concurrency.processutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.529 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Successfully updated port: a1a4b6a4-de37-4bca-9501-0465f18ded83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.547 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.547 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquired lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.548 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.683 243456 DEBUG nova.compute.manager [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-changed-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.683 243456 DEBUG nova.compute.manager [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Refreshing instance network info cache due to event network-changed-a1a4b6a4-de37-4bca-9501-0465f18ded83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.684 243456 DEBUG oslo_concurrency.lockutils [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.774 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:04:52 np0005634017 mystifying_wing[271180]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:04:52 np0005634017 mystifying_wing[271180]: --> All data devices are unavailable
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.857 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273077.8557258, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.857 243456 INFO nova.compute.manager [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:04:52 np0005634017 nova_compute[243452]: 2026-02-28 10:04:52.877 243456 DEBUG nova.compute.manager [None req-9f33c7ee-6a21-423a-aca3-6349b5adea7c - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:52 np0005634017 systemd[1]: libpod-07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae.scope: Deactivated successfully.
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.909316955 +0000 UTC m=+0.811116534 container died 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:04:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f-merged.mount: Deactivated successfully.
Feb 28 05:04:52 np0005634017 podman[271163]: 2026-02-28 10:04:52.958651492 +0000 UTC m=+0.860451071 container remove 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:04:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331512955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:04:52 np0005634017 systemd[1]: libpod-conmon-07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae.scope: Deactivated successfully.
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.017 243456 DEBUG oslo_concurrency.processutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.025 243456 DEBUG nova.compute.provider_tree [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.045 243456 DEBUG nova.scheduler.client.report [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.066 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.094 243456 INFO nova.scheduler.client.report [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Deleted allocations for instance f1026535-7729-43d0-8027-dd71ef14dfbf#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.149 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273078.1483636, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.150 243456 INFO nova.compute.manager [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.176 243456 DEBUG nova.compute.manager [None req-91b8e2a4-96c9-406e-b318-cb4ee13320d2 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.186 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.477037465 +0000 UTC m=+0.058266079 container create c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.513 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:53 np0005634017 systemd[1]: Started libpod-conmon-c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25.scope.
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.451384074 +0000 UTC m=+0.032612718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.574347241 +0000 UTC m=+0.155575875 container init c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.587408328 +0000 UTC m=+0.168636952 container start c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.591108942 +0000 UTC m=+0.172337556 container attach c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:04:53 np0005634017 stoic_sammet[271314]: 167 167
Feb 28 05:04:53 np0005634017 systemd[1]: libpod-c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25.scope: Deactivated successfully.
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.595353651 +0000 UTC m=+0.176582325 container died c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:04:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-27634a0e397d83fb6c67a14161b87801e95d121eedb9702c11b3b00dd6dee5f8-merged.mount: Deactivated successfully.
Feb 28 05:04:53 np0005634017 podman[271297]: 2026-02-28 10:04:53.634948544 +0000 UTC m=+0.216177168 container remove c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:04:53 np0005634017 systemd[1]: libpod-conmon-c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25.scope: Deactivated successfully.
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.723 243456 DEBUG nova.compute.manager [req-a59f6e2b-1708-4112-beb5-8b31211d26e1 req-ab51b66e-fe72-4320-8497-34892dbf35ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-deleted-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.815 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Updating instance_info_cache with network_info: [{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:53 np0005634017 podman[271338]: 2026-02-28 10:04:53.836543192 +0000 UTC m=+0.084143407 container create 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.855 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Releasing lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.856 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Instance network_info: |[{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.856 243456 DEBUG oslo_concurrency.lockutils [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.856 243456 DEBUG nova.network.neutron [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Refreshing network info cache for port a1a4b6a4-de37-4bca-9501-0465f18ded83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.859 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start _get_guest_xml network_info=[{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.865 243456 WARNING nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:04:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 267 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.871 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.872 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.877 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.878 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.878 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.879 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.879 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.879 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:04:53 np0005634017 podman[271338]: 2026-02-28 10:04:53.790369794 +0000 UTC m=+0.037969999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:53 np0005634017 systemd[1]: Started libpod-conmon-02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7.scope.
Feb 28 05:04:53 np0005634017 nova_compute[243452]: 2026-02-28 10:04:53.884 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:53 np0005634017 podman[271338]: 2026-02-28 10:04:53.936925254 +0000 UTC m=+0.184525469 container init 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:04:53 np0005634017 podman[271338]: 2026-02-28 10:04:53.94282101 +0000 UTC m=+0.190421245 container start 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:04:53 np0005634017 podman[271338]: 2026-02-28 10:04:53.946879004 +0000 UTC m=+0.194479199 container attach 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]: {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:    "0": [
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:        {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "devices": [
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "/dev/loop3"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            ],
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_name": "ceph_lv0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_size": "21470642176",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "name": "ceph_lv0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "tags": {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cluster_name": "ceph",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.crush_device_class": "",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.encrypted": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.objectstore": "bluestore",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osd_id": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.type": "block",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.vdo": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.with_tpm": "0"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            },
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "type": "block",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "vg_name": "ceph_vg0"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:        }
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:    ],
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:    "1": [
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:        {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "devices": [
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "/dev/loop4"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            ],
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_name": "ceph_lv1",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_size": "21470642176",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "name": "ceph_lv1",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "tags": {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cluster_name": "ceph",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.crush_device_class": "",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.encrypted": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.objectstore": "bluestore",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osd_id": "1",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.type": "block",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.vdo": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.with_tpm": "0"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            },
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "type": "block",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "vg_name": "ceph_vg1"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:        }
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:    ],
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:    "2": [
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:        {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "devices": [
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "/dev/loop5"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            ],
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_name": "ceph_lv2",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_size": "21470642176",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "name": "ceph_lv2",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "tags": {
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.cluster_name": "ceph",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.crush_device_class": "",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.encrypted": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.objectstore": "bluestore",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osd_id": "2",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.type": "block",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.vdo": "0",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:                "ceph.with_tpm": "0"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            },
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "type": "block",
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:            "vg_name": "ceph_vg2"
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:        }
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]:    ]
Feb 28 05:04:54 np0005634017 dazzling_cori[271354]: }
Feb 28 05:04:54 np0005634017 systemd[1]: libpod-02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7.scope: Deactivated successfully.
Feb 28 05:04:54 np0005634017 podman[271338]: 2026-02-28 10:04:54.260522601 +0000 UTC m=+0.508122796 container died 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:04:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab-merged.mount: Deactivated successfully.
Feb 28 05:04:54 np0005634017 podman[271338]: 2026-02-28 10:04:54.335438608 +0000 UTC m=+0.583038803 container remove 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:04:54 np0005634017 systemd[1]: libpod-conmon-02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7.scope: Deactivated successfully.
Feb 28 05:04:54 np0005634017 podman[271390]: 2026-02-28 10:04:54.377640524 +0000 UTC m=+0.077273454 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Feb 28 05:04:54 np0005634017 podman[271384]: 2026-02-28 10:04:54.434207974 +0000 UTC m=+0.132984240 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 05:04:54 np0005634017 nova_compute[243452]: 2026-02-28 10:04:54.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906612923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:54 np0005634017 nova_compute[243452]: 2026-02-28 10:04:54.498 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:54 np0005634017 nova_compute[243452]: 2026-02-28 10:04:54.518 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:54 np0005634017 nova_compute[243452]: 2026-02-28 10:04:54.522 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.7459887 +0000 UTC m=+0.044364969 container create 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:04:54 np0005634017 systemd[1]: Started libpod-conmon-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope.
Feb 28 05:04:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.728609391 +0000 UTC m=+0.026985670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.83350851 +0000 UTC m=+0.131884789 container init 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.838909782 +0000 UTC m=+0.137286041 container start 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.842815852 +0000 UTC m=+0.141192141 container attach 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:04:54 np0005634017 elastic_villani[271551]: 167 167
Feb 28 05:04:54 np0005634017 systemd[1]: libpod-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope: Deactivated successfully.
Feb 28 05:04:54 np0005634017 conmon[271551]: conmon 7c8f7d71ae8f3effbd1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope/container/memory.events
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.846698211 +0000 UTC m=+0.145074510 container died 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:04:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-405fc703e34ecec1a6e7925f13c2126f5a0012cd8ae7150942216adc78e429d9-merged.mount: Deactivated successfully.
Feb 28 05:04:54 np0005634017 podman[271533]: 2026-02-28 10:04:54.886657664 +0000 UTC m=+0.185033923 container remove 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:04:54 np0005634017 systemd[1]: libpod-conmon-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope: Deactivated successfully.
Feb 28 05:04:55 np0005634017 podman[271575]: 2026-02-28 10:04:55.027193815 +0000 UTC m=+0.046831677 container create 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:04:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:04:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150397621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.058 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.063 243456 DEBUG nova.virt.libvirt.vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1596427493',id=31,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-a3v88xmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner
_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:50Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=c1e4150a-4695-4464-a271-378970447180,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.063 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.064 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.066 243456 DEBUG nova.objects.instance [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1e4150a-4695-4464-a271-378970447180 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:04:55 np0005634017 systemd[1]: Started libpod-conmon-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope.
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.081 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <uuid>c1e4150a-4695-4464-a271-378970447180</uuid>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <name>instance-0000001f</name>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1596427493</nova:name>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:04:53</nova:creationTime>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:user uuid="2737540b5d9a437cac0ea91b25f0c5d8">tempest-ImagesOneServerNegativeTestJSON-356581433-project-member</nova:user>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:project uuid="da458887a8634c5a8b9a38fcbcc44e07">tempest-ImagesOneServerNegativeTestJSON-356581433</nova:project>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <nova:port uuid="a1a4b6a4-de37-4bca-9501-0465f18ded83">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <entry name="serial">c1e4150a-4695-4464-a271-378970447180</entry>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <entry name="uuid">c1e4150a-4695-4464-a271-378970447180</entry>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c1e4150a-4695-4464-a271-378970447180_disk">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c1e4150a-4695-4464-a271-378970447180_disk.config">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:22:fe:a4"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <target dev="tapa1a4b6a4-de"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/console.log" append="off"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:04:55 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:04:55 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:04:55 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:04:55 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.082 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Preparing to wait for external event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.083 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.083 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.083 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.084 243456 DEBUG nova.virt.libvirt.vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1596427493',id=31,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-a3v88xmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581
433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:50Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=c1e4150a-4695-4464-a271-378970447180,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.084 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.085 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.086 243456 DEBUG os_vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.093 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a4b6a4-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.094 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1a4b6a4-de, col_values=(('external_ids', {'iface-id': 'a1a4b6a4-de37-4bca-9501-0465f18ded83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:fe:a4', 'vm-uuid': 'c1e4150a-4695-4464-a271-378970447180'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:55 np0005634017 NetworkManager[49805]: <info>  [1772273095.0967] manager: (tapa1a4b6a4-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:55 np0005634017 podman[271575]: 2026-02-28 10:04:55.004582419 +0000 UTC m=+0.024220331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.108 243456 INFO os_vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de')#033[00m
Feb 28 05:04:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:55 np0005634017 podman[271575]: 2026-02-28 10:04:55.144669218 +0000 UTC m=+0.164307060 container init 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:04:55 np0005634017 podman[271575]: 2026-02-28 10:04:55.153095015 +0000 UTC m=+0.172732837 container start 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:04:55 np0005634017 podman[271575]: 2026-02-28 10:04:55.157175779 +0000 UTC m=+0.176813651 container attach 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.165 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.165 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.165 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No VIF found with MAC fa:16:3e:22:fe:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.166 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Using config drive#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.184 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.553 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Creating config drive at /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.556 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl9lbgsdr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.688 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl9lbgsdr" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.728 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.741 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config c1e4150a-4695-4464-a271-378970447180_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.827 243456 DEBUG nova.network.neutron [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Updated VIF entry in instance network info cache for port a1a4b6a4-de37-4bca-9501-0465f18ded83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.828 243456 DEBUG nova.network.neutron [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Updating instance_info_cache with network_info: [{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:04:55 np0005634017 nova_compute[243452]: 2026-02-28 10:04:55.844 243456 DEBUG oslo_concurrency.lockutils [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:04:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Feb 28 05:04:56 np0005634017 lvm[271728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:04:56 np0005634017 lvm[271728]: VG ceph_vg0 finished
Feb 28 05:04:56 np0005634017 lvm[271729]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:04:56 np0005634017 lvm[271729]: VG ceph_vg1 finished
Feb 28 05:04:56 np0005634017 lvm[271731]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:04:56 np0005634017 lvm[271731]: VG ceph_vg2 finished
Feb 28 05:04:56 np0005634017 heuristic_mclean[271592]: {}
Feb 28 05:04:56 np0005634017 systemd[1]: libpod-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope: Deactivated successfully.
Feb 28 05:04:56 np0005634017 systemd[1]: libpod-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope: Consumed 1.359s CPU time.
Feb 28 05:04:56 np0005634017 podman[271575]: 2026-02-28 10:04:56.192050393 +0000 UTC m=+1.211688255 container died 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:04:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc-merged.mount: Deactivated successfully.
Feb 28 05:04:56 np0005634017 nova_compute[243452]: 2026-02-28 10:04:56.593 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config c1e4150a-4695-4464-a271-378970447180_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:04:56 np0005634017 nova_compute[243452]: 2026-02-28 10:04:56.596 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Deleting local config drive /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config because it was imported into RBD.#033[00m
Feb 28 05:04:56 np0005634017 kernel: tapa1a4b6a4-de: entered promiscuous mode
Feb 28 05:04:56 np0005634017 NetworkManager[49805]: <info>  [1772273096.6611] manager: (tapa1a4b6a4-de): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Feb 28 05:04:56 np0005634017 nova_compute[243452]: 2026-02-28 10:04:56.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:56 np0005634017 systemd-udevd[271726]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:04:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:56Z|00199|binding|INFO|Claiming lport a1a4b6a4-de37-4bca-9501-0465f18ded83 for this chassis.
Feb 28 05:04:56 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:04:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:56Z|00200|binding|INFO|a1a4b6a4-de37-4bca-9501-0465f18ded83: Claiming fa:16:3e:22:fe:a4 10.100.0.9
Feb 28 05:04:56 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:04:56 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:04:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:56Z|00201|binding|INFO|Setting lport a1a4b6a4-de37-4bca-9501-0465f18ded83 up in Southbound
Feb 28 05:04:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:56Z|00202|binding|INFO|Setting lport a1a4b6a4-de37-4bca-9501-0465f18ded83 ovn-installed in OVS
Feb 28 05:04:56 np0005634017 nova_compute[243452]: 2026-02-28 10:04:56.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:56 np0005634017 nova_compute[243452]: 2026-02-28 10:04:56.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.678 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:fe:a4 10.100.0.9'], port_security=['fa:16:3e:22:fe:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c1e4150a-4695-4464-a271-378970447180', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a1a4b6a4-de37-4bca-9501-0465f18ded83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.680 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a1a4b6a4-de37-4bca-9501-0465f18ded83 in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf bound to our chassis#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.683 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf#033[00m
Feb 28 05:04:56 np0005634017 NetworkManager[49805]: <info>  [1772273096.6891] device (tapa1a4b6a4-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:04:56 np0005634017 NetworkManager[49805]: <info>  [1772273096.6898] device (tapa1a4b6a4-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:04:56 np0005634017 nova_compute[243452]: 2026-02-28 10:04:56.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b50f204d-a90d-4638-8638-423a860f17f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.701 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2eebd3ec-f1 in ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.704 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2eebd3ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.704 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[217852b4-6dfe-482c-a81c-2d80221f0084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.706 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29bbac88-1fac-48b1-8e33-012a2b5139ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 systemd-machined[209480]: New machine qemu-35-instance-0000001f.
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.721 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac253c8-67c8-4876-988d-09e8c8505214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.737 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac790ad-c5b9-4e03-b9d9-3491ae878f13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 podman[271575]: 2026-02-28 10:04:56.755599656 +0000 UTC m=+1.775237498 container remove 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.767 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce618ba-84ff-4e23-8d79-88e209add576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.775 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c8977db4-5969-4b55-b17c-8c105c583e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 NetworkManager[49805]: <info>  [1772273096.7769] manager: (tap2eebd3ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.800 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c798974f-debc-4276-8dce-40730c95d58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.805 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2ba09a-12d0-4180-82a4-a964d899f48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:04:56 np0005634017 NetworkManager[49805]: <info>  [1772273096.8308] device (tap2eebd3ec-f0): carrier: link connected
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a6230a-21ba-4785-8734-49ba3830cd62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:04:56 np0005634017 systemd[1]: libpod-conmon-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope: Deactivated successfully.
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.868 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f1a906-69e7-4539-9f23-6d501320ed22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460411, 'reachable_time': 36983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271793, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.892 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53f2230e-8734-4075-a2b2-c5ed95e6b550]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:321'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460411, 'tstamp': 460411}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271794, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4608eb99-5b36-4139-b730-c40611758ace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460411, 'reachable_time': 36983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271798, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.939 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f301a8d8-38fa-4c80-832c-23c9d7c5d775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.008 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[553012b7-245b-468b-b725-47416f82ebd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.010 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.011 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.011 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eebd3ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:57 np0005634017 NetworkManager[49805]: <info>  [1772273097.0145] manager: (tap2eebd3ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Feb 28 05:04:57 np0005634017 kernel: tap2eebd3ec-f0: entered promiscuous mode
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.017 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2eebd3ec-f0, col_values=(('external_ids', {'iface-id': '6b6cc396-2618-4c5f-8702-0c03569c876b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:04:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:04:57Z|00203|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.027 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b49605ee-3dce-43e2-9914-a1d674b4a6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.032 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.033 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'env', 'PROCESS_TAG=haproxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.133 243456 DEBUG nova.compute.manager [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.134 243456 DEBUG oslo_concurrency.lockutils [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.135 243456 DEBUG oslo_concurrency.lockutils [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.135 243456 DEBUG oslo_concurrency.lockutils [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.136 243456 DEBUG nova.compute.manager [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Processing event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.213 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.214 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273097.2143345, c1e4150a-4695-4464-a271-378970447180 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.215 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] VM Started (Lifecycle Event)#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.221 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.225 243456 INFO nova.virt.libvirt.driver [-] [instance: c1e4150a-4695-4464-a271-378970447180] Instance spawned successfully.#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.226 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.257 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.262 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.263 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.263 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.264 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.264 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.265 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.274 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.315 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273097.215522, c1e4150a-4695-4464-a271-378970447180 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.315 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.338 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.343 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273097.2174728, c1e4150a-4695-4464-a271-378970447180 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.344 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.378 243456 INFO nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Took 7.17 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.380 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.431 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.436 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:04:57 np0005634017 podman[271892]: 2026-02-28 10:04:57.461783759 +0000 UTC m=+0.053388332 container create 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.477 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.489 243456 INFO nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Took 8.32 seconds to build instance.#033[00m
Feb 28 05:04:57 np0005634017 systemd[1]: Started libpod-conmon-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3.scope.
Feb 28 05:04:57 np0005634017 nova_compute[243452]: 2026-02-28 10:04:57.511 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:04:57 np0005634017 podman[271892]: 2026-02-28 10:04:57.438105923 +0000 UTC m=+0.029710516 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:04:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9528566aabd5692d826fa682f7bcba8949e6d95980a2790e31c1867903228e4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:04:57 np0005634017 podman[271892]: 2026-02-28 10:04:57.548523307 +0000 UTC m=+0.140127910 container init 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:04:57 np0005634017 podman[271892]: 2026-02-28 10:04:57.553890898 +0000 UTC m=+0.145495471 container start 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:04:57 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : New worker (271913) forked
Feb 28 05:04:57 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : Loading success.
Feb 28 05:04:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.844 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:04:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.845 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:04:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:57 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:04:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Feb 28 05:04:58 np0005634017 nova_compute[243452]: 2026-02-28 10:04:58.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:58 np0005634017 nova_compute[243452]: 2026-02-28 10:04:58.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:04:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:05:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:05:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:05:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:05:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:05:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.857 243456 DEBUG nova.compute.manager [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.857 243456 DEBUG oslo_concurrency.lockutils [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.858 243456 DEBUG oslo_concurrency.lockutils [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.858 243456 DEBUG oslo_concurrency.lockutils [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.858 243456 DEBUG nova.compute.manager [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] No waiting events found dispatching network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:00 np0005634017 nova_compute[243452]: 2026-02-28 10:05:00.859 243456 WARNING nova.compute.manager [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received unexpected event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.968296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273100968411, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2148, "num_deletes": 254, "total_data_size": 3268720, "memory_usage": 3321352, "flush_reason": "Manual Compaction"}
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273100989884, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3187585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21040, "largest_seqno": 23187, "table_properties": {"data_size": 3178049, "index_size": 5902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20465, "raw_average_key_size": 20, "raw_value_size": 3158582, "raw_average_value_size": 3158, "num_data_blocks": 264, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272912, "oldest_key_time": 1772272912, "file_creation_time": 1772273100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 21633 microseconds, and 6421 cpu microseconds.
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.989948) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3187585 bytes OK
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.989978) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.991882) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.991900) EVENT_LOG_v1 {"time_micros": 1772273100991895, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.991928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3259622, prev total WAL file size 3259622, number of live WAL files 2.
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.992670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3112KB)], [50(7313KB)]
Feb 28 05:05:00 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273100992753, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10676684, "oldest_snapshot_seqno": -1}
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4897 keys, 8908449 bytes, temperature: kUnknown
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273101042717, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8908449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8874028, "index_size": 21063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 120117, "raw_average_key_size": 24, "raw_value_size": 8784217, "raw_average_value_size": 1793, "num_data_blocks": 880, "num_entries": 4897, "num_filter_entries": 4897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.043137) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8908449 bytes
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.045846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.1 rd, 178.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5419, records dropped: 522 output_compression: NoCompression
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.045863) EVENT_LOG_v1 {"time_micros": 1772273101045855, "job": 26, "event": "compaction_finished", "compaction_time_micros": 49861, "compaction_time_cpu_micros": 14024, "output_level": 6, "num_output_files": 1, "total_output_size": 8908449, "num_input_records": 5419, "num_output_records": 4897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273101046312, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273101046911, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.992512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:01 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Feb 28 05:05:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:02Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:2d:d1 10.100.0.6
Feb 28 05:05:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:02Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:2d:d1 10.100.0.6
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.224 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.226 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.243 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.330 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.331 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.338 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.339 243456 INFO nova.compute.claims [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.412 243456 DEBUG nova.virt.libvirt.driver [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.474 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.727 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.728 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.744 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:05:02 np0005634017 nova_compute[243452]: 2026-02-28 10:05:02.813 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757291868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.022 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.027 243456 DEBUG nova.compute.provider_tree [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.047 243456 DEBUG nova.scheduler.client.report [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.101 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.102 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.106 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.119 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.120 243456 INFO nova.compute.claims [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.192 243456 DEBUG nova.compute.manager [None req-cf633291-f10a-4a92-ac8a-73aea78e7a5c 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.204 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.204 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.236 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.241 243456 INFO nova.compute.manager [None req-cf633291-f10a-4a92-ac8a-73aea78e7a5c 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] instance snapshotting
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.252 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.351 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.352 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.353 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Creating image(s)
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.375 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.398 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.418 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.421 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.445 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.473 243456 DEBUG nova.policy [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.478 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.478 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.479 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.479 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.504 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.510 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.538 243456 WARNING nova.compute.manager [None req-cf633291-f10a-4a92-ac8a-73aea78e7a5c 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Image not found during snapshot: nova.exception.ImageNotFound: Image fb0dbe89-ad99-412d-9a08-eeddbc9f2714 could not be found.
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.754 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.815 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] resizing rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:05:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 255 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 197 op/s
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.895 243456 DEBUG nova.objects.instance [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'migration_context' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.910 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.910 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Ensure instance console log exists: /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.911 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.911 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:05:03 np0005634017 nova_compute[243452]: 2026-02-28 10:05:03.912 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:05:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660759736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.028 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.033 243456 DEBUG nova.compute.provider_tree [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.047 243456 DEBUG nova.scheduler.client.report [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.074 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.075 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.132 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.133 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.167 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.201 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.291 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.293 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.293 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Creating image(s)
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.320 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.342 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.366 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.372 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.429 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.430 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.430 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.431 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.452 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.456 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.819 243456 DEBUG nova.policy [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6bbc470612fa48afb6c2a143ba966473', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '135c387aaa024e42b1c3c19237591cf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:05:04 np0005634017 nova_compute[243452]: 2026-02-28 10:05:04.857 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully created port: 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.055 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.056 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.056 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.056 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.057 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.058 243456 INFO nova.compute.manager [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Terminating instance
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.059 243456 DEBUG nova.compute.manager [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:05:05 np0005634017 kernel: tap29b5f82a-cf (unregistering): left promiscuous mode
Feb 28 05:05:05 np0005634017 NetworkManager[49805]: <info>  [1772273105.1411] device (tap29b5f82a-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:05:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:05Z|00204|binding|INFO|Releasing lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 from this chassis (sb_readonly=0)
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:05:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:05Z|00205|binding|INFO|Setting lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 down in Southbound
Feb 28 05:05:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:05Z|00206|binding|INFO|Removing iface tap29b5f82a-cf ovn-installed in OVS
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.162 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.163 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:2d:d1 10.100.0.6'], port_security=['fa:16:3e:29:2d:d1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ff5bf118-ea06-44c0-81f0-0a229162e1d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=29b5f82a-cfc3-4c87-aac9-8419af0bcf75) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.165 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.166 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.167 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3e3664-e5ad-4fee-804a-5ff15ae3d7d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.168 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore
Feb 28 05:05:05 np0005634017 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Feb 28 05:05:05 np0005634017 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 12.334s CPU time.
Feb 28 05:05:05 np0005634017 systemd-machined[209480]: Machine qemu-34-instance-0000001e terminated.
Feb 28 05:05:05 np0005634017 kernel: tapa1a4b6a4-de (unregistering): left promiscuous mode
Feb 28 05:05:05 np0005634017 NetworkManager[49805]: <info>  [1772273105.2512] device (tapa1a4b6a4-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.250 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:05Z|00207|binding|INFO|Releasing lport a1a4b6a4-de37-4bca-9501-0465f18ded83 from this chassis (sb_readonly=0)
Feb 28 05:05:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:05Z|00208|binding|INFO|Setting lport a1a4b6a4-de37-4bca-9501-0465f18ded83 down in Southbound
Feb 28 05:05:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:05Z|00209|binding|INFO|Removing iface tapa1a4b6a4-de ovn-installed in OVS
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.279 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:fe:a4 10.100.0.9'], port_security=['fa:16:3e:22:fe:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c1e4150a-4695-4464-a271-378970447180', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a1a4b6a4-de37-4bca-9501-0465f18ded83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:05 np0005634017 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Feb 28 05:05:05 np0005634017 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 8.364s CPU time.
Feb 28 05:05:05 np0005634017 systemd-machined[209480]: Machine qemu-35-instance-0000001f terminated.
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : haproxy version is 2.8.14-c23fe91
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : path to executable is /usr/sbin/haproxy
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [ALERT]    (269844) : Current worker (269864) exited with code 143 (Terminated)
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [WARNING]  (269844) : All workers exited. Exiting... (0)
Feb 28 05:05:05 np0005634017 systemd[1]: libpod-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope: Deactivated successfully.
Feb 28 05:05:05 np0005634017 conmon[269826]: conmon 2f3240798ba0bc96b9c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope/container/memory.events
Feb 28 05:05:05 np0005634017 podman[272255]: 2026-02-28 10:05:05.320915364 +0000 UTC m=+0.054219126 container died 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 05:05:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf-userdata-shm.mount: Deactivated successfully.
Feb 28 05:05:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9a5e83e1d44ebf105d11322506dc55c483ad72f2af990afcb1e865b656b3d9ad-merged.mount: Deactivated successfully.
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.360 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] resizing rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:05:05 np0005634017 NetworkManager[49805]: <info>  [1772273105.3651] manager: (tap29b5f82a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Feb 28 05:05:05 np0005634017 podman[272255]: 2026-02-28 10:05:05.367206555 +0000 UTC m=+0.100510337 container cleanup 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:05:05 np0005634017 systemd[1]: libpod-conmon-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope: Deactivated successfully.
Feb 28 05:05:05 np0005634017 podman[272347]: 2026-02-28 10:05:05.438343145 +0000 UTC m=+0.046617122 container remove 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.443 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[968dbc30-2ce9-4a93-9001-c9ff37649163]: (4, ('Sat Feb 28 10:05:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf)\n2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf\nSat Feb 28 10:05:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf)\n2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15329195-ee70-4ad0-a817-05a796e83c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:05 np0005634017 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.457 243456 DEBUG nova.objects.instance [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lazy-loading 'migration_context' on Instance uuid bacbbaae-ce23-42df-b5cc-0fc49b2f3741 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.462 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0900cb61-26f6-40da-a536-3799ef1fe1de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.475 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.475 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Ensure instance console log exists: /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.476 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.476 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.476 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa57ef7-6e2c-4886-902e-34dabe02acfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec78c53d-4f56-4654-8902-cd85bb0bf33d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.497 243456 INFO nova.virt.libvirt.driver [-] [instance: c1e4150a-4695-4464-a271-378970447180] Instance destroyed successfully.#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.498 243456 DEBUG nova.objects.instance [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'resources' on Instance uuid c1e4150a-4695-4464-a271-378970447180 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.497 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a22ef408-7f4e-424e-a307-08f611f2d21b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459589, 'reachable_time': 35470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272404, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.502 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.502 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e0910d-0470-42fd-9a57-04ef96661a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.505 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a1a4b6a4-de37-4bca-9501-0465f18ded83 in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf unbound from our chassis#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.506 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.507 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f38d711-4ac6-4081-b933-2224659d85c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.508 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace which is not needed anymore#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.512 243456 DEBUG nova.virt.libvirt.vif [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1596427493',id=31,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-a3v88xmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:03Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=c1e4150a-4695-4464-a271-378970447180,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.512 243456 DEBUG nova.network.os_vif_util [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.514 243456 DEBUG nova.network.os_vif_util [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.514 243456 DEBUG os_vif [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.516 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a4b6a4-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.524 243456 INFO os_vif [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de')#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.550 243456 INFO nova.virt.libvirt.driver [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.556 243456 INFO nova.virt.libvirt.driver [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance destroyed successfully.#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.557 243456 DEBUG nova.objects.instance [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'numa_topology' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.577 243456 DEBUG nova.compute.manager [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : haproxy version is 2.8.14-c23fe91
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : path to executable is /usr/sbin/haproxy
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [WARNING]  (271911) : Exiting Master process...
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [WARNING]  (271911) : Exiting Master process...
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [ALERT]    (271911) : Current worker (271913) exited with code 143 (Terminated)
Feb 28 05:05:05 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [WARNING]  (271911) : All workers exited. Exiting... (0)
Feb 28 05:05:05 np0005634017 systemd[1]: libpod-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3.scope: Deactivated successfully.
Feb 28 05:05:05 np0005634017 podman[272440]: 2026-02-28 10:05:05.621183775 +0000 UTC m=+0.036702493 container died 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:05:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3-userdata-shm.mount: Deactivated successfully.
Feb 28 05:05:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9528566aabd5692d826fa682f7bcba8949e6d95980a2790e31c1867903228e4e-merged.mount: Deactivated successfully.
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.650 243456 DEBUG oslo_concurrency.lockutils [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:05 np0005634017 podman[272440]: 2026-02-28 10:05:05.656089597 +0000 UTC m=+0.071608315 container cleanup 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:05:05 np0005634017 systemd[1]: libpod-conmon-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3.scope: Deactivated successfully.
Feb 28 05:05:05 np0005634017 podman[272472]: 2026-02-28 10:05:05.705961309 +0000 UTC m=+0.034416459 container remove 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a998e8a-82ce-46eb-8f8f-336fb1dd2f5c]: (4, ('Sat Feb 28 10:05:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3)\n2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3\nSat Feb 28 10:05:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3)\n2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7d1973-9eee-436b-b27e-5dcd02e36c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.713 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:05 np0005634017 kernel: tap2eebd3ec-f0: left promiscuous mode
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.726 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b65433f-cd4d-4921-b796-bbbfe566be28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73020b36-532d-4b55-bddb-6a41630d42bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.741 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2597621e-527c-41d5-ab92-be85f82ff420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[685d34c2-d667-4173-abd8-e353ce46e33f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460404, 'reachable_time': 19205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272489, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.754 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:05:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.755 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a44b00d9-9c9d-4e49-a9e3-08cebcd434f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.760 243456 INFO nova.virt.libvirt.driver [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Deleting instance files /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180_del#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.761 243456 INFO nova.virt.libvirt.driver [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Deletion of /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180_del complete#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.826 243456 INFO nova.compute.manager [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.826 243456 DEBUG oslo.service.loopingcall [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.827 243456 DEBUG nova.compute.manager [-] [instance: c1e4150a-4695-4464-a271-378970447180] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.827 243456 DEBUG nova.network.neutron [-] [instance: c1e4150a-4695-4464-a271-378970447180] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:05:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 326 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.9 MiB/s wr, 242 op/s
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.918 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273090.9009323, f1026535-7729-43d0-8027-dd71ef14dfbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.919 243456 INFO nova.compute.manager [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.927 243456 DEBUG nova.compute.manager [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-unplugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.927 243456 DEBUG oslo_concurrency.lockutils [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.928 243456 DEBUG oslo_concurrency.lockutils [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.928 243456 DEBUG oslo_concurrency.lockutils [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.928 243456 DEBUG nova.compute.manager [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] No waiting events found dispatching network-vif-unplugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.929 243456 WARNING nova.compute.manager [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received unexpected event network-vif-unplugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:05:05 np0005634017 nova_compute[243452]: 2026-02-28 10:05:05.941 243456 DEBUG nova.compute.manager [None req-0afaa1c7-0992-4530-a3f9-fdcf0565e413 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.310 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Successfully created port: f3950212-15c6-462b-a9f9-1f218cfd3914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:06 np0005634017 systemd[1]: run-netns-ovnmeta\x2d2eebd3ec\x2df7d4\x2d4881\x2d813e\x2d8d884cdcadaf.mount: Deactivated successfully.
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.546 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.565 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.566 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.566 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.614 243456 DEBUG nova.network.neutron [-] [instance: c1e4150a-4695-4464-a271-378970447180] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.636 243456 INFO nova.compute.manager [-] [instance: c1e4150a-4695-4464-a271-378970447180] Took 0.81 seconds to deallocate network for instance.#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.675 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.676 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.684 243456 DEBUG nova.compute.manager [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.684 243456 DEBUG nova.compute.manager [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.685 243456 DEBUG oslo_concurrency.lockutils [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.762 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:05:06 np0005634017 nova_compute[243452]: 2026-02-28 10:05:06.778 243456 DEBUG oslo_concurrency.processutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.072 243456 DEBUG nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.111 243456 INFO nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] instance snapshotting#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.111 243456 WARNING nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.313 243456 INFO nova.virt.libvirt.driver [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Beginning cold snapshot process#033[00m
Feb 28 05:05:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254729468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.351 243456 DEBUG oslo_concurrency.processutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.364 243456 DEBUG nova.compute.provider_tree [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.401 243456 DEBUG nova.scheduler.client.report [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.451 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.458 243456 DEBUG nova.virt.libvirt.imagebackend [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.475 243456 INFO nova.scheduler.client.report [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Deleted allocations for instance c1e4150a-4695-4464-a271-378970447180#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.503 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Successfully updated port: f3950212-15c6-462b-a9f9-1f218cfd3914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.537 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.537 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquired lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.538 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.586 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.824 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.833 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(dc1318d480914aa6917e8aaac1e71514) on rbd image(ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:05:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 343 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 187 op/s
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.919 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.938 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.939 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance network_info: |[{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.939 243456 DEBUG oslo_concurrency.lockutils [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.939 243456 DEBUG nova.network.neutron [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.942 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start _get_guest_xml network_info=[{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.949 243456 WARNING nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.962 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.963 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.969 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.970 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.971 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.971 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.972 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.973 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.973 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.973 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.974 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.974 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.975 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.975 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.975 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.976 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:05:07 np0005634017 nova_compute[243452]: 2026-02-28 10:05:07.980 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.029 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.030 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.031 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.031 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.031 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] No waiting events found dispatching network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.032 243456 WARNING nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received unexpected event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for instance with vm_state stopped and task_state image_uploading.#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.032 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-unplugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.032 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.033 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.033 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.033 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] No waiting events found dispatching network-vif-unplugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.034 243456 WARNING nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received unexpected event network-vif-unplugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.034 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.035 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.035 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.035 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.036 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] No waiting events found dispatching network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.036 243456 WARNING nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received unexpected event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.036 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-deleted-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 28 05:05:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 28 05:05:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.448 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk@dc1318d480914aa6917e8aaac1e71514 to images/60ad6e6b-14e8-4ec6-9e02-5b68cbfcac60 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.554 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/60ad6e6b-14e8-4ec6-9e02-5b68cbfcac60 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:05:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093465859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.630 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.651 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.654 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.887 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updating instance_info_cache with network_info: [{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.899 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(dc1318d480914aa6917e8aaac1e71514) on rbd image(ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.916 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Releasing lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.917 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance network_info: |[{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.919 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start _get_guest_xml network_info=[{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.924 243456 WARNING nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.930 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.931 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.935 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.935 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.938 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.938 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.938 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:05:08 np0005634017 nova_compute[243452]: 2026-02-28 10:05:08.940 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.095 243456 DEBUG nova.compute.manager [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-changed-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.096 243456 DEBUG nova.compute.manager [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Refreshing instance network info cache due to event network-changed-f3950212-15c6-462b-a9f9-1f218cfd3914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.096 243456 DEBUG oslo_concurrency.lockutils [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.096 243456 DEBUG oslo_concurrency.lockutils [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.097 243456 DEBUG nova.network.neutron [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Refreshing network info cache for port f3950212-15c6-462b-a9f9-1f218cfd3914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.173 243456 DEBUG nova.network.neutron [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.174 243456 DEBUG nova.network.neutron [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.191 243456 DEBUG oslo_concurrency.lockutils [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960477370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.297 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.298 243456 DEBUG nova.virt.libvirt.vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.298 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.299 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.300 243456 DEBUG nova.objects.instance [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.314 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <name>instance-00000020</name>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:05:07</nova:creationTime>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <entry name="serial">3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <entry name="uuid">3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:07:3c:f5"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <target dev="tap9f44b9f8-b8"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log" append="off"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:05:09 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:05:09 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:05:09 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:05:09 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Preparing to wait for external event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.316 243456 DEBUG nova.virt.libvirt.vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.316 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.317 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.318 243456 DEBUG os_vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.319 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.319 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.325 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.325 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f44b9f8-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.326 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f44b9f8-b8, col_values=(('external_ids', {'iface-id': '9f44b9f8-b888-40e8-be30-f985e3ca11b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:3c:f5', 'vm-uuid': '3a118849-0d0a-4196-9bdd-65333da2e8f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.328 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:09 np0005634017 NetworkManager[49805]: <info>  [1772273109.3295] manager: (tap9f44b9f8-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.337 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.337 243456 INFO os_vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8')#033[00m
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.433 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.434 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.434 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:07:3c:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.435 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Using config drive#033[00m
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.461 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35539494' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.598 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.619 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.623 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.648 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(60ad6e6b-14e8-4ec6-9e02-5b68cbfcac60) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.842 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Creating config drive at /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config#033[00m
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.849 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5g01n1wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 354 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.8 MiB/s wr, 313 op/s
Feb 28 05:05:09 np0005634017 nova_compute[243452]: 2026-02-28 10:05:09.988 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5g01n1wm" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.015 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.019 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.115 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.116 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224204770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.139 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.149 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.150 243456 DEBUG nova.virt.libvirt.vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-803037290',display_name='tempest-ServerMetadataTestJSON-server-803037290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-803037290',id=33,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='135c387aaa024e42b1c3c19237591cf3',ramdisk_id='',reservation_id='r-70lii02n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1667248256',owner_user_name='tempest-ServerMetadataTest
JSON-1667248256-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:04Z,user_data=None,user_id='6bbc470612fa48afb6c2a143ba966473',uuid=bacbbaae-ce23-42df-b5cc-0fc49b2f3741,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.150 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converting VIF {"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.151 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.152 243456 DEBUG nova.objects.instance [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bacbbaae-ce23-42df-b5cc-0fc49b2f3741 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.156 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.156 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Deleting local config drive /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config because it was imported into RBD.#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.165 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <uuid>bacbbaae-ce23-42df-b5cc-0fc49b2f3741</uuid>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <name>instance-00000021</name>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerMetadataTestJSON-server-803037290</nova:name>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:05:08</nova:creationTime>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:user uuid="6bbc470612fa48afb6c2a143ba966473">tempest-ServerMetadataTestJSON-1667248256-project-member</nova:user>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:project uuid="135c387aaa024e42b1c3c19237591cf3">tempest-ServerMetadataTestJSON-1667248256</nova:project>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <nova:port uuid="f3950212-15c6-462b-a9f9-1f218cfd3914">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <entry name="serial">bacbbaae-ce23-42df-b5cc-0fc49b2f3741</entry>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <entry name="uuid">bacbbaae-ce23-42df-b5cc-0fc49b2f3741</entry>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:a3:4a:3e"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <target dev="tapf3950212-15"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/console.log" append="off"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:05:10 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:05:10 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:05:10 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:05:10 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.166 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Preparing to wait for external event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.166 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.167 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.167 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.168 243456 DEBUG nova.virt.libvirt.vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-803037290',display_name='tempest-ServerMetadataTestJSON-server-803037290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-803037290',id=33,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='135c387aaa024e42b1c3c19237591cf3',ramdisk_id='',reservation_id='r-70lii02n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1667248256',owner_user_name='tempest-ServerMe
tadataTestJSON-1667248256-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:04Z,user_data=None,user_id='6bbc470612fa48afb6c2a143ba966473',uuid=bacbbaae-ce23-42df-b5cc-0fc49b2f3741,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.168 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converting VIF {"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.169 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.169 243456 DEBUG os_vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.170 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.171 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.177 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3950212-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.179 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf3950212-15, col_values=(('external_ids', {'iface-id': 'f3950212-15c6-462b-a9f9-1f218cfd3914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:4a:3e', 'vm-uuid': 'bacbbaae-ce23-42df-b5cc-0fc49b2f3741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.1820] manager: (tapf3950212-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.191 243456 DEBUG nova.network.neutron [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updated VIF entry in instance network info cache for port f3950212-15c6-462b-a9f9-1f218cfd3914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.192 243456 DEBUG nova.network.neutron [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updating instance_info_cache with network_info: [{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.199 243456 INFO os_vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15')#033[00m
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.2025] manager: (tap9f44b9f8-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Feb 28 05:05:10 np0005634017 kernel: tap9f44b9f8-b8: entered promiscuous mode
Feb 28 05:05:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:10Z|00210|binding|INFO|Claiming lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 for this chassis.
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:10Z|00211|binding|INFO|9f44b9f8-b888-40e8-be30-f985e3ca11b9: Claiming fa:16:3e:07:3c:f5 10.100.0.12
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.215 243456 DEBUG oslo_concurrency.lockutils [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.215 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:3c:f5 10.100.0.12'], port_security=['fa:16:3e:07:3c:f5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d9a339b-3aab-4fbe-a87a-c3231e7f58e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9f44b9f8-b888-40e8-be30-f985e3ca11b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.217 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis#033[00m
Feb 28 05:05:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:10Z|00212|binding|INFO|Setting lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 ovn-installed in OVS
Feb 28 05:05:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:10Z|00213|binding|INFO|Setting lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 up in Southbound
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.219 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.219 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.221 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 systemd-udevd[272860]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.232 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.233 243456 INFO nova.compute.claims [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:05:10 np0005634017 systemd-machined[209480]: New machine qemu-36-instance-00000020.
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df6e11e3-b6c4-4a9e-8285-18a2c5f0a1e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.235 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60dcefc3-91 in ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.237 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60dcefc3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[672f60b9-56da-4d24-96b8-868c213d1f90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.238 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8898a28b-c1cb-40fb-b247-5bc92796f2c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.2426] device (tap9f44b9f8-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.2432] device (tap9f44b9f8-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.251 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4a57fa7f-0316-4b4d-91da-dd8b8a36b7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.262 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.262 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.263 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] No VIF found with MAC fa:16:3e:a3:4a:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.263 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Using config drive#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.266 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c48dba-7d66-4de6-ba26-8d9d03cb1d36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.288 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.292 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[40279ac7-da0a-4948-8589-14a8c0049f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f9812d3b-c3ab-4ce9-9aa5-a5126ce33c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 systemd-udevd[272864]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.2982] manager: (tap60dcefc3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.324 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[21ef70d2-6796-49bc-a8b4-957981b51238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.327 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f7ae5d-133e-4049-b256-d743037cefd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.3464] device (tap60dcefc3-90): carrier: link connected
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.350 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[650e6035-6d1d-4514-a53a-d20f7468b99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db66daed-e486-4446-86f3-a63aaeffefdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272917, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.381 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3c078a38-3e2d-4a53-9068-ad0472697897]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:227a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461763, 'tstamp': 461763}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272918, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.399 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a447433c-4883-4592-9309-2961dd4c912d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272919, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.403 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.431 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f602b9-4822-4053-9b39-24064e66b041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.495 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[040d7934-e7e3-4cb0-a6f5-73bba8e840a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.497 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.497 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.498 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 NetworkManager[49805]: <info>  [1772273110.5005] manager: (tap60dcefc3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Feb 28 05:05:10 np0005634017 kernel: tap60dcefc3-90: entered promiscuous mode
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.507 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:10Z|00214|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.521 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.522 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fbb1e6-966f-4038-84da-1c412047ff81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.523 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:05:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.523 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'env', 'PROCESS_TAG=haproxy-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60dcefc3-95e1-437e-9c00-e51656c39b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:05:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 28 05:05:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 28 05:05:10 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.732 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273110.731971, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.732 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Started (Lifecycle Event)#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.753 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.760 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273110.7341702, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.761 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.782 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.790 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:10 np0005634017 nova_compute[243452]: 2026-02-28 10:05:10.818 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:10 np0005634017 podman[273023]: 2026-02-28 10:05:10.86659507 +0000 UTC m=+0.046220840 container create 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:05:10 np0005634017 systemd[1]: Started libpod-conmon-016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52.scope.
Feb 28 05:05:10 np0005634017 podman[273023]: 2026-02-28 10:05:10.841519805 +0000 UTC m=+0.021145625 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:05:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:05:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efc450d9fa849624f5c4721cc215a74a11ec38e71507abe88ff27e9bd2546bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:05:10 np0005634017 podman[273023]: 2026-02-28 10:05:10.963866005 +0000 UTC m=+0.143491775 container init 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:05:10 np0005634017 podman[273023]: 2026-02-28 10:05:10.968452864 +0000 UTC m=+0.148078634 container start 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 05:05:10 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [NOTICE]   (273042) : New worker (273044) forked
Feb 28 05:05:10 np0005634017 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [NOTICE]   (273042) : Loading success.
Feb 28 05:05:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109450376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.041 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Creating config drive at /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.045 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpaie5yg5k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.066 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.073 243456 DEBUG nova.compute.provider_tree [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.094 243456 DEBUG nova.scheduler.client.report [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.121 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.121 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.170 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.170 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.175 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpaie5yg5k" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.210 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.215 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.245 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.257 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.257 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.258 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.258 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.259 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Processing event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.259 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.259 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.260 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.260 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.260 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.261 243456 WARNING nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.262 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.267 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273111.2674031, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.268 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.272 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.273 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.284 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance spawned successfully.#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.284 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.289 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.293 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.343 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.344 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.345 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.346 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.347 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.348 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.359 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.360 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.361 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.384 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.384 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deleting local config drive /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config because it was imported into RBD.#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.418 243456 INFO nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Took 8.07 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.418 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:11 np0005634017 kernel: tapf3950212-15: entered promiscuous mode
Feb 28 05:05:11 np0005634017 NetworkManager[49805]: <info>  [1772273111.4366] manager: (tapf3950212-15): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Feb 28 05:05:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:11Z|00215|binding|INFO|Claiming lport f3950212-15c6-462b-a9f9-1f218cfd3914 for this chassis.
Feb 28 05:05:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:11Z|00216|binding|INFO|f3950212-15c6-462b-a9f9-1f218cfd3914: Claiming fa:16:3e:a3:4a:3e 10.100.0.14
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.442 243456 DEBUG nova.policy [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2737540b5d9a437cac0ea91b25f0c5d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.445 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:05:11 np0005634017 systemd-udevd[272900]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.448 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:4a:3e 10.100.0.14'], port_security=['fa:16:3e:a3:4a:3e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bacbbaae-ce23-42df-b5cc-0fc49b2f3741', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '135c387aaa024e42b1c3c19237591cf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ee5bcf8-3b1e-427f-98a7-d823cb130082', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d8b3b33-cfca-4ec3-8a4a-f81d6eb91a80, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f3950212-15c6-462b-a9f9-1f218cfd3914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.450 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f3950212-15c6-462b-a9f9-1f218cfd3914 in datapath 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 bound to our chassis#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.455 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:05:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:11Z|00217|binding|INFO|Setting lport f3950212-15c6-462b-a9f9-1f218cfd3914 ovn-installed in OVS
Feb 28 05:05:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:11Z|00218|binding|INFO|Setting lport f3950212-15c6-462b-a9f9-1f218cfd3914 up in Southbound
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.456 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Creating image(s)#033[00m
Feb 28 05:05:11 np0005634017 NetworkManager[49805]: <info>  [1772273111.4694] device (tapf3950212-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:05:11 np0005634017 NetworkManager[49805]: <info>  [1772273111.4699] device (tapf3950212-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.470 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3340aff-c5e2-4d60-8b05-36758849de58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.473 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7579f8b7-d1 in ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.474 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7579f8b7-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.474 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed652614-a193-4fd2-a5c2-c03804342b41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddcb5bf-1ed2-4842-9999-410132082127]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 systemd-machined[209480]: New machine qemu-37-instance-00000021.
Feb 28 05:05:11 np0005634017 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.488 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdaba6d-0e68-452e-80cc-998b6b009b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.492 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9bc786-7a9c-420d-ae41-c6d7e5ba6f26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.541 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[492cfde2-5ae0-4592-af7d-2ca04a8e147f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 NetworkManager[49805]: <info>  [1772273111.5494] manager: (tap7579f8b7-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.549 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.548 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b0977a62-2aaa-4425-aa26-28188f6ed815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.576 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[66d2e488-176e-48d5-8651-d786e2fd7eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.579 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3a2b19-0083-45dc-9e27-8bb9dc2d23ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.590 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.597 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:11 np0005634017 NetworkManager[49805]: <info>  [1772273111.5999] device (tap7579f8b7-d0): carrier: link connected
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.603 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c42bafed-70cb-4dfa-831d-7709c8c43d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43f82b98-de15-4ef4-8fcf-6bbf8bc7dd5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7579f8b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:d2:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461888, 'reachable_time': 39229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273207, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.638 243456 INFO nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Took 9.34 seconds to build instance.#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.644 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34d29d17-ce45-4930-92e8-1a5d43eb9622]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:d2f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461888, 'tstamp': 461888}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273209, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.653 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6506c63d-ad46-4cc6-bbe1-4d97b2d7740a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7579f8b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:d2:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461888, 'reachable_time': 39229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273210, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.677 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.678 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.678 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.678 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.691 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b39e07d0-a9a2-4bf6-a674-7f8de33995b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.706 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.714 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58898bfe-39f1-4f82-9e17-ce0fea2a47d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7579f8b7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.764 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.764 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7579f8b7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:11 np0005634017 NetworkManager[49805]: <info>  [1772273111.7670] manager: (tap7579f8b7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:11 np0005634017 kernel: tap7579f8b7-d0: entered promiscuous mode
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.773 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7579f8b7-d0, col_values=(('external_ids', {'iface-id': 'c9aff458-0679-4a43-bbf1-87878c1e8a54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:11Z|00219|binding|INFO|Releasing lport c9aff458-0679-4a43-bbf1-87878c1e8a54 from this chassis (sb_readonly=0)
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.793 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.794 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e571e82-9caa-4723-a98b-ce0b4bba6433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.795 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.pid.haproxy
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:05:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.797 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'env', 'PROCESS_TAG=haproxy-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.859 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273111.858253, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.860 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Started (Lifecycle Event)#033[00m
Feb 28 05:05:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 383 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.3 MiB/s wr, 269 op/s
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.898 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273111.858537, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.900 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.919 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.923 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:11 np0005634017 nova_compute[243452]: 2026-02-28 10:05:11.984 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47225562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.045 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.057 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] resizing rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.135 243456 DEBUG nova.objects.instance [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'migration_context' on Instance uuid 75431a43-9412-4ad7-86ef-4f1fc1563b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.153 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.153 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Ensure instance console log exists: /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.153 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.154 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.154 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:12 np0005634017 podman[273384]: 2026-02-28 10:05:12.179631233 +0000 UTC m=+0.053946577 container create 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.193 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.193 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.196 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.196 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.199 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.199 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:05:12 np0005634017 systemd[1]: Started libpod-conmon-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f.scope.
Feb 28 05:05:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:05:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ee18a45b312c8bd14e361b6e1db2c2440bb85e7a95b137a16d7ff81aa888a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:05:12 np0005634017 podman[273384]: 2026-02-28 10:05:12.151362059 +0000 UTC m=+0.025677423 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:05:12 np0005634017 podman[273384]: 2026-02-28 10:05:12.256837424 +0000 UTC m=+0.131152798 container init 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 05:05:12 np0005634017 podman[273384]: 2026-02-28 10:05:12.262784161 +0000 UTC m=+0.137099505 container start 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:05:12 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : New worker (273423) forked
Feb 28 05:05:12 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : Loading success.
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.405 243456 INFO nova.virt.libvirt.driver [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Snapshot image upload complete#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.405 243456 INFO nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 5.29 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.410 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.411 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4057MB free_disk=59.89544444810599GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.412 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.412 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.481 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ff5bf118-ea06-44c0-81f0-0a229162e1d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance bacbbaae-ce23-42df-b5cc-0fc49b2f3741 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 75431a43-9412-4ad7-86ef-4f1fc1563b37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.586 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:12 np0005634017 nova_compute[243452]: 2026-02-28 10:05:12.747 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Successfully created port: ec45afc5-b898-4497-8ddf-4195ba6f8dfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:05:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940111830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.133 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.139 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.154 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.174 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.175 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.332 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.333 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.333 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.334 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.334 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Processing event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.334 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] No waiting events found dispatching network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.336 243456 WARNING nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received unexpected event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.336 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.342 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273113.3405821, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.342 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.346 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.350 243456 INFO nova.virt.libvirt.driver [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance spawned successfully.#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.350 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.364 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.373 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.378 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.378 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.379 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.380 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.380 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.381 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.389 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.434 243456 INFO nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 9.14 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.434 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.510 243456 INFO nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 10.72 seconds to build instance.#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.533 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 28 05:05:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 28 05:05:13 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 28 05:05:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 427 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 258 op/s
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.900 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Successfully updated port: ec45afc5-b898-4497-8ddf-4195ba6f8dfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.927 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.927 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquired lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:13 np0005634017 nova_compute[243452]: 2026-02-28 10:05:13.927 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.112 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.171 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.178 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.180 243456 INFO nova.compute.manager [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Terminating instance#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.181 243456 DEBUG nova.compute.manager [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.187 243456 INFO nova.virt.libvirt.driver [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance destroyed successfully.#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.187 243456 DEBUG nova.objects.instance [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.197 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.198 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.222 243456 DEBUG nova.virt.libvirt.vif [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-52310168',display_name='tempest-ImagesTestJSON-server-52310168',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-52310168',id=30,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-1llqlg3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:12Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=ff5bf118-ea06-44c0-81f0-0a229162e1d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.222 243456 DEBUG nova.network.os_vif_util [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.223 243456 DEBUG nova.network.os_vif_util [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.223 243456 DEBUG os_vif [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.225 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b5f82a-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.230 243456 INFO os_vif [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf')#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.250 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.251 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.492 243456 INFO nova.virt.libvirt.driver [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deleting instance files /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8_del#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.492 243456 INFO nova.virt.libvirt.driver [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deletion of /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8_del complete#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.545 243456 INFO nova.compute.manager [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.545 243456 DEBUG oslo.service.loopingcall [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.546 243456 DEBUG nova.compute.manager [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:05:14 np0005634017 nova_compute[243452]: 2026-02-28 10:05:14.546 243456 DEBUG nova.network.neutron [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.406 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updating instance_info_cache with network_info: [{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.555 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Releasing lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.555 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance network_info: |[{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.557 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start _get_guest_xml network_info=[{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.563 243456 WARNING nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.569 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.570 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.574 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.574 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.575 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.575 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.575 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.580 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.854 243456 DEBUG nova.network.neutron [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.882 243456 DEBUG nova.compute.manager [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.883 243456 DEBUG nova.compute.manager [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.883 243456 DEBUG oslo_concurrency.lockutils [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.883 243456 DEBUG oslo_concurrency.lockutils [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.884 243456 DEBUG nova.network.neutron [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:05:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 334 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 9.2 MiB/s wr, 460 op/s
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.893 243456 INFO nova.compute.manager [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 1.35 seconds to deallocate network for instance.#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.957 243456 DEBUG nova.compute.manager [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-changed-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.957 243456 DEBUG nova.compute.manager [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Refreshing instance network info cache due to event network-changed-ec45afc5-b898-4497-8ddf-4195ba6f8dfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.957 243456 DEBUG oslo_concurrency.lockutils [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.958 243456 DEBUG oslo_concurrency.lockutils [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.958 243456 DEBUG nova.network.neutron [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Refreshing network info cache for port ec45afc5-b898-4497-8ddf-4195ba6f8dfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.993 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:15 np0005634017 nova_compute[243452]: 2026-02-28 10:05:15.993 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517483985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.082 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.112 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.120 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.167 243456 DEBUG oslo_concurrency.processutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271261437' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.651 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.654 243456 DEBUG nova.virt.libvirt.vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-80345680',display_name='tempest-ImagesOneServerNegativeTestJSON-server-80345680',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-80345680',id=34,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-3y3m8m7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:11Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=75431a43-9412-4ad7-86ef-4f1fc1563b37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.655 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.657 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.663 243456 DEBUG nova.objects.instance [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'pci_devices' on Instance uuid 75431a43-9412-4ad7-86ef-4f1fc1563b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.681 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <uuid>75431a43-9412-4ad7-86ef-4f1fc1563b37</uuid>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <name>instance-00000022</name>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-80345680</nova:name>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:05:15</nova:creationTime>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:user uuid="2737540b5d9a437cac0ea91b25f0c5d8">tempest-ImagesOneServerNegativeTestJSON-356581433-project-member</nova:user>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:project uuid="da458887a8634c5a8b9a38fcbcc44e07">tempest-ImagesOneServerNegativeTestJSON-356581433</nova:project>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <nova:port uuid="ec45afc5-b898-4497-8ddf-4195ba6f8dfc">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <entry name="serial">75431a43-9412-4ad7-86ef-4f1fc1563b37</entry>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <entry name="uuid">75431a43-9412-4ad7-86ef-4f1fc1563b37</entry>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/75431a43-9412-4ad7-86ef-4f1fc1563b37_disk">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:d3:62:f9"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <target dev="tapec45afc5-b8"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/console.log" append="off"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:05:16 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:05:16 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:05:16 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:05:16 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.682 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Preparing to wait for external event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.683 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.683 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.684 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.688 243456 DEBUG nova.virt.libvirt.vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-80345680',display_name='tempest-ImagesOneServerNegativeTestJSON-server-80345680',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-80345680',id=34,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-3y3m8m7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:11Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=75431a43-9412-4ad7-86ef-4f1fc1563b37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.689 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.690 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.691 243456 DEBUG os_vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.693 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.693 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.701 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec45afc5-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.701 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec45afc5-b8, col_values=(('external_ids', {'iface-id': 'ec45afc5-b898-4497-8ddf-4195ba6f8dfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:62:f9', 'vm-uuid': '75431a43-9412-4ad7-86ef-4f1fc1563b37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:16 np0005634017 NetworkManager[49805]: <info>  [1772273116.7047] manager: (tapec45afc5-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3646194582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.710 243456 INFO os_vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8')#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.740 243456 DEBUG oslo_concurrency.processutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.756 243456 DEBUG nova.compute.provider_tree [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.780 243456 DEBUG nova.scheduler.client.report [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.804 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.804 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.805 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No VIF found with MAC fa:16:3e:d3:62:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.805 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Using config drive#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.830 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.844 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.886 243456 INFO nova.scheduler.client.report [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance ff5bf118-ea06-44c0-81f0-0a229162e1d8#033[00m
Feb 28 05:05:16 np0005634017 nova_compute[243452]: 2026-02-28 10:05:16.960 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.132 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.133 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.157 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.221 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.222 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.231 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.232 243456 INFO nova.compute.claims [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.244 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Creating config drive at /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.250 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuncqnlxd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.386 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuncqnlxd" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.423 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.429 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.457 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.485 243456 DEBUG nova.network.neutron [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updated VIF entry in instance network info cache for port ec45afc5-b898-4497-8ddf-4195ba6f8dfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.486 243456 DEBUG nova.network.neutron [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updating instance_info_cache with network_info: [{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.513 243456 DEBUG oslo_concurrency.lockutils [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.567 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.567 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deleting local config drive /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config because it was imported into RBD.#033[00m
Feb 28 05:05:17 np0005634017 kernel: tapec45afc5-b8: entered promiscuous mode
Feb 28 05:05:17 np0005634017 NetworkManager[49805]: <info>  [1772273117.6277] manager: (tapec45afc5-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Feb 28 05:05:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:17Z|00220|binding|INFO|Claiming lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc for this chassis.
Feb 28 05:05:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:17Z|00221|binding|INFO|ec45afc5-b898-4497-8ddf-4195ba6f8dfc: Claiming fa:16:3e:d3:62:f9 10.100.0.9
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.641 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:62:f9 10.100.0.9'], port_security=['fa:16:3e:d3:62:f9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '75431a43-9412-4ad7-86ef-4f1fc1563b37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ec45afc5-b898-4497-8ddf-4195ba6f8dfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:17Z|00222|binding|INFO|Setting lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc ovn-installed in OVS
Feb 28 05:05:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:17Z|00223|binding|INFO|Setting lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc up in Southbound
Feb 28 05:05:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.644 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ec45afc5-b898-4497-8ddf-4195ba6f8dfc in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf bound to our chassis#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.647 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf#033[00m
Feb 28 05:05:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:17 np0005634017 systemd-udevd[273646]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:17 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb 28 05:05:17 np0005634017 NetworkManager[49805]: <info>  [1772273117.6696] device (tapec45afc5-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:05:17 np0005634017 NetworkManager[49805]: <info>  [1772273117.6703] device (tapec45afc5-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[629e901d-faaa-4069-a05c-f7e9d170995c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.671 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2eebd3ec-f1 in ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.673 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2eebd3ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec513239-a319-4b81-bfd7-9df0a3309f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.675 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0cda1b5c-c11a-4c1f-9218-58cbae63116e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 systemd-machined[209480]: New machine qemu-38-instance-00000022.
Feb 28 05:05:17 np0005634017 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.692 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6358105b-feb1-4818-aa4d-2bd9a7138879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4593e27-6d8b-4942-b32b-b83ca0d616b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.748 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c20751-a441-4f18-943c-ca6d329050b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 NetworkManager[49805]: <info>  [1772273117.7581] manager: (tap2eebd3ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22a57776-10c1-445f-905b-c5c4db188a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 systemd-udevd[273651]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.792 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f4985078-c92f-496a-a46d-e920d978035e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.795 243456 DEBUG nova.network.neutron [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.795 243456 DEBUG nova.network.neutron [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.797 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[641b38d5-ce95-4cc4-9a6d-8eef8826dbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 NetworkManager[49805]: <info>  [1772273117.8158] device (tap2eebd3ec-f0): carrier: link connected
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.821 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94e27add-3ebe-4243-98e5-0f0847b30c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23a0ace1-5785-446b-a9d2-5f38b62f24c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462510, 'reachable_time': 31121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273684, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.841 243456 DEBUG oslo_concurrency.lockutils [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4cc3ba-b3da-495a-986c-650790d73f7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:321'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462510, 'tstamp': 462510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273685, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdb7671-dc09-4d3b-81c2-7c82785ee5c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462510, 'reachable_time': 31121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273686, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 293 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 489 op/s
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.910 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c7619f-403a-48a6-b36a-51df9bb870f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.976 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbc6348-2485-47cf-bf1d-c42685db988d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.977 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.978 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.978 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eebd3ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:17 np0005634017 NetworkManager[49805]: <info>  [1772273117.9808] manager: (tap2eebd3ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Feb 28 05:05:17 np0005634017 kernel: tap2eebd3ec-f0: entered promiscuous mode
Feb 28 05:05:17 np0005634017 nova_compute[243452]: 2026-02-28 10:05:17.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.985 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2eebd3ec-f0, col_values=(('external_ids', {'iface-id': '6b6cc396-2618-4c5f-8702-0c03569c876b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.989 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:05:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:17Z|00224|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.995 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a0f5fd-7264-4240-a34a-d8ba876b345b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.996 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:05:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.997 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'env', 'PROCESS_TAG=haproxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2587161045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.035 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.042 243456 DEBUG nova.compute.provider_tree [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.054 243456 DEBUG nova.compute.manager [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-deleted-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.055 243456 DEBUG nova.compute.manager [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.055 243456 DEBUG oslo_concurrency.lockutils [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.056 243456 DEBUG oslo_concurrency.lockutils [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.056 243456 DEBUG oslo_concurrency.lockutils [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.056 243456 DEBUG nova.compute.manager [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Processing event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.062 243456 DEBUG nova.scheduler.client.report [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.140 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.141 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.193 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.193 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.212 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.226 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.312 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.313 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.313 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Creating image(s)#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.341 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.372 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.398 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.404 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:18 np0005634017 podman[273736]: 2026-02-28 10:05:18.41651758 +0000 UTC m=+0.053293759 container create b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.441 243456 DEBUG nova.policy [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:05:18 np0005634017 systemd[1]: Started libpod-conmon-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90.scope.
Feb 28 05:05:18 np0005634017 podman[273736]: 2026-02-28 10:05:18.391022284 +0000 UTC m=+0.027798483 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.487 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.487 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.488 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.488 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:05:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cedf4787cc345b38d5f64c843dc01ce5e7235c7fb10d6e0c8b426d33046e626/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:05:18 np0005634017 podman[273736]: 2026-02-28 10:05:18.512313324 +0000 UTC m=+0.149089523 container init b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:05:18 np0005634017 podman[273736]: 2026-02-28 10:05:18.519808844 +0000 UTC m=+0.156585023 container start b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:05:18 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : New worker (273851) forked
Feb 28 05:05:18 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : Loading success.
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.543 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.555 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 abcde488-a508-4239-9f40-28af252a1cd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.607 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.608 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273118.6062098, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.608 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Started (Lifecycle Event)#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.613 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.618 243456 INFO nova.virt.libvirt.driver [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance spawned successfully.#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.619 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.643 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.651 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.651 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.651 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.652 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.652 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.652 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.656 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.685 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.685 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273118.608918, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.685 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.712 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.720 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273118.6127074, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.721 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.728 243456 INFO nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 7.28 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.729 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.770 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.775 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.798 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 abcde488-a508-4239-9f40-28af252a1cd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.830 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.843 243456 INFO nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 8.66 seconds to build instance.#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.881 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.886 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:05:18 np0005634017 nova_compute[243452]: 2026-02-28 10:05:18.995 243456 DEBUG nova.objects.instance [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.014 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.014 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.015 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.015 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.015 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.016 243456 INFO nova.compute.manager [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Terminating instance#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.017 243456 DEBUG nova.compute.manager [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.018 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Ensure instance console log exists: /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:19 np0005634017 kernel: tapf3950212-15 (unregistering): left promiscuous mode
Feb 28 05:05:19 np0005634017 NetworkManager[49805]: <info>  [1772273119.0566] device (tapf3950212-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:05:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:19Z|00225|binding|INFO|Releasing lport f3950212-15c6-462b-a9f9-1f218cfd3914 from this chassis (sb_readonly=0)
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:19Z|00226|binding|INFO|Setting lport f3950212-15c6-462b-a9f9-1f218cfd3914 down in Southbound
Feb 28 05:05:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:19Z|00227|binding|INFO|Removing iface tapf3950212-15 ovn-installed in OVS
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.073 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:4a:3e 10.100.0.14'], port_security=['fa:16:3e:a3:4a:3e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bacbbaae-ce23-42df-b5cc-0fc49b2f3741', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '135c387aaa024e42b1c3c19237591cf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ee5bcf8-3b1e-427f-98a7-d823cb130082', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d8b3b33-cfca-4ec3-8a4a-f81d6eb91a80, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f3950212-15c6-462b-a9f9-1f218cfd3914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.075 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f3950212-15c6-462b-a9f9-1f218cfd3914 in datapath 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 unbound from our chassis#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.086 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c04e84-0da7-469f-99a6-dfb184430a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.088 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 namespace which is not needed anymore#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Feb 28 05:05:19 np0005634017 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 6.074s CPU time.
Feb 28 05:05:19 np0005634017 systemd-machined[209480]: Machine qemu-37-instance-00000021 terminated.
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.123 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Successfully created port: d21b0d77-e339-46dd-a448-d8c3473dc8e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:05:19 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : haproxy version is 2.8.14-c23fe91
Feb 28 05:05:19 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : path to executable is /usr/sbin/haproxy
Feb 28 05:05:19 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [WARNING]  (273421) : Exiting Master process...
Feb 28 05:05:19 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [WARNING]  (273421) : Exiting Master process...
Feb 28 05:05:19 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [ALERT]    (273421) : Current worker (273423) exited with code 143 (Terminated)
Feb 28 05:05:19 np0005634017 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [WARNING]  (273421) : All workers exited. Exiting... (0)
Feb 28 05:05:19 np0005634017 systemd[1]: libpod-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f.scope: Deactivated successfully.
Feb 28 05:05:19 np0005634017 podman[273975]: 2026-02-28 10:05:19.232920102 +0000 UTC m=+0.053245708 container died 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.252 243456 INFO nova.virt.libvirt.driver [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance destroyed successfully.#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.253 243456 DEBUG nova.objects.instance [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lazy-loading 'resources' on Instance uuid bacbbaae-ce23-42df-b5cc-0fc49b2f3741 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f-userdata-shm.mount: Deactivated successfully.
Feb 28 05:05:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay-39ee18a45b312c8bd14e361b6e1db2c2440bb85e7a95b137a16d7ff81aa888a6-merged.mount: Deactivated successfully.
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.269 243456 DEBUG nova.virt.libvirt.vif [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-803037290',display_name='tempest-ServerMetadataTestJSON-server-803037290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-803037290',id=33,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='135c387aaa024e42b1c3c19237591cf3',ramdisk_id='',reservation_id='r-70lii02n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vir
tio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1667248256',owner_user_name='tempest-ServerMetadataTestJSON-1667248256-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='6bbc470612fa48afb6c2a143ba966473',uuid=bacbbaae-ce23-42df-b5cc-0fc49b2f3741,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.270 243456 DEBUG nova.network.os_vif_util [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converting VIF {"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.272 243456 DEBUG nova.network.os_vif_util [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.273 243456 DEBUG os_vif [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:05:19 np0005634017 podman[273975]: 2026-02-28 10:05:19.274959894 +0000 UTC m=+0.095285500 container cleanup 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.276 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3950212-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 systemd[1]: libpod-conmon-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f.scope: Deactivated successfully.
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.283 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.285 243456 INFO os_vif [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15')#033[00m
Feb 28 05:05:19 np0005634017 podman[274013]: 2026-02-28 10:05:19.351177747 +0000 UTC m=+0.050226463 container remove 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd8ee2b-5d03-4c98-8c17-a94b0c7e991b]: (4, ('Sat Feb 28 10:05:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 (86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f)\n86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f\nSat Feb 28 10:05:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 (86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f)\n86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.361 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37cf802e-37cb-4e7d-a3ca-04b74c187a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.362 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7579f8b7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.365 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 kernel: tap7579f8b7-d0: left promiscuous mode
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48932005-92a7-4cb5-a9ec-58e73f274f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68f497fb-95ab-4e3f-b5e6-c16c9c770cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8cdcff-f29e-4bfb-b7e1-1282968d1ef0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.418 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f71edcb-4570-4f42-b474-672bb23c8342]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461882, 'reachable_time': 42596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274046, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 systemd[1]: run-netns-ovnmeta\x2d7579f8b7\x2ddbd7\x2d4cb9\x2db8d9\x2de1f48bd1c7c8.mount: Deactivated successfully.
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.423 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:05:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.423 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b5e105-d05e-4ff1-bfea-0963261a359d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.555 243456 INFO nova.virt.libvirt.driver [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deleting instance files /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_del#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.556 243456 INFO nova.virt.libvirt.driver [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deletion of /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_del complete#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.620 243456 INFO nova.compute.manager [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.621 243456 DEBUG oslo.service.loopingcall [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.621 243456 DEBUG nova.compute.manager [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.622 243456 DEBUG nova.network.neutron [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:05:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 302 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.9 MiB/s wr, 516 op/s
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.980 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.980 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.981 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.982 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.982 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.983 243456 INFO nova.compute.manager [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Terminating instance#033[00m
Feb 28 05:05:19 np0005634017 nova_compute[243452]: 2026-02-28 10:05:19.984 243456 DEBUG nova.compute.manager [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:05:20 np0005634017 kernel: tapec45afc5-b8 (unregistering): left promiscuous mode
Feb 28 05:05:20 np0005634017 NetworkManager[49805]: <info>  [1772273120.0283] device (tapec45afc5-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:05:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:20Z|00228|binding|INFO|Releasing lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc from this chassis (sb_readonly=0)
Feb 28 05:05:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:20Z|00229|binding|INFO|Setting lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc down in Southbound
Feb 28 05:05:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:20Z|00230|binding|INFO|Removing iface tapec45afc5-b8 ovn-installed in OVS
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.045 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:62:f9 10.100.0.9'], port_security=['fa:16:3e:d3:62:f9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '75431a43-9412-4ad7-86ef-4f1fc1563b37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ec45afc5-b898-4497-8ddf-4195ba6f8dfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.053 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ec45afc5-b898-4497-8ddf-4195ba6f8dfc in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf unbound from our chassis#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.055 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[39e194a0-45b3-4e7d-9cbc-976ccd28c4ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.062 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace which is not needed anymore#033[00m
Feb 28 05:05:20 np0005634017 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Feb 28 05:05:20 np0005634017 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 2.212s CPU time.
Feb 28 05:05:20 np0005634017 systemd-machined[209480]: Machine qemu-38-instance-00000022 terminated.
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.132 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] No waiting events found dispatching network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 WARNING nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received unexpected event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-unplugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] No waiting events found dispatching network-vif-unplugged-f3950212-15c6-462b-a9f9-1f218cfd3914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-unplugged-f3950212-15c6-462b-a9f9-1f218cfd3914 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.137 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.137 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.137 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] No waiting events found dispatching network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.138 243456 WARNING nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received unexpected event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.227 243456 INFO nova.virt.libvirt.driver [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance destroyed successfully.#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.227 243456 DEBUG nova.objects.instance [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'resources' on Instance uuid 75431a43-9412-4ad7-86ef-4f1fc1563b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:20 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : haproxy version is 2.8.14-c23fe91
Feb 28 05:05:20 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : path to executable is /usr/sbin/haproxy
Feb 28 05:05:20 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [WARNING]  (273843) : Exiting Master process...
Feb 28 05:05:20 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [ALERT]    (273843) : Current worker (273851) exited with code 143 (Terminated)
Feb 28 05:05:20 np0005634017 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [WARNING]  (273843) : All workers exited. Exiting... (0)
Feb 28 05:05:20 np0005634017 systemd[1]: libpod-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90.scope: Deactivated successfully.
Feb 28 05:05:20 np0005634017 podman[274067]: 2026-02-28 10:05:20.24693607 +0000 UTC m=+0.060930974 container died b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.243 243456 DEBUG nova.virt.libvirt.vif [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-80345680',display_name='tempest-ImagesOneServerNegativeTestJSON-server-80345680',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-80345680',id=34,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-3y3m8m7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=75431a43-9412-4ad7-86ef-4f1fc1563b37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.248 243456 DEBUG nova.network.os_vif_util [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.251 243456 DEBUG nova.network.os_vif_util [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.252 243456 DEBUG os_vif [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.254 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec45afc5-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.264 243456 INFO os_vif [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8')#033[00m
Feb 28 05:05:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90-userdata-shm.mount: Deactivated successfully.
Feb 28 05:05:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4cedf4787cc345b38d5f64c843dc01ce5e7235c7fb10d6e0c8b426d33046e626-merged.mount: Deactivated successfully.
Feb 28 05:05:20 np0005634017 podman[274067]: 2026-02-28 10:05:20.305125126 +0000 UTC m=+0.119120010 container cleanup b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:05:20 np0005634017 systemd[1]: libpod-conmon-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90.scope: Deactivated successfully.
Feb 28 05:05:20 np0005634017 podman[274124]: 2026-02-28 10:05:20.375893355 +0000 UTC m=+0.048340610 container remove b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[833d084f-0bea-4371-bea1-d4920f813979]: (4, ('Sat Feb 28 10:05:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90)\nb18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90\nSat Feb 28 10:05:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90)\nb18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[33ae6f05-4c83-4016-bcac-3bd5ce211fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.385 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:20 np0005634017 kernel: tap2eebd3ec-f0: left promiscuous mode
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[005b25f5-e600-4e63-8fe7-7ffa8145e922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.399 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.401 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273105.3794076, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.401 243456 INFO nova.compute.manager [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[21018f92-8176-41da-8f53-f11dd48b2132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.418 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e27e228-016a-4f6c-b184-516f69aeb710]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.432 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce95cd9-58e8-4ae6-8be1-c12a81d6f2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462502, 'reachable_time': 16840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274138, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 systemd[1]: run-netns-ovnmeta\x2d2eebd3ec\x2df7d4\x2d4881\x2d813e\x2d8d884cdcadaf.mount: Deactivated successfully.
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.436 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:05:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.437 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4480d7e1-0e50-4bcb-9a0f-dff7848d0f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.436 243456 DEBUG nova.compute.manager [None req-3885b83d-03eb-4e40-8479-9ef49b904a02 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.439 243456 DEBUG nova.network.neutron [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.468 243456 INFO nova.compute.manager [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 0.85 seconds to deallocate network for instance.#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.494 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Successfully updated port: d21b0d77-e339-46dd-a448-d8c3473dc8e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.495 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273105.493643, c1e4150a-4695-4464-a271-378970447180 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.496 243456 INFO nova.compute.manager [-] [instance: c1e4150a-4695-4464-a271-378970447180] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.524 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.524 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.524 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.528 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.528 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.530 243456 DEBUG nova.compute.manager [None req-9c292224-10dc-45df-90ba-7813055acfc0 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.570 243456 INFO nova.virt.libvirt.driver [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deleting instance files /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37_del#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.572 243456 INFO nova.virt.libvirt.driver [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deletion of /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37_del complete#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.634 243456 INFO nova.compute.manager [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.635 243456 DEBUG oslo.service.loopingcall [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.635 243456 DEBUG nova.compute.manager [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.635 243456 DEBUG nova.network.neutron [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.639 243456 DEBUG oslo_concurrency.processutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:20 np0005634017 nova_compute[243452]: 2026-02-28 10:05:20.785 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:05:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678683351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.187 243456 DEBUG oslo_concurrency.processutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.195 243456 DEBUG nova.compute.provider_tree [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.224 243456 DEBUG nova.scheduler.client.report [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.340 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.396 243456 DEBUG nova.network.neutron [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.402 243456 INFO nova.scheduler.client.report [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Deleted allocations for instance bacbbaae-ce23-42df-b5cc-0fc49b2f3741#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.426 243456 INFO nova.compute.manager [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 0.79 seconds to deallocate network for instance.#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.512 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.513 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.519 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.549 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updating instance_info_cache with network_info: [{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.568 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.569 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance network_info: |[{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.572 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start _get_guest_xml network_info=[{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.582 243456 WARNING nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.591 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.593 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.606 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.607 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.607 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.608 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.608 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.610 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.610 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.610 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.613 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:21 np0005634017 nova_compute[243452]: 2026-02-28 10:05:21.651 243456 DEBUG oslo_concurrency.processutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 311 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.0 MiB/s wr, 539 op/s
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583209275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.187 243456 DEBUG oslo_concurrency.processutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.200 243456 DEBUG nova.compute.provider_tree [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231054614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.223 243456 DEBUG nova.scheduler.client.report [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.229 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.252 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.259 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.292 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-unplugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.293 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.293 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] No waiting events found dispatching network-vif-unplugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 WARNING nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received unexpected event network-vif-unplugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-deleted-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-changed-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Refreshing instance network info cache due to event network-changed-d21b0d77-e339-46dd-a448-d8c3473dc8e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.295 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.295 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.295 243456 DEBUG nova.network.neutron [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Refreshing network info cache for port d21b0d77-e339-46dd-a448-d8c3473dc8e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.297 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.340 243456 INFO nova.scheduler.client.report [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Deleted allocations for instance 75431a43-9412-4ad7-86ef-4f1fc1563b37#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.415 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:05:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4023284657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.777 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.779 243456 DEBUG nova.virt.libvirt.vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1645656228',display_name='tempest-ImagesTestJSON-server-1645656228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1645656228',id=35,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-649wx7we',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=abcde488-a508-4239-9f40-28af252a1cd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.779 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.780 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.781 243456 DEBUG nova.objects.instance [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.800 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <uuid>abcde488-a508-4239-9f40-28af252a1cd3</uuid>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <name>instance-00000023</name>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:name>tempest-ImagesTestJSON-server-1645656228</nova:name>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:05:21</nova:creationTime>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <nova:port uuid="d21b0d77-e339-46dd-a448-d8c3473dc8e8">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <entry name="serial">abcde488-a508-4239-9f40-28af252a1cd3</entry>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <entry name="uuid">abcde488-a508-4239-9f40-28af252a1cd3</entry>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/abcde488-a508-4239-9f40-28af252a1cd3_disk">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/abcde488-a508-4239-9f40-28af252a1cd3_disk.config">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:10:3d:54"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <target dev="tapd21b0d77-e3"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/console.log" append="off"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:05:22 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:05:22 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:05:22 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:05:22 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.801 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Preparing to wait for external event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.802 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.802 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.803 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.803 243456 DEBUG nova.virt.libvirt.vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1645656228',display_name='tempest-ImagesTestJSON-server-1645656228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1645656228',id=35,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-649wx7we',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=abcde488-a508-4239-9f40-28af252a1cd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.804 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.805 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.805 243456 DEBUG os_vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.807 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.813 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd21b0d77-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.813 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd21b0d77-e3, col_values=(('external_ids', {'iface-id': 'd21b0d77-e339-46dd-a448-d8c3473dc8e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:3d:54', 'vm-uuid': 'abcde488-a508-4239-9f40-28af252a1cd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:22 np0005634017 NetworkManager[49805]: <info>  [1772273122.8166] manager: (tapd21b0d77-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.822 243456 INFO os_vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3')#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.887 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.889 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.889 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:10:3d:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.890 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Using config drive#033[00m
Feb 28 05:05:22 np0005634017 nova_compute[243452]: 2026-02-28 10:05:22.915 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.402 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Creating config drive at /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.406 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn7w9ylhg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.542 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn7w9ylhg" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.575 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.580 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config abcde488-a508-4239-9f40-28af252a1cd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.699 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config abcde488-a508-4239-9f40-28af252a1cd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.702 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deleting local config drive /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config because it was imported into RBD.#033[00m
Feb 28 05:05:23 np0005634017 kernel: tapd21b0d77-e3: entered promiscuous mode
Feb 28 05:05:23 np0005634017 NetworkManager[49805]: <info>  [1772273123.7610] manager: (tapd21b0d77-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Feb 28 05:05:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:23Z|00231|binding|INFO|Claiming lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 for this chassis.
Feb 28 05:05:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:23Z|00232|binding|INFO|d21b0d77-e339-46dd-a448-d8c3473dc8e8: Claiming fa:16:3e:10:3d:54 10.100.0.8
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.773 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:3d:54 10.100.0.8'], port_security=['fa:16:3e:10:3d:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'abcde488-a508-4239-9f40-28af252a1cd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d21b0d77-e339-46dd-a448-d8c3473dc8e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.774 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d21b0d77-e339-46dd-a448-d8c3473dc8e8 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis#033[00m
Feb 28 05:05:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:23Z|00233|binding|INFO|Setting lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 ovn-installed in OVS
Feb 28 05:05:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:23Z|00234|binding|INFO|Setting lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 up in Southbound
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.776 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:23 np0005634017 systemd-udevd[274318]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77cd87bb-c708-4de5-9636-49ca84da256a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.790 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.792 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9df3e24f-ec57-4eb1-97a0-4364d2db0968]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.793 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b47bae6c-f0f4-4ce6-9c05-db5dfd598613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 NetworkManager[49805]: <info>  [1772273123.7991] device (tapd21b0d77-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:05:23 np0005634017 NetworkManager[49805]: <info>  [1772273123.7997] device (tapd21b0d77-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:05:23 np0005634017 systemd-machined[209480]: New machine qemu-39-instance-00000023.
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.809 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc01839-3c7c-4cc8-ad00-4b88a7e7a735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.820 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6187dd23-727b-4536-af93-5c170fb8817b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.830 243456 DEBUG nova.network.neutron [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updated VIF entry in instance network info cache for port d21b0d77-e339-46dd-a448-d8c3473dc8e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.830 243456 DEBUG nova.network.neutron [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updating instance_info_cache with network_info: [{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.847 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d64e520e-aed8-4684-874f-6dadb824808e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[954e7ff3-48ae-4ea7-a95e-3bbb2f1ca019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 NetworkManager[49805]: <info>  [1772273123.8547] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.858 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] No waiting events found dispatching network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.860 243456 WARNING nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received unexpected event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:05:23 np0005634017 nova_compute[243452]: 2026-02-28 10:05:23.860 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-deleted-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.883 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4572b330-1a44-4f6a-98ae-50468c4a6e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 279 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 377 op/s
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.888 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d306e5f9-b84b-4e0c-bcdb-6682790ef18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 NetworkManager[49805]: <info>  [1772273123.9113] device (tap3a8395bc-d0): carrier: link connected
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.917 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[62d94987-9f30-46f6-a241-3204afed7df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9209ab05-bb46-4687-9e69-5255c6482ae8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463119, 'reachable_time': 39338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274352, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27a506ff-cf9f-47dc-9b2a-0c0180e301bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463119, 'tstamp': 463119}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274353, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46f6d97b-9d64-4614-8e49-2b7ff12d2aec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463119, 'reachable_time': 39338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274354, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c97db1fe-6848-4c70-9db4-429f081142d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.084 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[56380812-0c77-4e54-b96f-ec64345a138b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.086 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:24 np0005634017 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:24 np0005634017 NetworkManager[49805]: <info>  [1772273124.0929] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.093 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:24Z|00235|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.096 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.096 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e08faa4-bd84-45c0-a332-6a71df564e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.097 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:05:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.098 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.155 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273124.1551216, abcde488-a508-4239-9f40-28af252a1cd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.156 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Started (Lifecycle Event)#033[00m
Feb 28 05:05:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:24Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:3c:f5 10.100.0.12
Feb 28 05:05:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:24Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:3c:f5 10.100.0.12
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.220 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.224 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273124.1552832, abcde488-a508-4239-9f40-28af252a1cd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.389 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.447 243456 DEBUG nova.compute.manager [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.448 243456 DEBUG oslo_concurrency.lockutils [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.449 243456 DEBUG oslo_concurrency.lockutils [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.449 243456 DEBUG oslo_concurrency.lockutils [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.450 243456 DEBUG nova.compute.manager [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Processing event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.451 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.455 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.457 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.458 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273124.454519, abcde488-a508-4239-9f40-28af252a1cd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.458 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.462 243456 INFO nova.virt.libvirt.driver [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance spawned successfully.#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.463 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.481 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.487 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.492 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.493 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.493 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.494 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.494 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.495 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.504 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:05:24 np0005634017 podman[274428]: 2026-02-28 10:05:24.506005456 +0000 UTC m=+0.055099440 container create f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:05:24 np0005634017 systemd[1]: Started libpod-conmon-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828.scope.
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.547 243456 INFO nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 6.24 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.549 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:24 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:05:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac069718318c7c6e38b19b80ce17d545aac543570ffaf0e081acaac88f751402/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:05:24 np0005634017 podman[274428]: 2026-02-28 10:05:24.479106109 +0000 UTC m=+0.028200123 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:05:24 np0005634017 podman[274428]: 2026-02-28 10:05:24.582775924 +0000 UTC m=+0.131869908 container init f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:05:24 np0005634017 podman[274428]: 2026-02-28 10:05:24.588712981 +0000 UTC m=+0.137806965 container start f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:05:24 np0005634017 podman[274441]: 2026-02-28 10:05:24.59723499 +0000 UTC m=+0.055999675 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:05:24 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : New worker (274487) forked
Feb 28 05:05:24 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : Loading success.
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.614 243456 INFO nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 7.41 seconds to build instance.#033[00m
Feb 28 05:05:24 np0005634017 podman[274440]: 2026-02-28 10:05:24.628318904 +0000 UTC m=+0.090094274 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, 
io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true)
Feb 28 05:05:24 np0005634017 nova_compute[243452]: 2026-02-28 10:05:24.630 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:25Z|00236|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:05:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:25Z|00237|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 05:05:25 np0005634017 nova_compute[243452]: 2026-02-28 10:05:25.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 274 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 397 op/s
Feb 28 05:05:25 np0005634017 nova_compute[243452]: 2026-02-28 10:05:25.965 243456 DEBUG nova.objects.instance [None req-bd572c81-7ac0-45c8-853b-f4afc2c9f830 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:25 np0005634017 nova_compute[243452]: 2026-02-28 10:05:25.989 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273125.989279, abcde488-a508-4239-9f40-28af252a1cd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:25 np0005634017 nova_compute[243452]: 2026-02-28 10:05:25.990 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.011 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.015 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.034 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:05:26 np0005634017 kernel: tapd21b0d77-e3 (unregistering): left promiscuous mode
Feb 28 05:05:26 np0005634017 NetworkManager[49805]: <info>  [1772273126.3813] device (tapd21b0d77-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:05:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:26Z|00238|binding|INFO|Releasing lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 from this chassis (sb_readonly=0)
Feb 28 05:05:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:26Z|00239|binding|INFO|Setting lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 down in Southbound
Feb 28 05:05:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:26Z|00240|binding|INFO|Removing iface tapd21b0d77-e3 ovn-installed in OVS
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.392 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.401 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:3d:54 10.100.0.8'], port_security=['fa:16:3e:10:3d:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'abcde488-a508-4239-9f40-28af252a1cd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d21b0d77-e339-46dd-a448-d8c3473dc8e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.403 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d21b0d77-e339-46dd-a448-d8c3473dc8e8 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.404 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[770ae92e-36ef-4446-940e-5fb1acb2ac1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.406 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:26 np0005634017 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Feb 28 05:05:26 np0005634017 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 1.988s CPU time.
Feb 28 05:05:26 np0005634017 systemd-machined[209480]: Machine qemu-39-instance-00000023 terminated.
Feb 28 05:05:26 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : haproxy version is 2.8.14-c23fe91
Feb 28 05:05:26 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : path to executable is /usr/sbin/haproxy
Feb 28 05:05:26 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [WARNING]  (274479) : Exiting Master process...
Feb 28 05:05:26 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [ALERT]    (274479) : Current worker (274487) exited with code 143 (Terminated)
Feb 28 05:05:26 np0005634017 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [WARNING]  (274479) : All workers exited. Exiting... (0)
Feb 28 05:05:26 np0005634017 systemd[1]: libpod-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828.scope: Deactivated successfully.
Feb 28 05:05:26 np0005634017 podman[274524]: 2026-02-28 10:05:26.541848529 +0000 UTC m=+0.045128030 container died f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG nova.compute.manager [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG oslo_concurrency.lockutils [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG oslo_concurrency.lockutils [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG oslo_concurrency.lockutils [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.552 243456 DEBUG nova.compute.manager [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] No waiting events found dispatching network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.552 243456 WARNING nova.compute.manager [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received unexpected event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 for instance with vm_state active and task_state suspending.#033[00m
Feb 28 05:05:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828-userdata-shm.mount: Deactivated successfully.
Feb 28 05:05:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ac069718318c7c6e38b19b80ce17d545aac543570ffaf0e081acaac88f751402-merged.mount: Deactivated successfully.
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.564 243456 DEBUG nova.compute.manager [None req-bd572c81-7ac0-45c8-853b-f4afc2c9f830 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:26 np0005634017 podman[274524]: 2026-02-28 10:05:26.570172925 +0000 UTC m=+0.073452426 container cleanup f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:05:26 np0005634017 systemd[1]: libpod-conmon-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828.scope: Deactivated successfully.
Feb 28 05:05:26 np0005634017 podman[274559]: 2026-02-28 10:05:26.630338507 +0000 UTC m=+0.040032737 container remove f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.635 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[231cdbd5-8764-459d-8c46-2e83e8518879]: (4, ('Sat Feb 28 10:05:26 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828)\nf00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828\nSat Feb 28 10:05:26 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828)\nf00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.638 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dba105b7-f4ab-4817-aa16-cd80456da898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.639 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:26 np0005634017 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 05:05:26 np0005634017 nova_compute[243452]: 2026-02-28 10:05:26.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.655 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26443083-cbac-4921-8bd0-31a86bc80738]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.682 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0251ac0-a751-470f-9397-f91604877c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.684 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce34f86-4dcc-4773-8068-39ab3db1162c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.698 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54bb2b8f-0f48-4735-ac47-40a73aa13d87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463113, 'reachable_time': 15771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274579, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.700 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:05:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.700 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0953951f-d82a-4f98-85c1-d260bcc8cf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:26 np0005634017 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 05:05:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:27Z|00241|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 05:05:27 np0005634017 nova_compute[243452]: 2026-02-28 10:05:27.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:27 np0005634017 nova_compute[243452]: 2026-02-28 10:05:27.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.7 MiB/s wr, 354 op/s
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.814 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-unplugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.815 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.815 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.816 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.816 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] No waiting events found dispatching network-vif-unplugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.817 243456 WARNING nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received unexpected event network-vif-unplugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.817 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.818 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.818 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.819 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.819 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] No waiting events found dispatching network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:28 np0005634017 nova_compute[243452]: 2026-02-28 10:05:28.820 243456 WARNING nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received unexpected event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:05:29 np0005634017 nova_compute[243452]: 2026-02-28 10:05:29.026 243456 DEBUG nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:05:29
Feb 28 05:05:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:05:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:05:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Feb 28 05:05:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:05:29 np0005634017 nova_compute[243452]: 2026-02-28 10:05:29.074 243456 INFO nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] instance snapshotting#033[00m
Feb 28 05:05:29 np0005634017 nova_compute[243452]: 2026-02-28 10:05:29.075 243456 WARNING nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Feb 28 05:05:29 np0005634017 nova_compute[243452]: 2026-02-28 10:05:29.532 243456 INFO nova.virt.libvirt.driver [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Beginning cold snapshot process#033[00m
Feb 28 05:05:29 np0005634017 nova_compute[243452]: 2026-02-28 10:05:29.690 243456 DEBUG nova.virt.libvirt.imagebackend [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:05:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.1 MiB/s wr, 322 op/s
Feb 28 05:05:30 np0005634017 nova_compute[243452]: 2026-02-28 10:05:30.059 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(faa26cc3cc7e4bb3b6ff32fe0bf39739) on rbd image(abcde488-a508-4239-9f40-28af252a1cd3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:05:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:05:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb 28 05:05:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb 28 05:05:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb 28 05:05:31 np0005634017 nova_compute[243452]: 2026-02-28 10:05:31.053 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/abcde488-a508-4239-9f40-28af252a1cd3_disk@faa26cc3cc7e4bb3b6ff32fe0bf39739 to images/830ebb40-f7a7-4e0f-9487-d755555148cb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:05:31 np0005634017 nova_compute[243452]: 2026-02-28 10:05:31.229 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/830ebb40-f7a7-4e0f-9487-d755555148cb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:05:31 np0005634017 nova_compute[243452]: 2026-02-28 10:05:31.477 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(faa26cc3cc7e4bb3b6ff32fe0bf39739) on rbd image(abcde488-a508-4239-9f40-28af252a1cd3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:05:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Feb 28 05:05:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb 28 05:05:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb 28 05:05:32 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.096 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(830ebb40-f7a7-4e0f-9487-d755555148cb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:05:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.918 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.919 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.919 243456 DEBUG nova.objects.instance [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.946 243456 DEBUG nova.objects.instance [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:32 np0005634017 nova_compute[243452]: 2026-02-28 10:05:32.957 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:05:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb 28 05:05:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb 28 05:05:33 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb 28 05:05:33 np0005634017 nova_compute[243452]: 2026-02-28 10:05:33.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:33 np0005634017 nova_compute[243452]: 2026-02-28 10:05:33.829 243456 DEBUG nova.policy [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:05:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 289 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 814 KiB/s wr, 97 op/s
Feb 28 05:05:33 np0005634017 nova_compute[243452]: 2026-02-28 10:05:33.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:34 np0005634017 nova_compute[243452]: 2026-02-28 10:05:34.250 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273119.2485564, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:34 np0005634017 nova_compute[243452]: 2026-02-28 10:05:34.251 243456 INFO nova.compute.manager [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:05:34 np0005634017 nova_compute[243452]: 2026-02-28 10:05:34.276 243456 DEBUG nova.compute.manager [None req-37bd679f-b975-4134-8a60-77b246aac42f - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:34 np0005634017 nova_compute[243452]: 2026-02-28 10:05:34.412 243456 INFO nova.virt.libvirt.driver [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Snapshot image upload complete#033[00m
Feb 28 05:05:34 np0005634017 nova_compute[243452]: 2026-02-28 10:05:34.413 243456 INFO nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 5.34 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:05:34 np0005634017 nova_compute[243452]: 2026-02-28 10:05:34.824 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully created port: 5de28374-dbe4-4c8d-9f73-047a368cc895 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:05:35 np0005634017 nova_compute[243452]: 2026-02-28 10:05:35.225 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273120.2235603, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:35 np0005634017 nova_compute[243452]: 2026-02-28 10:05:35.225 243456 INFO nova.compute.manager [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:05:35 np0005634017 nova_compute[243452]: 2026-02-28 10:05:35.254 243456 DEBUG nova.compute.manager [None req-037950e8-55a8-44b7-9278-d28fe3f43d98 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Feb 28 05:05:36 np0005634017 nova_compute[243452]: 2026-02-28 10:05:36.534 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: 5de28374-dbe4-4c8d-9f73-047a368cc895 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:36 np0005634017 nova_compute[243452]: 2026-02-28 10:05:36.558 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:36 np0005634017 nova_compute[243452]: 2026-02-28 10:05:36.558 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:36 np0005634017 nova_compute[243452]: 2026-02-28 10:05:36.558 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:05:36 np0005634017 nova_compute[243452]: 2026-02-28 10:05:36.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:36 np0005634017 nova_compute[243452]: 2026-02-28 10:05:36.851 243456 WARNING nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it#033[00m
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.785531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137785649, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 694, "num_deletes": 258, "total_data_size": 720841, "memory_usage": 735144, "flush_reason": "Manual Compaction"}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 28 05:05:37 np0005634017 nova_compute[243452]: 2026-02-28 10:05:37.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137856478, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 712485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23188, "largest_seqno": 23881, "table_properties": {"data_size": 708804, "index_size": 1461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8202, "raw_average_key_size": 18, "raw_value_size": 701365, "raw_average_value_size": 1601, "num_data_blocks": 65, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273101, "oldest_key_time": 1772273101, "file_creation_time": 1772273137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 71031 microseconds, and 3090 cpu microseconds.
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.856588) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 712485 bytes OK
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.856633) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.873984) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.874086) EVENT_LOG_v1 {"time_micros": 1772273137874049, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.874129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 717129, prev total WAL file size 717129, number of live WAL files 2.
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.875605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(695KB)], [53(8699KB)]
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137875693, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9620934, "oldest_snapshot_seqno": -1}
Feb 28 05:05:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4809 keys, 9533029 bytes, temperature: kUnknown
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137934371, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9533029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9497755, "index_size": 22172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12037, "raw_key_size": 119584, "raw_average_key_size": 24, "raw_value_size": 9408033, "raw_average_value_size": 1956, "num_data_blocks": 924, "num_entries": 4809, "num_filter_entries": 4809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.934757) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9533029 bytes
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.936256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 162.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(26.9) write-amplify(13.4) OK, records in: 5335, records dropped: 526 output_compression: NoCompression
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.936288) EVENT_LOG_v1 {"time_micros": 1772273137936273, "job": 28, "event": "compaction_finished", "compaction_time_micros": 58805, "compaction_time_cpu_micros": 31974, "output_level": 6, "num_output_files": 1, "total_output_size": 9533029, "num_input_records": 5335, "num_output_records": 4809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137936598, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137938394, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.875400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:37 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.176 243456 DEBUG nova.compute.manager [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.177 243456 DEBUG nova.compute.manager [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-5de28374-dbe4-4c8d-9f73-047a368cc895. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.177 243456 DEBUG oslo_concurrency.lockutils [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.317 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.337 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.338 243456 DEBUG oslo_concurrency.lockutils [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.339 243456 DEBUG nova.network.neutron [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 5de28374-dbe4-4c8d-9f73-047a368cc895 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.343 243456 DEBUG nova.virt.libvirt.vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.344 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.345 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.345 243456 DEBUG os_vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.349 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5de28374-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.350 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5de28374-db, col_values=(('external_ids', {'iface-id': '5de28374-dbe4-4c8d-9f73-047a368cc895', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:8b:a2', 'vm-uuid': '3a118849-0d0a-4196-9bdd-65333da2e8f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:38 np0005634017 NetworkManager[49805]: <info>  [1772273138.3524] manager: (tap5de28374-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.357 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.360 243456 INFO os_vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db')#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.361 243456 DEBUG nova.virt.libvirt.vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.361 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.362 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.364 243456 DEBUG nova.virt.libvirt.guest [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:fc:8b:a2"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <target dev="tap5de28374-db"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:05:38 np0005634017 nova_compute[243452]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 28 05:05:38 np0005634017 NetworkManager[49805]: <info>  [1772273138.3773] manager: (tap5de28374-db): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Feb 28 05:05:38 np0005634017 kernel: tap5de28374-db: entered promiscuous mode
Feb 28 05:05:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:38Z|00242|binding|INFO|Claiming lport 5de28374-dbe4-4c8d-9f73-047a368cc895 for this chassis.
Feb 28 05:05:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:38Z|00243|binding|INFO|5de28374-dbe4-4c8d-9f73-047a368cc895: Claiming fa:16:3e:fc:8b:a2 10.100.0.6
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.391 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:a2 10.100.0.6'], port_security=['fa:16:3e:fc:8b:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5de28374-dbe4-4c8d-9f73-047a368cc895) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.393 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5de28374-dbe4-4c8d-9f73-047a368cc895 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.394 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f#033[00m
Feb 28 05:05:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:38Z|00244|binding|INFO|Setting lport 5de28374-dbe4-4c8d-9f73-047a368cc895 ovn-installed in OVS
Feb 28 05:05:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:38Z|00245|binding|INFO|Setting lport 5de28374-dbe4-4c8d-9f73-047a368cc895 up in Southbound
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.402 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 systemd-udevd[274732]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 NetworkManager[49805]: <info>  [1772273138.4162] device (tap5de28374-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:05:38 np0005634017 NetworkManager[49805]: <info>  [1772273138.4169] device (tap5de28374-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0fd40a-6f0d-4982-8a87-0229ea815d16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.442 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8c905e-378b-438d-a766-f5618bf8c828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.448 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f0a8d4-bab7-42fc-8863-64f9bdaf8a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.454 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.455 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.455 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:07:3c:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.455 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:fc:8b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.477 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e2c29c-a1e1-48ab-9587-d92dc625e93d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.483 243456 DEBUG nova.virt.libvirt.guest [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:05:38</nova:creationTime>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 05:05:38 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    <nova:port uuid="5de28374-dbe4-4c8d-9f73-047a368cc895">
Feb 28 05:05:38 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:05:38 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:05:38 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:05:38 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.493 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd366eb9-05e3-4cbb-8aa8-30b23134ae35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274742, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.510 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7812a1f2-170f-464e-9eb3-879eb7193ebc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274743, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274743, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.512 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.517 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.521 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:05:38 np0005634017 nova_compute[243452]: 2026-02-28 10:05:38.531 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.099 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.099 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.100 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.100 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.100 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.102 243456 INFO nova.compute.manager [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Terminating instance#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.103 243456 DEBUG nova.compute.manager [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.112 243456 INFO nova.virt.libvirt.driver [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance destroyed successfully.#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.113 243456 DEBUG nova.objects.instance [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.131 243456 DEBUG nova.virt.libvirt.vif [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1645656228',display_name='tempest-ImagesTestJSON-server-1645656228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1645656228',id=35,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-649wx7we',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:34Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=abcde488-a508-4239-9f40-28af252a1cd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.132 243456 DEBUG nova.network.os_vif_util [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.132 243456 DEBUG nova.network.os_vif_util [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.133 243456 DEBUG os_vif [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.135 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd21b0d77-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.141 243456 INFO os_vif [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3')#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.473 243456 INFO nova.virt.libvirt.driver [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deleting instance files /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3_del#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.474 243456 INFO nova.virt.libvirt.driver [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deletion of /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3_del complete#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.538 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.538 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.539 243456 DEBUG nova.objects.instance [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.564 243456 INFO nova.compute.manager [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.564 243456 DEBUG oslo.service.loopingcall [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.566 243456 DEBUG nova.compute.manager [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.566 243456 DEBUG nova.network.neutron [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.647 243456 DEBUG nova.network.neutron [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 5de28374-dbe4-4c8d-9f73-047a368cc895. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.648 243456 DEBUG nova.network.neutron [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.668 243456 DEBUG oslo_concurrency.lockutils [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:05:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 292 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 122 op/s
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.936 243456 DEBUG nova.objects.instance [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:39 np0005634017 nova_compute[243452]: 2026-02-28 10:05:39.957 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.063 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.063 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.091 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.148 243456 DEBUG nova.policy [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.175 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.176 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.185 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.185 243456 INFO nova.compute.claims [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.308 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001019492375708623 of space, bias 1.0, pg target 0.30584771271258687 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0026436659199438376 of space, bias 1.0, pg target 0.7930997759831513 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.943870453518754e-07 of space, bias 4.0, pg target 0.0009532644544222505 quantized to 16 (current 16)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:05:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:05:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196233387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.808 243456 DEBUG nova.network.neutron [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.827 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.830 243456 INFO nova.compute.manager [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 1.26 seconds to deallocate network for instance.#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.839 243456 DEBUG nova.compute.provider_tree [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:40Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:8b:a2 10.100.0.6
Feb 28 05:05:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:05:40Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:8b:a2 10.100.0.6
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.853 243456 DEBUG nova.scheduler.client.report [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.882 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.883 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.884 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.887 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.936 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.937 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.955 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.977 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:05:40 np0005634017 nova_compute[243452]: 2026-02-28 10:05:40.983 243456 DEBUG oslo_concurrency.processutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.084 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.086 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.087 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Creating image(s)#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.109 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.138 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.164 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.169 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.198 243456 DEBUG nova.policy [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'af24a6d4c4c246cf80645675cc85b3c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fc066bf883d477dab2475efe229ee9f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.231 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.232 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.233 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.233 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.259 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.264 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.506 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:05:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978644247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.573 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273126.5656288, abcde488-a508-4239-9f40-28af252a1cd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.573 243456 INFO nova.compute.manager [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.576 243456 DEBUG oslo_concurrency.processutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.582 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] resizing rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.611 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully created port: 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.615 243456 DEBUG nova.compute.manager [None req-1e9027c7-9570-4e5b-b796-738cc41ba5cc - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.618 243456 DEBUG nova.compute.provider_tree [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.659 243456 DEBUG nova.scheduler.client.report [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.668 243456 DEBUG nova.objects.instance [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lazy-loading 'migration_context' on Instance uuid b320ad06-d6fa-470f-8bd8-1ecd6a00b33a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.683 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Ensure instance console log exists: /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.687 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.715 243456 INFO nova.scheduler.client.report [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance abcde488-a508-4239-9f40-28af252a1cd3#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.791 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Successfully created port: c965ae98-2866-43c7-bd75-b717acf060bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:05:41 np0005634017 nova_compute[243452]: 2026-02-28 10:05:41.796 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 257 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.067 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.067 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.068 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.068 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.069 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.069 243456 WARNING nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.069 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.070 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.070 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.071 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.071 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:05:42 np0005634017 nova_compute[243452]: 2026-02-28 10:05:42.071 243456 WARNING nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:05:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:05:43 np0005634017 nova_compute[243452]: 2026-02-28 10:05:43.077 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:43 np0005634017 nova_compute[243452]: 2026-02-28 10:05:43.082 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Successfully updated port: c965ae98-2866-43c7-bd75-b717acf060bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:05:43 np0005634017 nova_compute[243452]: 2026-02-28 10:05:43.096 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:05:43 np0005634017 nova_compute[243452]: 2026-02-28 10:05:43.096 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:05:43 np0005634017 nova_compute[243452]: 2026-02-28 10:05:43.097 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:08:55 np0005634017 nova_compute[243452]: 2026-02-28 10:08:55.876 243456 DEBUG oslo_concurrency.lockutils [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:08:55 np0005634017 nova_compute[243452]: 2026-02-28 10:08:55.956 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:08:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Feb 28 05:08:56 np0005634017 rsyslogd[1017]: imjournal: 13558 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 05:08:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:08:57.846 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:08:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:08:57.847 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:08:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:08:57.847 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:08:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:08:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Feb 28 05:08:58 np0005634017 nova_compute[243452]: 2026-02-28 10:08:58.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:08:58 np0005634017 nova_compute[243452]: 2026-02-28 10:08:58.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:08:59 np0005634017 podman[287186]: 2026-02-28 10:08:59.135787915 +0000 UTC m=+0.066297873 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 05:08:59 np0005634017 podman[287185]: 2026-02-28 10:08:59.168382081 +0000 UTC m=+0.102715656 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 28 05:08:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:08:59.961 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.051 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.076 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Releasing lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.077 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance network_info: |[{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.078 243456 DEBUG oslo_concurrency.lockutils [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.080 243456 DEBUG nova.network.neutron [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Refreshing network info cache for port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.086 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start _get_guest_xml network_info=[{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.094 243456 WARNING nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.100 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.100 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.111 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.112 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.113 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.113 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.114 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.115 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.115 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.116 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.116 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.116 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.117 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.117 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.118 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.118 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.123 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:09:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:09:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161285979' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.682 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.721 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.726 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:00 np0005634017 nova_compute[243452]: 2026-02-28 10:09:00.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320021054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.328 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.331 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-11633461
9-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.331 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.333 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.334 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.335 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.336 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.338 243456 DEBUG nova.objects.instance [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e81c007-3bdc-4baf-b310-775c3122cd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.358 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <uuid>6e81c007-3bdc-4baf-b310-775c3122cd14</uuid>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <name>instance-00000033</name>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestMultiNic-server-2003718246</nova:name>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:00</nova:creationTime>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:user uuid="35aa1fe862a2437dbcc12fc7b0acbf91">tempest-ServersTestMultiNic-116334619-project-member</nova:user>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:project uuid="30cb5e2d14fb4fb7a9d37cf231549329">tempest-ServersTestMultiNic-116334619</nova:project>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:port uuid="2913b56a-9cff-4697-89c5-e6e3553b8002">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.207" ipVersion="4"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <nova:port uuid="cf3648de-8d18-4aa3-9cb9-b7a38ba349c7">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.1.62" ipVersion="4"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <entry name="serial">6e81c007-3bdc-4baf-b310-775c3122cd14</entry>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <entry name="uuid">6e81c007-3bdc-4baf-b310-775c3122cd14</entry>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6e81c007-3bdc-4baf-b310-775c3122cd14_disk">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:cb:85:32"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <target dev="tap2913b56a-9c"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:ea:a4:97"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <target dev="tapcf3648de-8d"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/console.log" append="off"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:01 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:01 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:01 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:01 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.360 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Preparing to wait for external event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.361 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.361 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.362 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.362 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Preparing to wait for external event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.362 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.363 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.363 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.365 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.365 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.366 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.367 243456 DEBUG os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.368 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.369 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.374 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2913b56a-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.375 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2913b56a-9c, col_values=(('external_ids', {'iface-id': '2913b56a-9cff-4697-89c5-e6e3553b8002', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:85:32', 'vm-uuid': '6e81c007-3bdc-4baf-b310-775c3122cd14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 NetworkManager[49805]: <info>  [1772273341.3792] manager: (tap2913b56a-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.384 243456 INFO os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c')#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.385 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNi
c-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.385 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.386 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.387 243456 DEBUG os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.387 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.388 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.391 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3648de-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.392 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf3648de-8d, col_values=(('external_ids', {'iface-id': 'cf3648de-8d18-4aa3-9cb9-b7a38ba349c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:a4:97', 'vm-uuid': '6e81c007-3bdc-4baf-b310-775c3122cd14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:01 np0005634017 NetworkManager[49805]: <info>  [1772273341.3946] manager: (tapcf3648de-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.401 243456 INFO os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d')#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.604 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.605 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.606 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:cb:85:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.606 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:ea:a4:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.607 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Using config drive#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.636 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.943 243456 DEBUG nova.network.neutron [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updated VIF entry in instance network info cache for port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:09:01 np0005634017 nova_compute[243452]: 2026-02-28 10:09:01.944 243456 DEBUG nova.network.neutron [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.012 243456 DEBUG oslo_concurrency.lockutils [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.558 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Creating config drive at /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config#033[00m
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.564 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3g6je2_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.711 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3g6je2_4" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.745 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.749 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.911 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.912 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deleting local config drive /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config because it was imported into RBD.#033[00m
Feb 28 05:09:02 np0005634017 kernel: tap2913b56a-9c: entered promiscuous mode
Feb 28 05:09:02 np0005634017 NetworkManager[49805]: <info>  [1772273342.9714] manager: (tap2913b56a-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Feb 28 05:09:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:02Z|00416|binding|INFO|Claiming lport 2913b56a-9cff-4697-89c5-e6e3553b8002 for this chassis.
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:02Z|00417|binding|INFO|2913b56a-9cff-4697-89c5-e6e3553b8002: Claiming fa:16:3e:cb:85:32 10.100.0.207
Feb 28 05:09:02 np0005634017 NetworkManager[49805]: <info>  [1772273342.9853] manager: (tapcf3648de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Feb 28 05:09:02 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:02 np0005634017 systemd-udevd[287370]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:02 np0005634017 kernel: tapcf3648de-8d: entered promiscuous mode
Feb 28 05:09:02 np0005634017 systemd-udevd[287371]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:02.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00418|if_status|INFO|Not updating pb chassis for cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 now as sb is readonly
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.004 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.0130] device (tap2913b56a-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.0150] device (tap2913b56a-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.0162] device (tapcf3648de-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.0170] device (tapcf3648de-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:03 np0005634017 systemd-machined[209480]: New machine qemu-57-instance-00000033.
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00419|binding|INFO|Claiming lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for this chassis.
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00420|binding|INFO|cf3648de-8d18-4aa3-9cb9-b7a38ba349c7: Claiming fa:16:3e:ea:a4:97 10.100.1.62
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.024 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:85:32 10.100.0.207'], port_security=['fa:16:3e:cb:85:32 10.100.0.207'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.207/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03dc6239-598e-4b30-827f-ab655e778931, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2913b56a-9cff-4697-89c5-e6e3553b8002) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.025 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2913b56a-9cff-4697-89c5-e6e3553b8002 in datapath ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 bound to our chassis#033[00m
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00421|binding|INFO|Setting lport 2913b56a-9cff-4697-89c5-e6e3553b8002 ovn-installed in OVS
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00422|binding|INFO|Setting lport 2913b56a-9cff-4697-89c5-e6e3553b8002 up in Southbound
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.026 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee9d9c00-c168-42ad-9b0a-3feb4cee5b51#033[00m
Feb 28 05:09:03 np0005634017 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.038 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4ab0c5-e8f6-4814-86a7-209742463f58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.038 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee9d9c00-c1 in ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.040 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee9d9c00-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a98391ab-dccc-4655-a51a-6b925efbcddf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11a0ce7d-5be9-4219-a41a-522689be8960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00423|binding|INFO|Setting lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 ovn-installed in OVS
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.055 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f9676f-6c62-4276-9034-22d80b0248fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.068 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8524699c-70fe-493b-8123-539a772dac4a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.091 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61889b51-a190-47be-b262-b71e996450bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.0987] manager: (tapee9d9c00-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.098 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[380bd802-538f-4648-9f8b-ce3031b2ed15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.126 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9d7fea76-435f-464d-888c-ab2f82b78231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.129 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[011b69c8-bf24-4d94-8497-11d73dd16a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.1504] device (tapee9d9c00-c0): carrier: link connected
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.157 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0a0eaf-e87d-4ff3-a537-4d5270fe11b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.174 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12d680b2-911d-45aa-99f3-5e2fc64a0199]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee9d9c00-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:05:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485043, 'reachable_time': 35618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287407, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00424|binding|INFO|Setting lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 up in Southbound
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.187 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:a4:97 10.100.1.62'], port_security=['fa:16:3e:ea:a4:97 10.100.1.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.62/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=641263bc-b0b1-40e0-a9a7-6631a5b53fe3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48eb89c5-ebf9-43bf-89a4-78fb67fdddc8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:56a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485043, 'tstamp': 485043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287408, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.202 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[526cdef8-7328-4b31-b2c4-ebb165e63090]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee9d9c00-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:05:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485043, 'reachable_time': 35618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287409, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15ae1078-17f8-4331-a0b1-75652ab0510b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.292 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273328.271685, bbbba0d8-fff9-4f59-ab31-54ff03b71390 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.292 243456 INFO nova.compute.manager [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.302 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ddc0e-9eb4-4d33-b3df-1f9663d3adf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee9d9c00-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.305 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee9d9c00-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:03 np0005634017 kernel: tapee9d9c00-c0: entered promiscuous mode
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.307 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.3087] manager: (tapee9d9c00-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee9d9c00-c0, col_values=(('external_ids', {'iface-id': '0492ad6c-9098-45c5-863a-7d90483ac932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:03Z|00425|binding|INFO|Releasing lport 0492ad6c-9098-45c5-863a-7d90483ac932 from this chassis (sb_readonly=0)
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.329 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[538227cf-ee48-499f-9183-e665c9f6356a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.332 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.pid.haproxy
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID ee9d9c00-c168-42ad-9b0a-3feb4cee5b51
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.335 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'env', 'PROCESS_TAG=haproxy-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.558 243456 DEBUG nova.compute.manager [None req-c7d49cd4-1e50-4c4c-9c47-50ca8b7f8f0e - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:03 np0005634017 nova_compute[243452]: 2026-02-28 10:09:03.625 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:03 np0005634017 podman[287441]: 2026-02-28 10:09:03.679115905 +0000 UTC m=+0.050577881 container create d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:09:03 np0005634017 systemd[1]: Started libpod-conmon-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709.scope.
Feb 28 05:09:03 np0005634017 podman[287441]: 2026-02-28 10:09:03.650612105 +0000 UTC m=+0.022074101 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fadf675fa9eecd4223d28e0b263c5d180910a3ada48d4b575cb25eeb657e56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:03 np0005634017 podman[287441]: 2026-02-28 10:09:03.767392965 +0000 UTC m=+0.138854951 container init d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:03 np0005634017 podman[287441]: 2026-02-28 10:09:03.772554009 +0000 UTC m=+0.144015985 container start d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:03 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : New worker (287463) forked
Feb 28 05:09:03 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : Loading success.
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.841 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 in datapath 6393174b-f4f0-476b-a7a9-02f4c8a0425f unbound from our chassis#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.845 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6393174b-f4f0-476b-a7a9-02f4c8a0425f#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.854 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07a5bb19-6420-41d8-9b40-18371c81fb71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.855 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6393174b-f1 in ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.857 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6393174b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fbddff-a631-4f86-a498-bc6ad3f321ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.859 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d4e4f9-8a09-41c5-ac64-74f72307b273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.870 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[54a13eb7-a1b6-4808-899c-9f56975ee766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.882 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b340c61b-06a9-43f6-aa4a-00e0a1032224]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.906 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[774a7e69-fd11-4bb8-be90-e9b5747969cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 systemd-udevd[287397]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.9181] manager: (tap6393174b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.915 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8bf05d-9735-4173-9877-a7adfde9e142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.954 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1d1331-f08d-4d1c-b67a-14ade0aad2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.958 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1256b6b5-cbb3-4fb0-82d7-a8af582508d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:03 np0005634017 NetworkManager[49805]: <info>  [1772273343.9854] device (tap6393174b-f0): carrier: link connected
Feb 28 05:09:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.993 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[533bda35-55b7-4993-89f2-28d9073830b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 823 KiB/s wr, 12 op/s
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.016 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4935cca4-628b-4684-8f2a-fef00f309925]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6393174b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:25:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485127, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287493, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.039 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bd94c1-42b7-48e5-bb4d-5276abd4e4ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:25df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485127, 'tstamp': 485127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287508, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e80e40a-4a04-4851-84ba-46fd374b7350]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6393174b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:25:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485127, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287517, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.092 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d161cb6d-a46d-4cab-a23f-f1d368ecb775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.153 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[439bd937-aee2-47af-bdd7-a2176fc9860b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.154 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6393174b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.155 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.155 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6393174b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:04 np0005634017 kernel: tap6393174b-f0: entered promiscuous mode
Feb 28 05:09:04 np0005634017 NetworkManager[49805]: <info>  [1772273344.1586] manager: (tap6393174b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.163 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6393174b-f0, col_values=(('external_ids', {'iface-id': 'ae3d677c-b1f9-4238-be1d-1c968c276024'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:04Z|00426|binding|INFO|Releasing lport ae3d677c-b1f9-4238-be1d-1c968c276024 from this chassis (sb_readonly=0)
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.164 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273344.1633954, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.166 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.166 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6393174b-f4f0-476b-a7a9-02f4c8a0425f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6393174b-f4f0-476b-a7a9-02f4c8a0425f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.167 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[67f66493-de63-4cb1-89b1-baef20b164b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.168 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-6393174b-f4f0-476b-a7a9-02f4c8a0425f
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/6393174b-f4f0-476b-a7a9-02f4c8a0425f.pid.haproxy
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 6393174b-f4f0-476b-a7a9-02f4c8a0425f
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.168 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'env', 'PROCESS_TAG=haproxy-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6393174b-f4f0-476b-a7a9-02f4c8a0425f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.185 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.190 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273344.1636717, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.190 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.213 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.218 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:04 np0005634017 nova_compute[243452]: 2026-02-28 10:09:04.336 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:04 np0005634017 podman[287559]: 2026-02-28 10:09:04.577153446 +0000 UTC m=+0.063918976 container create 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:04 np0005634017 systemd[1]: Started libpod-conmon-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope.
Feb 28 05:09:04 np0005634017 podman[287559]: 2026-02-28 10:09:04.54632385 +0000 UTC m=+0.033089430 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28076557dba7d747e9b7cbfa4fae2599e2a1294bb9f8c658c1b2eee3c50b0bf7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:04 np0005634017 podman[287559]: 2026-02-28 10:09:04.663978014 +0000 UTC m=+0.150743504 container init 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:04 np0005634017 podman[287559]: 2026-02-28 10:09:04.674255693 +0000 UTC m=+0.161021173 container start 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:09:04 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : New worker (287580) forked
Feb 28 05:09:04 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : Loading success.
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.149 243456 DEBUG nova.compute.manager [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.150 243456 DEBUG oslo_concurrency.lockutils [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.151 243456 DEBUG oslo_concurrency.lockutils [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.151 243456 DEBUG oslo_concurrency.lockutils [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.151 243456 DEBUG nova.compute.manager [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Processing event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.245 243456 DEBUG nova.compute.manager [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG oslo_concurrency.lockutils [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG oslo_concurrency.lockutils [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG oslo_concurrency.lockutils [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG nova.compute.manager [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Processing event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.247 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.251 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273345.2511153, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.252 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.255 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.259 243456 INFO nova.virt.libvirt.driver [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance spawned successfully.#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.260 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.294 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.304 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.310 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.311 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.312 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.313 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.313 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.314 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.330 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.448 243456 INFO nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 16.29 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.449 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.519 243456 INFO nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 17.86 seconds to build instance.#033[00m
Feb 28 05:09:05 np0005634017 nova_compute[243452]: 2026-02-28 10:09:05.541 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 772 KiB/s wr, 21 op/s
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.184 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.184 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.204 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.294 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.295 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.304 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.304 243456 INFO nova.compute.claims [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.397 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.415 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.415 243456 DEBUG nova.compute.provider_tree [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.435 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.464 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:09:06 np0005634017 nova_compute[243452]: 2026-02-28 10:09:06.522 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888599738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.111 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.119 243456 DEBUG nova.compute.provider_tree [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.136 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.161 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.162 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.214 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.214 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.215 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.215 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.216 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.217 243456 INFO nova.compute.manager [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Terminating instance#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.218 243456 DEBUG nova.compute.manager [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.218 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.218 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.241 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:09:07 np0005634017 kernel: tap2913b56a-9c (unregistering): left promiscuous mode
Feb 28 05:09:07 np0005634017 NetworkManager[49805]: <info>  [1772273347.2592] device (tap2913b56a-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:09:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:07Z|00427|binding|INFO|Releasing lport 2913b56a-9cff-4697-89c5-e6e3553b8002 from this chassis (sb_readonly=0)
Feb 28 05:09:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:07Z|00428|binding|INFO|Setting lport 2913b56a-9cff-4697-89c5-e6e3553b8002 down in Southbound
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.269 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:09:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:07Z|00429|binding|INFO|Removing iface tap2913b56a-9c ovn-installed in OVS
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.277 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:85:32 10.100.0.207'], port_security=['fa:16:3e:cb:85:32 10.100.0.207'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.207/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03dc6239-598e-4b30-827f-ab655e778931, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2913b56a-9cff-4697-89c5-e6e3553b8002) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:07 np0005634017 kernel: tapcf3648de-8d (unregistering): left promiscuous mode
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.279 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2913b56a-9cff-4697-89c5-e6e3553b8002 in datapath ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 unbound from our chassis#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.281 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:09:07 np0005634017 NetworkManager[49805]: <info>  [1772273347.2823] device (tapcf3648de-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.283 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a016d9-798d-4bcc-8927-db90a299cc57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.283 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 namespace which is not needed anymore#033[00m
Feb 28 05:09:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:07Z|00430|binding|INFO|Releasing lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 from this chassis (sb_readonly=0)
Feb 28 05:09:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:07Z|00431|binding|INFO|Setting lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 down in Southbound
Feb 28 05:09:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:07Z|00432|binding|INFO|Removing iface tapcf3648de-8d ovn-installed in OVS
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.294 243456 DEBUG nova.compute.manager [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG oslo_concurrency.lockutils [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG oslo_concurrency.lockutils [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG oslo_concurrency.lockutils [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG nova.compute.manager [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 WARNING nova.compute.manager [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.300 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:a4:97 10.100.1.62'], port_security=['fa:16:3e:ea:a4:97 10.100.1.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.62/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=641263bc-b0b1-40e0-a9a7-6631a5b53fe3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.301 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Feb 28 05:09:07 np0005634017 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 3.156s CPU time.
Feb 28 05:09:07 np0005634017 systemd-machined[209480]: Machine qemu-57-instance-00000033 terminated.
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.366 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.369 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.370 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Creating image(s)#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.396 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : haproxy version is 2.8.14-c23fe91
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : path to executable is /usr/sbin/haproxy
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [WARNING]  (287461) : Exiting Master process...
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [ALERT]    (287461) : Current worker (287463) exited with code 143 (Terminated)
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [WARNING]  (287461) : All workers exited. Exiting... (0)
Feb 28 05:09:07 np0005634017 systemd[1]: libpod-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709.scope: Deactivated successfully.
Feb 28 05:09:07 np0005634017 podman[287639]: 2026-02-28 10:09:07.415431539 +0000 UTC m=+0.047716740 container died d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.422 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:07 np0005634017 NetworkManager[49805]: <info>  [1772273347.4544] manager: (tapcf3648de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Feb 28 05:09:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-45fadf675fa9eecd4223d28e0b263c5d180910a3ada48d4b575cb25eeb657e56-merged.mount: Deactivated successfully.
Feb 28 05:09:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709-userdata-shm.mount: Deactivated successfully.
Feb 28 05:09:07 np0005634017 podman[287639]: 2026-02-28 10:09:07.471873554 +0000 UTC m=+0.104158755 container cleanup d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 05:09:07 np0005634017 systemd[1]: libpod-conmon-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709.scope: Deactivated successfully.
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.479 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.487 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.523 243456 DEBUG nova.policy [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.530 243456 DEBUG nova.compute.manager [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.530 243456 DEBUG oslo_concurrency.lockutils [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.530 243456 DEBUG oslo_concurrency.lockutils [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.531 243456 DEBUG oslo_concurrency.lockutils [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.531 243456 DEBUG nova.compute.manager [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.531 243456 WARNING nova.compute.manager [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:09:07 np0005634017 podman[287747]: 2026-02-28 10:09:07.535855861 +0000 UTC m=+0.041040074 container remove d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.536 243456 INFO nova.virt.libvirt.driver [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance destroyed successfully.#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.536 243456 DEBUG nova.objects.instance [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'resources' on Instance uuid 6e81c007-3bdc-4baf-b310-775c3122cd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d2da008a-d4f6-48c5-9c06-7738cb477ffe]: (4, ('Sat Feb 28 10:09:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 (d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709)\nd888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709\nSat Feb 28 10:09:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 (d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709)\nd888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.544 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f682762-bb97-4a12-9d27-ef82274f9515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.545 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee9d9c00-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:07 np0005634017 kernel: tapee9d9c00-c0: left promiscuous mode
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.547 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.552 243456 DEBUG nova.virt.libvirt.vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:05Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.552 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.553 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.553 243456 DEBUG os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.556 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2913b56a-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.560 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.561 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.561 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.561 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac776044-5dc6-4669-b934-674003a89ebf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04e07c7c-22a8-4a84-8f03-b544e28026df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.577 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7bb19-9eca-40ac-bc8d-3ed891cd9922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.583 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.586 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[28efb751-a1c0-4c7c-8845-1571471852c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485037, 'reachable_time': 19249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287784, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.593 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.593 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3a8f16-91e3-4480-a7fb-d9ec56a4c419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.594 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 in datapath 6393174b-f4f0-476b-a7a9-02f4c8a0425f unbound from our chassis#033[00m
Feb 28 05:09:07 np0005634017 systemd[1]: run-netns-ovnmeta\x2dee9d9c00\x2dc168\x2d42ad\x2d9b0a\x2d3feb4cee5b51.mount: Deactivated successfully.
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.596 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6393174b-f4f0-476b-a7a9-02f4c8a0425f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.596 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88417aac-4db3-4f6a-b853-381c4928a70a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.597 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f namespace which is not needed anymore#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.614 243456 INFO os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c')#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.615 243456 DEBUG nova.virt.libvirt.vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:05Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.615 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.616 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.616 243456 DEBUG os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.618 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3648de-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.623 243456 INFO os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d')#033[00m
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : haproxy version is 2.8.14-c23fe91
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : path to executable is /usr/sbin/haproxy
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [WARNING]  (287578) : Exiting Master process...
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [ALERT]    (287578) : Current worker (287580) exited with code 143 (Terminated)
Feb 28 05:09:07 np0005634017 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [WARNING]  (287578) : All workers exited. Exiting... (0)
Feb 28 05:09:07 np0005634017 systemd[1]: libpod-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope: Deactivated successfully.
Feb 28 05:09:07 np0005634017 conmon[287574]: conmon 0f754d2119676e96a4ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope/container/memory.events
Feb 28 05:09:07 np0005634017 podman[287838]: 2026-02-28 10:09:07.727819232 +0000 UTC m=+0.053351979 container died 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145-userdata-shm.mount: Deactivated successfully.
Feb 28 05:09:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-28076557dba7d747e9b7cbfa4fae2599e2a1294bb9f8c658c1b2eee3c50b0bf7-merged.mount: Deactivated successfully.
Feb 28 05:09:07 np0005634017 podman[287838]: 2026-02-28 10:09:07.791952513 +0000 UTC m=+0.117485250 container cleanup 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:09:07 np0005634017 systemd[1]: libpod-conmon-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope: Deactivated successfully.
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.808 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:07 np0005634017 podman[287870]: 2026-02-28 10:09:07.849425958 +0000 UTC m=+0.042501225 container remove 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.855 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6deeb59a-37ff-445d-884f-46131d04a930]: (4, ('Sat Feb 28 10:09:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f (0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145)\n0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145\nSat Feb 28 10:09:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f (0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145)\n0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.857 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb12d74c-ce6c-4ce3-b106-ecb7e9238cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.858 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6393174b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:07 np0005634017 kernel: tap6393174b-f0: left promiscuous mode
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.874 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[405016b3-161c-45e8-be52-4e6f364ec571]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[87f1a413-49f7-4597-a4f9-8e7605548dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.891 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab8e1f7-0ca2-43dc-95c5-102cfab70e15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.894 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.905 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfbd7aa-1f8f-4ae8-9165-b2ba84decee8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485118, 'reachable_time': 23527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287922, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.908 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:09:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.909 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[14c4b276-7e38-4a29-a7d7-922e01d1477d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:07 np0005634017 systemd[1]: run-netns-ovnmeta\x2d6393174b\x2df4f0\x2d476b\x2da7a9\x2d02f4c8a0425f.mount: Deactivated successfully.
Feb 28 05:09:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.977 243456 INFO nova.virt.libvirt.driver [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deleting instance files /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14_del#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.978 243456 INFO nova.virt.libvirt.driver [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deletion of /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14_del complete#033[00m
Feb 28 05:09:07 np0005634017 nova_compute[243452]: 2026-02-28 10:09:07.984 243456 DEBUG nova.objects.instance [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 51a0e59a-81d0-4f05-bb13-5ca025288da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.005 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.005 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Ensure instance console log exists: /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 24 op/s
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.033 243456 INFO nova.compute.manager [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.034 243456 DEBUG oslo.service.loopingcall [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.035 243456 DEBUG nova.compute.manager [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.036 243456 DEBUG nova.network.neutron [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:09:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb 28 05:09:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb 28 05:09:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:08 np0005634017 nova_compute[243452]: 2026-02-28 10:09:08.641 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Successfully created port: d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.874 243456 DEBUG nova.network.neutron [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.905 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-unplugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.905 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-unplugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-unplugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.908 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.908 243456 WARNING nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.914 243456 INFO nova.compute.manager [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 1.88 seconds to deallocate network for instance.#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.973 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:09 np0005634017 nova_compute[243452]: 2026-02-28 10:09:09.974 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 185 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 280 KiB/s wr, 95 op/s
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.143 243456 DEBUG oslo_concurrency.processutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.209 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-unplugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.210 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.210 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.210 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-unplugged-2913b56a-9cff-4697-89c5-e6e3553b8002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 WARNING nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-unplugged-2913b56a-9cff-4697-89c5-e6e3553b8002 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 WARNING nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-deleted-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.538 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Successfully updated port: d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.554 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.554 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.555 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:09:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405360413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.739 243456 DEBUG oslo_concurrency.processutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.746 243456 DEBUG nova.compute.provider_tree [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.772 243456 DEBUG nova.scheduler.client.report [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.837 243456 INFO nova.scheduler.client.report [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Deleted allocations for instance 6e81c007-3bdc-4baf-b310-775c3122cd14#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.930 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:10 np0005634017 nova_compute[243452]: 2026-02-28 10:09:10.933 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:09:11 np0005634017 nova_compute[243452]: 2026-02-28 10:09:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 189 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 937 KiB/s wr, 147 op/s
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.138 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updating instance_info_cache with network_info: [{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.157 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.157 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance network_info: |[{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.160 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start _get_guest_xml network_info=[{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.167 243456 WARNING nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.172 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.173 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.176 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.176 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.177 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.177 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.179 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.179 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.179 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.180 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.180 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.180 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.183 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb 28 05:09:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb 28 05:09:12 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.421 243456 DEBUG nova.compute.manager [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-deleted-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.422 243456 DEBUG nova.compute.manager [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-changed-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.423 243456 DEBUG nova.compute.manager [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Refreshing instance network info cache due to event network-changed-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.424 243456 DEBUG oslo_concurrency.lockutils [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.424 243456 DEBUG oslo_concurrency.lockutils [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.425 243456 DEBUG nova.network.neutron [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Refreshing network info cache for port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840125212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.723 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.752 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.758 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:12 np0005634017 nova_compute[243452]: 2026-02-28 10:09:12.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982194424' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.343 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.345 243456 DEBUG nova.virt.libvirt.vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1328994048',display_name='tempest-DeleteServersTestJSON-server-1328994048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1328994048',id=52,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-eeazn004',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSO
N-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=51a0e59a-81d0-4f05-bb13-5ca025288da2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.346 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.348 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.349 243456 DEBUG nova.objects.instance [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51a0e59a-81d0-4f05-bb13-5ca025288da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.367 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <uuid>51a0e59a-81d0-4f05-bb13-5ca025288da2</uuid>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <name>instance-00000034</name>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersTestJSON-server-1328994048</nova:name>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:12</nova:creationTime>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <nova:port uuid="d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <entry name="serial">51a0e59a-81d0-4f05-bb13-5ca025288da2</entry>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <entry name="uuid">51a0e59a-81d0-4f05-bb13-5ca025288da2</entry>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/51a0e59a-81d0-4f05-bb13-5ca025288da2_disk">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:2d:11:10"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <target dev="tapd85f2b17-bc"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/console.log" append="off"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:13 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:13 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:13 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:13 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.368 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Preparing to wait for external event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.368 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.369 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.369 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.370 243456 DEBUG nova.virt.libvirt.vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1328994048',display_name='tempest-DeleteServersTestJSON-server-1328994048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1328994048',id=52,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-eeazn004',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=51a0e59a-81d0-4f05-bb13-5ca025288da2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.370 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.371 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.371 243456 DEBUG os_vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.372 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.373 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.376 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.376 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd85f2b17-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.377 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd85f2b17-bc, col_values=(('external_ids', {'iface-id': 'd85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:11:10', 'vm-uuid': '51a0e59a-81d0-4f05-bb13-5ca025288da2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:13 np0005634017 NetworkManager[49805]: <info>  [1772273353.3795] manager: (tapd85f2b17-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.388 243456 INFO os_vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc')#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.454 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.455 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.455 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:2d:11:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.456 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Using config drive#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.478 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.844 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Creating config drive at /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.849 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpggyudb5f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:13 np0005634017 nova_compute[243452]: 2026-02-28 10:09:13.983 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpggyudb5f" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 200 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.017 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.023 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.113 243456 DEBUG nova.network.neutron [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updated VIF entry in instance network info cache for port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.114 243456 DEBUG nova.network.neutron [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updating instance_info_cache with network_info: [{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.130 243456 DEBUG oslo_concurrency.lockutils [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.209 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.210 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deleting local config drive /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config because it was imported into RBD.#033[00m
Feb 28 05:09:14 np0005634017 kernel: tapd85f2b17-bc: entered promiscuous mode
Feb 28 05:09:14 np0005634017 NetworkManager[49805]: <info>  [1772273354.2616] manager: (tapd85f2b17-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Feb 28 05:09:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:14Z|00433|binding|INFO|Claiming lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for this chassis.
Feb 28 05:09:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:14Z|00434|binding|INFO|d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf: Claiming fa:16:3e:2d:11:10 10.100.0.12
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.280 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:11:10 10.100.0.12'], port_security=['fa:16:3e:2d:11:10 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '51a0e59a-81d0-4f05-bb13-5ca025288da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.282 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.284 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5#033[00m
Feb 28 05:09:14 np0005634017 systemd-udevd[288115]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:14 np0005634017 systemd-machined[209480]: New machine qemu-58-instance-00000034.
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2961852e-4395-49e4-a7fb-b430ed64eaaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.297 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.299 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5838e4-9fb1-4d6d-89c9-fb13308eca71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 NetworkManager[49805]: <info>  [1772273354.3013] device (tapd85f2b17-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:14 np0005634017 NetworkManager[49805]: <info>  [1772273354.3017] device (tapd85f2b17-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.300 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4056b401-721c-4784-a563-c18742e29e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 systemd[1]: Started Virtual Machine qemu-58-instance-00000034.
Feb 28 05:09:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:14Z|00435|binding|INFO|Setting lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf ovn-installed in OVS
Feb 28 05:09:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:14Z|00436|binding|INFO|Setting lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf up in Southbound
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.315 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ede29984-d974-4424-877f-50084c5fb180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.331 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.331 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55914ecd-dee7-4737-b4e2-a38aa29512db]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.353 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.374 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5fb7b0-605e-441f-8ada-77c30abba0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 NetworkManager[49805]: <info>  [1772273354.3868] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.386 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a96a852a-98da-469e-8c26-5c3c6604c64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 systemd-udevd[288119]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.421 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[061fc82b-6582-4269-94b6-bf1287c90861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.424 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a91840dc-4085-4c54-a7da-1c4417df1c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 NetworkManager[49805]: <info>  [1772273354.4481] device (tap8e92100d-80): carrier: link connected
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.453 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c16c18a2-fcc6-446e-9523-c7829ac04583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.470 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7196994c-d49f-46d9-a52f-3280b49f1e86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486173, 'reachable_time': 25164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288150, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[33d67bf1-60bf-4b03-81f7-65deefcce85f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486173, 'tstamp': 486173}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288152, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.505 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ef20d3-8459-40e9-9887-af9426ac02a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486173, 'reachable_time': 25164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288162, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ae68d2-3752-4a97-8d20-8a95d9fbc754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[86ea4bfb-d783-4560-9052-1b0fc971cef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:14 np0005634017 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 05:09:14 np0005634017 NetworkManager[49805]: <info>  [1772273354.5922] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.591 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.594 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:14Z|00437|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.607 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5159c429-aa73-45d3-a48a-7f9f48f7d02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.609 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.610 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.613 243456 DEBUG nova.compute.manager [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.614 243456 DEBUG oslo_concurrency.lockutils [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.615 243456 DEBUG oslo_concurrency.lockutils [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.615 243456 DEBUG oslo_concurrency.lockutils [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.615 243456 DEBUG nova.compute.manager [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Processing event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.819 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.820 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273354.818189, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.821 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.824 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.828 243456 INFO nova.virt.libvirt.driver [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance spawned successfully.#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.829 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.839 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.842 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.849 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.850 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.850 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.851 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.851 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.852 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.861 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.862 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273354.8185802, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.862 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.894 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713440759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.900 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273354.8241525, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.900 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.918 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.933 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.941 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.948 243456 INFO nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 7.58 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.949 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:14 np0005634017 nova_compute[243452]: 2026-02-28 10:09:14.984 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:14 np0005634017 podman[288247]: 2026-02-28 10:09:14.989766923 +0000 UTC m=+0.061050515 container create cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.018 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.019 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.026 243456 INFO nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 8.76 seconds to build instance.#033[00m
Feb 28 05:09:15 np0005634017 systemd[1]: Started libpod-conmon-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98.scope.
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.044 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:15 np0005634017 podman[288247]: 2026-02-28 10:09:14.966737457 +0000 UTC m=+0.038021069 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e20019706ae76b9455ec6b594309c35a7d333801c45502a26fa08af52be625/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:15 np0005634017 podman[288247]: 2026-02-28 10:09:15.094257318 +0000 UTC m=+0.165540910 container init cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:09:15 np0005634017 podman[288247]: 2026-02-28 10:09:15.101394118 +0000 UTC m=+0.172677700 container start cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:09:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : New worker (288269) forked
Feb 28 05:09:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : Loading success.
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.208 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.209 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4004MB free_disk=59.967133364640176GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.209 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.209 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb 28 05:09:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb 28 05:09:15 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.300 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 51a0e59a-81d0-4f05-bb13-5ca025288da2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.301 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.301 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.349 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262194963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.912 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.918 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.936 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.961 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:09:15 np0005634017 nova_compute[243452]: 2026-02-28 10:09:15.961 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.381 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.382 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.382 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.383 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.383 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.385 243456 INFO nova.compute.manager [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Terminating instance#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.387 243456 DEBUG nova.compute.manager [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:09:16 np0005634017 kernel: tapd85f2b17-bc (unregistering): left promiscuous mode
Feb 28 05:09:16 np0005634017 NetworkManager[49805]: <info>  [1772273356.4277] device (tapd85f2b17-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:16Z|00438|binding|INFO|Releasing lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf from this chassis (sb_readonly=0)
Feb 28 05:09:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:16Z|00439|binding|INFO|Setting lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf down in Southbound
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:16Z|00440|binding|INFO|Removing iface tapd85f2b17-bc ovn-installed in OVS
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.450 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:11:10 10.100.0.12'], port_security=['fa:16:3e:2d:11:10 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '51a0e59a-81d0-4f05-bb13-5ca025288da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.454 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.455 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[239e17c1-b7e2-4bf0-9c25-28dd78a41c71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.456 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore#033[00m
Feb 28 05:09:16 np0005634017 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Deactivated successfully.
Feb 28 05:09:16 np0005634017 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Consumed 2.158s CPU time.
Feb 28 05:09:16 np0005634017 systemd-machined[209480]: Machine qemu-58-instance-00000034 terminated.
Feb 28 05:09:16 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : haproxy version is 2.8.14-c23fe91
Feb 28 05:09:16 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : path to executable is /usr/sbin/haproxy
Feb 28 05:09:16 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [WARNING]  (288267) : Exiting Master process...
Feb 28 05:09:16 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [ALERT]    (288267) : Current worker (288269) exited with code 143 (Terminated)
Feb 28 05:09:16 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [WARNING]  (288267) : All workers exited. Exiting... (0)
Feb 28 05:09:16 np0005634017 systemd[1]: libpod-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98.scope: Deactivated successfully.
Feb 28 05:09:16 np0005634017 podman[288321]: 2026-02-28 10:09:16.619820034 +0000 UTC m=+0.061009165 container died cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.633 243456 INFO nova.virt.libvirt.driver [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance destroyed successfully.#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.634 243456 DEBUG nova.objects.instance [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 51a0e59a-81d0-4f05-bb13-5ca025288da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98-userdata-shm.mount: Deactivated successfully.
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.654 243456 DEBUG nova.virt.libvirt.vif [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1328994048',display_name='tempest-DeleteServersTestJSON-server-1328994048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1328994048',id=52,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-eeazn004',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:14Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=51a0e59a-81d0-4f05-bb13-5ca025288da2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.655 243456 DEBUG nova.network.os_vif_util [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.656 243456 DEBUG nova.network.os_vif_util [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.657 243456 DEBUG os_vif [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:09:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-22e20019706ae76b9455ec6b594309c35a7d333801c45502a26fa08af52be625-merged.mount: Deactivated successfully.
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.660 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd85f2b17-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:16 np0005634017 podman[288321]: 2026-02-28 10:09:16.667301187 +0000 UTC m=+0.108490298 container cleanup cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.668 243456 INFO os_vif [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc')#033[00m
Feb 28 05:09:16 np0005634017 systemd[1]: libpod-conmon-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98.scope: Deactivated successfully.
Feb 28 05:09:16 np0005634017 podman[288359]: 2026-02-28 10:09:16.745174554 +0000 UTC m=+0.056561179 container remove cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.750 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3117a86-4bcf-4940-abe6-30bb8b5d0345]: (4, ('Sat Feb 28 10:09:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98)\ncfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98\nSat Feb 28 10:09:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98)\ncfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3affc57-d122-41f0-afff-e92169ced859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.754 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:16 np0005634017 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.767 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79f689ae-66fc-4de3-bc8f-c0805ec7d0e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.786 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd28050-2ba3-4546-8df3-878ce210d2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b749dcce-b898-47c1-9fa4-204109584995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[412e8c06-4aa3-4c9a-9d9c-58d68cd76fec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486165, 'reachable_time': 30131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288392, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.810 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:09:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.810 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[81981dee-47e4-4f7f-bf6d-c887395b488e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:16 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.960 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.965 243456 INFO nova.virt.libvirt.driver [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deleting instance files /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2_del#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.967 243456 INFO nova.virt.libvirt.driver [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deletion of /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2_del complete#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.972 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:09:16 np0005634017 nova_compute[243452]: 2026-02-28 10:09:16.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.008 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.009 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.036 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.036 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.037 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.037 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.038 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] No waiting events found dispatching network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.038 243456 WARNING nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received unexpected event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.038 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-unplugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.039 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.041 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.042 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.042 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] No waiting events found dispatching network-vif-unplugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.042 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-unplugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.047 243456 INFO nova.compute.manager [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.048 243456 DEBUG oslo.service.loopingcall [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.049 243456 DEBUG nova.compute.manager [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:09:17 np0005634017 nova_compute[243452]: 2026-02-28 10:09:17.049 243456 DEBUG nova.network.neutron [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:09:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 176 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.559 243456 DEBUG nova.network.neutron [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.588 243456 INFO nova.compute.manager [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 1.54 seconds to deallocate network for instance.#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.656 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.657 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.707 243456 DEBUG oslo_concurrency.processutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.984 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:18 np0005634017 nova_compute[243452]: 2026-02-28 10:09:18.985 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.006 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.090 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.135 243456 DEBUG nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.136 243456 DEBUG oslo_concurrency.lockutils [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.136 243456 DEBUG oslo_concurrency.lockutils [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.137 243456 DEBUG oslo_concurrency.lockutils [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.137 243456 DEBUG nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] No waiting events found dispatching network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.137 243456 WARNING nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received unexpected event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.138 243456 DEBUG nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-deleted-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272923698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.257 243456 DEBUG oslo_concurrency.processutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.264 243456 DEBUG nova.compute.provider_tree [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.292 243456 DEBUG nova.scheduler.client.report [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.314 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.317 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.323 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.323 243456 INFO nova.compute.claims [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.342 243456 INFO nova.scheduler.client.report [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 51a0e59a-81d0-4f05-bb13-5ca025288da2#033[00m
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.416 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:09:19 np0005634017 nova_compute[243452]: 2026-02-28 10:09:19.440 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:19 np0005634017 podman[288582]: 2026-02-28 10:09:19.863536464 +0000 UTC m=+0.092616632 container create 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:09:19 np0005634017 podman[288582]: 2026-02-28 10:09:19.794137165 +0000 UTC m=+0.023217373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:09:19 np0005634017 systemd[1]: Started libpod-conmon-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope.
Feb 28 05:09:19 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230522339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:20 np0005634017 podman[288582]: 2026-02-28 10:09:20.010970775 +0000 UTC m=+0.240050983 container init 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.010 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 169 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Feb 28 05:09:20 np0005634017 podman[288582]: 2026-02-28 10:09:20.0172074 +0000 UTC m=+0.246287558 container start 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.017 243456 DEBUG nova.compute.provider_tree [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:20 np0005634017 peaceful_hawking[288599]: 167 167
Feb 28 05:09:20 np0005634017 systemd[1]: libpod-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope: Deactivated successfully.
Feb 28 05:09:20 np0005634017 conmon[288599]: conmon 9c20d4d6922ac8028d6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope/container/memory.events
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.047 243456 DEBUG nova.scheduler.client.report [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:20 np0005634017 podman[288582]: 2026-02-28 10:09:20.058668844 +0000 UTC m=+0.287749012 container attach 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:09:20 np0005634017 podman[288582]: 2026-02-28 10:09:20.060782274 +0000 UTC m=+0.289862442 container died 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.074 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.075 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:09:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:09:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:09:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.128 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.128 243456 DEBUG nova.network.neutron [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.150 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.168 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:09:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a37c330b9db123b77c6717c68fb506b3ab797a847ca536d9d6fa30c77d4d978a-merged.mount: Deactivated successfully.
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.296 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.298 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.298 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Creating image(s)#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.324 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.355 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:20 np0005634017 podman[288582]: 2026-02-28 10:09:20.365505231 +0000 UTC m=+0.594585369 container remove 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.385 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.391 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:20 np0005634017 systemd[1]: libpod-conmon-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope: Deactivated successfully.
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.483 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.485 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.486 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.487 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.529 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.535 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:20 np0005634017 podman[288679]: 2026-02-28 10:09:20.49860589 +0000 UTC m=+0.021693031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:09:20 np0005634017 podman[288679]: 2026-02-28 10:09:20.629266939 +0000 UTC m=+0.152354110 container create 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:09:20 np0005634017 systemd[1]: Started libpod-conmon-7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f.scope.
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.730 243456 DEBUG nova.network.neutron [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:09:20 np0005634017 nova_compute[243452]: 2026-02-28 10:09:20.732 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:20 np0005634017 podman[288679]: 2026-02-28 10:09:20.899866549 +0000 UTC m=+0.422953690 container init 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:20 np0005634017 podman[288679]: 2026-02-28 10:09:20.911198717 +0000 UTC m=+0.434285848 container start 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.297 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "226b6da4-15c9-4d10-ab4d-194b313446f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.299 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.325 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:09:21 np0005634017 silly_stonebraker[288736]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:09:21 np0005634017 silly_stonebraker[288736]: --> All data devices are unavailable
Feb 28 05:09:21 np0005634017 podman[288679]: 2026-02-28 10:09:21.373999645 +0000 UTC m=+0.897086776 container attach 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:09:21 np0005634017 systemd[1]: libpod-7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f.scope: Deactivated successfully.
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.419 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.421 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.432 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.432 243456 INFO nova.compute.claims [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:09:21 np0005634017 podman[288756]: 2026-02-28 10:09:21.452967433 +0000 UTC m=+0.027431111 container died 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.561 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:09:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998-merged.mount: Deactivated successfully.
Feb 28 05:09:21 np0005634017 nova_compute[243452]: 2026-02-28 10:09:21.951 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:09:22 np0005634017 podman[288756]: 2026-02-28 10:09:22.002448375 +0000 UTC m=+0.576912043 container remove 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:22 np0005634017 systemd[1]: libpod-conmon-7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f.scope: Deactivated successfully.
Feb 28 05:09:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 153 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 174 op/s
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.045 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] resizing rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:09:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263084625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.246 243456 DEBUG nova.objects.instance [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'migration_context' on Instance uuid 9098ebf3-e36c-492b-9c50-dc6f0078794d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.264 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.265 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Ensure instance console log exists: /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.266 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.267 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.267 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.270 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.271 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.277 243456 WARNING nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.280 243456 DEBUG nova.compute.provider_tree [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.287 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.288 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.292 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.293 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.294 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.294 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.295 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.296 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.296 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.297 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.297 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.298 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.298 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.299 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.299 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.299 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.305 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.346 243456 DEBUG nova.scheduler.client.report [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.378 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.379 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.436 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.437 243456 DEBUG nova.network.neutron [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.455 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.473 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.486043236 +0000 UTC m=+0.042897866 container create d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.520 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273347.4671392, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.521 243456 INFO nova.compute.manager [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Stopped (Lifecycle Event)
Feb 28 05:09:22 np0005634017 systemd[1]: Started libpod-conmon-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope.
Feb 28 05:09:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.550 243456 DEBUG nova.compute.manager [None req-53a8bd45-8f0b-42d2-b53c-69f583f4e0ff - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.555213359 +0000 UTC m=+0.112067989 container init d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.555 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.556 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.557 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Creating image(s)
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.468108072 +0000 UTC m=+0.024962712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.562523184 +0000 UTC m=+0.119377814 container start d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.566089734 +0000 UTC m=+0.122944374 container attach d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:09:22 np0005634017 eager_lamport[288962]: 167 167
Feb 28 05:09:22 np0005634017 systemd[1]: libpod-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope: Deactivated successfully.
Feb 28 05:09:22 np0005634017 conmon[288962]: conmon d3f6634175b23c608a12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope/container/memory.events
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.569660405 +0000 UTC m=+0.126515045 container died d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:09:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ce6e2472751119b85618870776c5133ec4753e9745c1a6f02eebe3aa7c903366-merged.mount: Deactivated successfully.
Feb 28 05:09:22 np0005634017 podman[288927]: 2026-02-28 10:09:22.605952124 +0000 UTC m=+0.162806754 container remove d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.606 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:22 np0005634017 systemd[1]: libpod-conmon-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope: Deactivated successfully.
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.639 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.665 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.670 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.761 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.763 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.764 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.764 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:22 np0005634017 podman[289039]: 2026-02-28 10:09:22.770181027 +0000 UTC m=+0.062224398 container create f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.796 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.802 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 226b6da4-15c9-4d10-ab4d-194b313446f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:22 np0005634017 systemd[1]: Started libpod-conmon-f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d.scope.
Feb 28 05:09:22 np0005634017 podman[289039]: 2026-02-28 10:09:22.73431154 +0000 UTC m=+0.026354981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:09:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236287320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:22 np0005634017 podman[289039]: 2026-02-28 10:09:22.871869653 +0000 UTC m=+0.163913044 container init f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.871 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:22 np0005634017 podman[289039]: 2026-02-28 10:09:22.878636423 +0000 UTC m=+0.170679784 container start f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:09:22 np0005634017 podman[289039]: 2026-02-28 10:09:22.882134101 +0000 UTC m=+0.174177482 container attach f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.907 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:22 np0005634017 nova_compute[243452]: 2026-02-28 10:09:22.912 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.096 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 226b6da4-15c9-4d10-ab4d-194b313446f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.161 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] resizing rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]: {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:    "0": [
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:        {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "devices": [
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "/dev/loop3"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            ],
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_name": "ceph_lv0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_size": "21470642176",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "name": "ceph_lv0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "tags": {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cluster_name": "ceph",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.crush_device_class": "",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.encrypted": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.objectstore": "bluestore",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osd_id": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.type": "block",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.vdo": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.with_tpm": "0"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            },
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "type": "block",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "vg_name": "ceph_vg0"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:        }
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:    ],
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:    "1": [
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:        {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "devices": [
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "/dev/loop4"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            ],
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_name": "ceph_lv1",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_size": "21470642176",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "name": "ceph_lv1",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "tags": {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cluster_name": "ceph",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.crush_device_class": "",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.encrypted": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.objectstore": "bluestore",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osd_id": "1",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.type": "block",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.vdo": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.with_tpm": "0"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            },
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "type": "block",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "vg_name": "ceph_vg1"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:        }
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:    ],
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:    "2": [
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:        {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "devices": [
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "/dev/loop5"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            ],
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_name": "ceph_lv2",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_size": "21470642176",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "name": "ceph_lv2",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "tags": {
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.cluster_name": "ceph",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.crush_device_class": "",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.encrypted": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.objectstore": "bluestore",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osd_id": "2",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.type": "block",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.vdo": "0",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:                "ceph.with_tpm": "0"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            },
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "type": "block",
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:            "vg_name": "ceph_vg2"
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:        }
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]:    ]
Feb 28 05:09:23 np0005634017 interesting_burnell[289077]: }
Feb 28 05:09:23 np0005634017 systemd[1]: libpod-f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d.scope: Deactivated successfully.
Feb 28 05:09:23 np0005634017 podman[289039]: 2026-02-28 10:09:23.232210193 +0000 UTC m=+0.524253594 container died f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.248 243456 DEBUG nova.objects.instance [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'migration_context' on Instance uuid 226b6da4-15c9-4d10-ab4d-194b313446f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1-merged.mount: Deactivated successfully.
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.266 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.267 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Ensure instance console log exists: /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.267 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.268 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.268 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:23 np0005634017 podman[289039]: 2026-02-28 10:09:23.274888382 +0000 UTC m=+0.566931743 container remove f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:23 np0005634017 systemd[1]: libpod-conmon-f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d.scope: Deactivated successfully.
Feb 28 05:09:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389734995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.525 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.529 243456 DEBUG nova.objects.instance [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 9098ebf3-e36c-492b-9c50-dc6f0078794d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.554 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <uuid>9098ebf3-e36c-492b-9c50-dc6f0078794d</uuid>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <name>instance-00000035</name>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1661712571</nova:name>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:22</nova:creationTime>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:user uuid="7b7f4fcc1d0d41f59aed36b3de16f8e2">tempest-ListImageFiltersTestJSON-1834030841-project-member</nova:user>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <nova:project uuid="6a54f983c0fa466f9e11947f104ed5ca">tempest-ListImageFiltersTestJSON-1834030841</nova:project>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <entry name="serial">9098ebf3-e36c-492b-9c50-dc6f0078794d</entry>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <entry name="uuid">9098ebf3-e36c-492b-9c50-dc6f0078794d</entry>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/console.log" append="off"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:23 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:23 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:23 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:23 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.641 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.642 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.643 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Using config drive#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.669 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.724552091 +0000 UTC m=+0.045027726 container create 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:23 np0005634017 systemd[1]: Started libpod-conmon-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope.
Feb 28 05:09:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.706772151 +0000 UTC m=+0.027247816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.811883034 +0000 UTC m=+0.132358669 container init 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.819308422 +0000 UTC m=+0.139784057 container start 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.82207401 +0000 UTC m=+0.142549645 container attach 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:09:23 np0005634017 mystifying_wing[289326]: 167 167
Feb 28 05:09:23 np0005634017 systemd[1]: libpod-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope: Deactivated successfully.
Feb 28 05:09:23 np0005634017 conmon[289326]: conmon 0ef2e1d1c10a60fa70bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope/container/memory.events
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.824908459 +0000 UTC m=+0.145384094 container died 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:09:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-15ef729bd0bd8843ce2464cc8d6126fb99b5607f41102f4da82b2d66f4499a56-merged.mount: Deactivated successfully.
Feb 28 05:09:23 np0005634017 podman[289310]: 2026-02-28 10:09:23.860328934 +0000 UTC m=+0.180804569 container remove 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:09:23 np0005634017 systemd[1]: libpod-conmon-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope: Deactivated successfully.
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.931 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Creating config drive at /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.936 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8_ezggb_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.983 243456 DEBUG nova.network.neutron [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.984 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.988 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:23 np0005634017 nova_compute[243452]: 2026-02-28 10:09:23.996 243456 WARNING nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.007 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:24 np0005634017 podman[289349]: 2026-02-28 10:09:24.00724426 +0000 UTC m=+0.054745738 container create 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.008 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.013 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.015 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 172 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 180 op/s
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.016 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.016 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.017 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.018 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.019 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.020 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.020 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.021 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.021 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.022 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.023 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.023 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.030 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:24 np0005634017 systemd[1]: Started libpod-conmon-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope.
Feb 28 05:09:24 np0005634017 podman[289349]: 2026-02-28 10:09:23.979722708 +0000 UTC m=+0.027224246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.085 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8_ezggb_" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:24 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:24 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:24 np0005634017 podman[289349]: 2026-02-28 10:09:24.120843391 +0000 UTC m=+0.168344849 container init 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:09:24 np0005634017 podman[289349]: 2026-02-28 10:09:24.131731127 +0000 UTC m=+0.179232575 container start 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.133 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.139 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:09:24 np0005634017 podman[289349]: 2026-02-28 10:09:24.142906291 +0000 UTC m=+0.190407729 container attach 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.337 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.339 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deleting local config drive /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config because it was imported into RBD.
Feb 28 05:09:24 np0005634017 systemd-machined[209480]: New machine qemu-59-instance-00000035.
Feb 28 05:09:24 np0005634017 systemd[1]: Started Virtual Machine qemu-59-instance-00000035.
Feb 28 05:09:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749059947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.578 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.607 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:09:24 np0005634017 nova_compute[243452]: 2026-02-28 10:09:24.611 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:09:24 np0005634017 lvm[289560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:09:24 np0005634017 lvm[289559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:09:24 np0005634017 lvm[289560]: VG ceph_vg1 finished
Feb 28 05:09:24 np0005634017 lvm[289559]: VG ceph_vg0 finished
Feb 28 05:09:24 np0005634017 lvm[289562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:09:24 np0005634017 lvm[289562]: VG ceph_vg2 finished
Feb 28 05:09:24 np0005634017 quizzical_ishizaka[289370]: {}
Feb 28 05:09:24 np0005634017 systemd[1]: libpod-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope: Deactivated successfully.
Feb 28 05:09:24 np0005634017 systemd[1]: libpod-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope: Consumed 1.227s CPU time.
Feb 28 05:09:24 np0005634017 conmon[289370]: conmon 614a2d60d9a10275b15a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope/container/memory.events
Feb 28 05:09:24 np0005634017 podman[289349]: 2026-02-28 10:09:24.991083872 +0000 UTC m=+1.038585320 container died 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:09:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a-merged.mount: Deactivated successfully.
Feb 28 05:09:25 np0005634017 podman[289349]: 2026-02-28 10:09:25.052117116 +0000 UTC m=+1.099618544 container remove 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:09:25 np0005634017 systemd[1]: libpod-conmon-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope: Deactivated successfully.
Feb 28 05:09:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:09:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:09:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.136 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273365.135103, 9098ebf3-e36c-492b-9c50-dc6f0078794d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.141 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] VM Resumed (Lifecycle Event)
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.147 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.149 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:09:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.154 243456 INFO nova.virt.libvirt.driver [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance spawned successfully.
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.155 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.182 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.188 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.189 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.189 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.190 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.190 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.191 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.196 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:09:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788229665' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.230 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.230 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273365.13723, 9098ebf3-e36c-492b-9c50-dc6f0078794d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.231 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] VM Started (Lifecycle Event)
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.234 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.236 243456 DEBUG nova.objects.instance [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 226b6da4-15c9-4d10-ab4d-194b313446f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.295 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <uuid>226b6da4-15c9-4d10-ab4d-194b313446f9</uuid>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <name>instance-00000036</name>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1188134634</nova:name>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:23</nova:creationTime>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:user uuid="7b7f4fcc1d0d41f59aed36b3de16f8e2">tempest-ListImageFiltersTestJSON-1834030841-project-member</nova:user>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <nova:project uuid="6a54f983c0fa466f9e11947f104ed5ca">tempest-ListImageFiltersTestJSON-1834030841</nova:project>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <entry name="serial">226b6da4-15c9-4d10-ab4d-194b313446f9</entry>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <entry name="uuid">226b6da4-15c9-4d10-ab4d-194b313446f9</entry>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/226b6da4-15c9-4d10-ab4d-194b313446f9_disk">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/console.log" append="off"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:25 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:25 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:25 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:25 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.307 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.319 243456 INFO nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 5.02 seconds to spawn the instance on the hypervisor.
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.320 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.454 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.487 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.488 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.501 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.501 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.502 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Using config drive
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.530 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.539 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.543 243456 INFO nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 6.48 seconds to build instance.
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.600 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.632 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.633 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.639 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.639 243456 INFO nova.compute.claims [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.781 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Creating config drive at /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.784 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp453tmx82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.827 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.937 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp453tmx82" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.976 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:25 np0005634017 nova_compute[243452]: 2026-02-28 10:09:25.982 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 219 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Feb 28 05:09:26 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.160 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.162 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deleting local config drive /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config because it was imported into RBD.#033[00m
Feb 28 05:09:26 np0005634017 systemd-machined[209480]: New machine qemu-60-instance-00000036.
Feb 28 05:09:26 np0005634017 systemd[1]: Started Virtual Machine qemu-60-instance-00000036.
Feb 28 05:09:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458384995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.451 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.459 243456 DEBUG nova.compute.provider_tree [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.485 243456 DEBUG nova.scheduler.client.report [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.513 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.514 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.566 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.566 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.591 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.618 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.702 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.704 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.704 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Creating image(s)#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.728 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.756 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.788 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.795 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.860 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.862 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.862 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.863 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.886 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.892 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:26 np0005634017 nova_compute[243452]: 2026-02-28 10:09:26.930 243456 DEBUG nova.policy [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.195 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.253 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.350 243456 DEBUG nova.objects.instance [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.372 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.373 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Ensure instance console log exists: /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.374 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.374 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.375 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.491 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273367.4894414, 226b6da4-15c9-4d10-ab4d-194b313446f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.493 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.495 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.496 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.500 243456 INFO nova.virt.libvirt.driver [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance spawned successfully.#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.500 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.540 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.548 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.551 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.552 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.553 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.553 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.554 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.554 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.596 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.597 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273367.4910986, 226b6da4-15c9-4d10-ab4d-194b313446f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.597 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.616 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.628 243456 INFO nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 5.07 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.629 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.631 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.664 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.698 243456 INFO nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 6.30 seconds to build instance.#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.717 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:27 np0005634017 nova_compute[243452]: 2026-02-28 10:09:27.730 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Successfully created port: 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:09:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 259 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.7 MiB/s wr, 202 op/s
Feb 28 05:09:28 np0005634017 nova_compute[243452]: 2026-02-28 10:09:28.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:28 np0005634017 nova_compute[243452]: 2026-02-28 10:09:28.730 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Successfully updated port: 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:09:28 np0005634017 nova_compute[243452]: 2026-02-28 10:09:28.776 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:28 np0005634017 nova_compute[243452]: 2026-02-28 10:09:28.776 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:28 np0005634017 nova_compute[243452]: 2026-02-28 10:09:28.777 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:09:28 np0005634017 nova_compute[243452]: 2026-02-28 10:09:28.917 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:09:29
Feb 28 05:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control']
Feb 28 05:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.120 243456 DEBUG nova.compute.manager [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-changed-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.125 243456 DEBUG nova.compute.manager [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Refreshing instance network info cache due to event network-changed-3fb210f2-4c9d-4399-acf6-20e10d93fdd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.128 243456 DEBUG oslo_concurrency.lockutils [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.142 243456 DEBUG nova.compute.manager [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb 28 05:09:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.195 243456 INFO nova.compute.manager [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] instance snapshotting#033[00m
Feb 28 05:09:29 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.423 243456 INFO nova.virt.libvirt.driver [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Beginning live snapshot process#033[00m
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.572 243456 DEBUG nova.virt.libvirt.imagebackend [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.797 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(7bebd4cdf79048b6b104db3e428177f0) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:09:29 np0005634017 nova_compute[243452]: 2026-02-28 10:09:29.988 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updating instance_info_cache with network_info: [{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.007 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.008 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance network_info: |[{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.009 243456 DEBUG oslo_concurrency.lockutils [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.009 243456 DEBUG nova.network.neutron [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Refreshing network info cache for port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.012 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start _get_guest_xml network_info=[{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 288 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 234 op/s
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.019 243456 WARNING nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.028 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.029 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.032 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.032 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.033 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.033 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.035 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.035 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.035 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.036 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.036 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.036 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.040 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:30 np0005634017 podman[290001]: 2026-02-28 10:09:30.149810754 +0000 UTC m=+0.066490338 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:09:30 np0005634017 podman[289999]: 2026-02-28 10:09:30.183191392 +0000 UTC m=+0.110478554 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:09:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb 28 05:09:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb 28 05:09:30 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.415 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] cloning vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk@7bebd4cdf79048b6b104db3e428177f0 to images/9ce992ef-e998-43ea-9bd4-8cdaf2841ea4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.509 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] flattening images/9ce992ef-e998-43ea-9bd4-8cdaf2841ea4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:09:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:09:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048446427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.659 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.684 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.689 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:30 np0005634017 nova_compute[243452]: 2026-02-28 10:09:30.791 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] removing snapshot(7bebd4cdf79048b6b104db3e428177f0) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:09:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079513857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.242 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.244 243456 DEBUG nova.virt.libvirt.vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1458563036',display_name='tempest-DeleteServersTestJSON-server-1458563036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1458563036',id=55,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-8hsyesw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:26Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=8eee8376-acc6-4a01-80c3-d7f0d579f9bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.245 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.246 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.247 243456 DEBUG nova.objects.instance [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.263 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <uuid>8eee8376-acc6-4a01-80c3-d7f0d579f9bb</uuid>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <name>instance-00000037</name>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersTestJSON-server-1458563036</nova:name>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:30</nova:creationTime>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <nova:port uuid="3fb210f2-4c9d-4399-acf6-20e10d93fdd5">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <entry name="serial">8eee8376-acc6-4a01-80c3-d7f0d579f9bb</entry>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <entry name="uuid">8eee8376-acc6-4a01-80c3-d7f0d579f9bb</entry>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:be:7e:cf"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <target dev="tap3fb210f2-4c"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/console.log" append="off"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:31 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:31 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:31 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:31 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.265 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Preparing to wait for external event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.265 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.265 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.266 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.266 243456 DEBUG nova.virt.libvirt.vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1458563036',display_name='tempest-DeleteServersTestJSON-server-1458563036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1458563036',id=55,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-8hsyesw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:26Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=8eee8376-acc6-4a01-80c3-d7f0d579f9bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.267 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.267 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.268 243456 DEBUG os_vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.269 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.270 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.274 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fb210f2-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.274 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fb210f2-4c, col_values=(('external_ids', {'iface-id': '3fb210f2-4c9d-4399-acf6-20e10d93fdd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:7e:cf', 'vm-uuid': '8eee8376-acc6-4a01-80c3-d7f0d579f9bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:31 np0005634017 NetworkManager[49805]: <info>  [1772273371.2773] manager: (tap3fb210f2-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.286 243456 INFO os_vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c')#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.349 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.350 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.357 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.357 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.358 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:be:7e:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.358 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Using config drive#033[00m
Feb 28 05:09:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb 28 05:09:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb 28 05:09:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.403 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.410 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.425 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(snap) on rbd image(9ce992ef-e998-43ea-9bd4-8cdaf2841ea4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.494 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.494 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.504 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.505 243456 INFO nova.compute.claims [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.629 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273356.6282868, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.629 243456 INFO nova.compute.manager [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.655 243456 DEBUG nova.compute.manager [None req-2921a1fd-7a75-42da-8b0c-b639051058ee - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.674 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.741 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Creating config drive at /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.749 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp30kndlo1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.901 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp30kndlo1" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.931 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:31 np0005634017 nova_compute[243452]: 2026-02-28 10:09:31.937 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.8 MiB/s wr, 319 op/s
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.054 243456 DEBUG nova.network.neutron [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updated VIF entry in instance network info cache for port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.055 243456 DEBUG nova.network.neutron [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updating instance_info_cache with network_info: [{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.077 243456 DEBUG oslo_concurrency.lockutils [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.094 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.095 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deleting local config drive /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config because it was imported into RBD.#033[00m
Feb 28 05:09:32 np0005634017 kernel: tap3fb210f2-4c: entered promiscuous mode
Feb 28 05:09:32 np0005634017 NetworkManager[49805]: <info>  [1772273372.1437] manager: (tap3fb210f2-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:32Z|00441|binding|INFO|Claiming lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for this chassis.
Feb 28 05:09:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:32Z|00442|binding|INFO|3fb210f2-4c9d-4399-acf6-20e10d93fdd5: Claiming fa:16:3e:be:7e:cf 10.100.0.6
Feb 28 05:09:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:32Z|00443|binding|INFO|Setting lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 ovn-installed in OVS
Feb 28 05:09:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:32Z|00444|binding|INFO|Setting lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 up in Southbound
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.170 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:7e:cf 10.100.0.6'], port_security=['fa:16:3e:be:7e:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8eee8376-acc6-4a01-80c3-d7f0d579f9bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3fb210f2-4c9d-4399-acf6-20e10d93fdd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.172 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.173 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5#033[00m
Feb 28 05:09:32 np0005634017 systemd-machined[209480]: New machine qemu-61-instance-00000037.
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[266f00ec-6c37-4a5e-90e9-361aff99d777]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.189 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.192 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.192 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2c039fa8-de7d-487e-981e-b78057c00662]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e16c7cd-3f15-4539-a4f6-c694d7711d4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 systemd[1]: Started Virtual Machine qemu-61-instance-00000037.
Feb 28 05:09:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430434087' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.209 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[dd19695c-2266-4cdd-bc45-1f949418d3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 systemd-udevd[290291]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:32 np0005634017 NetworkManager[49805]: <info>  [1772273372.2381] device (tap3fb210f2-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:32 np0005634017 NetworkManager[49805]: <info>  [1772273372.2388] device (tap3fb210f2-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.242 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a62b61b0-274e-4ef6-a2fb-896e05e63a21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.257 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.274 243456 DEBUG nova.compute.provider_tree [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.277 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a90b5ba0-a540-4351-ab54-aea1bde9d1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6139f6e-d546-4cb9-82cb-46d28192c4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 NetworkManager[49805]: <info>  [1772273372.2827] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.293 243456 DEBUG nova.scheduler.client.report [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.315 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[de43787f-077f-4ce9-8e01-d86e9b0f87a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.318 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4b6e14-bf81-4292-b28f-bae73c41a3bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.323 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.324 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:09:32 np0005634017 NetworkManager[49805]: <info>  [1772273372.3411] device (tap8e92100d-80): carrier: link connected
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.346 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[69dc6981-22ef-4efe-8823-7c02870d85bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.363 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72afc927-248f-4eec-b46d-14ca9e8f83a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487962, 'reachable_time': 32981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290323, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.369 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.370 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc12998-3793-47f1-b27c-88056bc3a1d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487962, 'tstamp': 487962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290324, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.388 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:09:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb 28 05:09:32 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.405 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.413 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d892c4a4-8d30-4d70-ab01-16cc53bd291b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487962, 'reachable_time': 32981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290325, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.459 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30d90d78-0039-4934-b015-13d7ee33829f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.492 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.494 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.495 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Creating image(s)#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.529 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.542 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5262736-d7f8-461d-a1f2-f5143a8ca874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.544 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.544 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.545 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:32 np0005634017 NetworkManager[49805]: <info>  [1772273372.5479] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Feb 28 05:09:32 np0005634017 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.550 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:32Z|00445|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.557 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.558 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1fd392-5e4a-4c5f-95c2-36722634292e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.559 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.560 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.581 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.618 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.623 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.654 243456 DEBUG nova.policy [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1341b7bab4cc4ddca989e12ab7770723', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '13c8391ebb8644dea661a093a38db268', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG nova.compute.manager [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG oslo_concurrency.lockutils [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG oslo_concurrency.lockutils [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG oslo_concurrency.lockutils [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.665 243456 DEBUG nova.compute.manager [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Processing event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.719 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.719 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.720 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.720 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.739 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:32 np0005634017 nova_compute[243452]: 2026-02-28 10:09:32.743 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 163deb6e-49f4-4093-b0c1-98240f93c499_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:33 np0005634017 podman[290474]: 2026-02-28 10:09:33.091557113 +0000 UTC m=+0.119536348 container create 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:09:33 np0005634017 podman[290474]: 2026-02-28 10:09:32.994681992 +0000 UTC m=+0.022661227 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:33 np0005634017 systemd[1]: Started libpod-conmon-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116.scope.
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.143 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 163deb6e-49f4-4093-b0c1-98240f93c499_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11d72e3c1b7abf544db6e07b97f78e6bb5dfdbed62af2a9efcbccff78207a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:33 np0005634017 podman[290474]: 2026-02-28 10:09:33.187860158 +0000 UTC m=+0.215839383 container init 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.188 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273373.1559103, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.188 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:33 np0005634017 podman[290474]: 2026-02-28 10:09:33.193166387 +0000 UTC m=+0.221145602 container start 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.194 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:33 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : New worker (290531) forked
Feb 28 05:09:33 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : Loading success.
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.239 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.258 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] resizing rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.290 243456 INFO nova.virt.libvirt.driver [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance spawned successfully.#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.290 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.350 243456 DEBUG nova.objects.instance [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'migration_context' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.391 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.394 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.483 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.483 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Ensure instance console log exists: /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.484 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.484 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.484 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.485 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.486 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273373.1560607, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.486 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.492 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.493 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.494 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.494 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.495 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.495 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.511 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.514 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273373.201439, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.514 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.543 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.546 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.592 243456 INFO nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 6.89 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.593 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.662 243456 INFO nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 8.05 seconds to build instance.#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.700 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:33 np0005634017 nova_compute[243452]: 2026-02-28 10:09:33.925 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Successfully created port: cf1a075d-084d-4b7f-afd3-5a1d130b7493 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:09:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 304 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 5.0 MiB/s wr, 409 op/s
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.101 243456 INFO nova.virt.libvirt.driver [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Snapshot image upload complete#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.103 243456 INFO nova.compute.manager [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 4.90 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.609 243456 DEBUG nova.compute.manager [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG oslo_concurrency.lockutils [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG oslo_concurrency.lockutils [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG oslo_concurrency.lockutils [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG nova.compute.manager [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] No waiting events found dispatching network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.611 243456 WARNING nova.compute.manager [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received unexpected event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.814 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Successfully updated port: cf1a075d-084d-4b7f-afd3-5a1d130b7493 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.830 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.830 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquired lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.830 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:09:34 np0005634017 nova_compute[243452]: 2026-02-28 10:09:34.955 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:09:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb 28 05:09:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb 28 05:09:35 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb 28 05:09:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 445 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 478 op/s
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.067 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.096 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Releasing lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.096 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance network_info: |[{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.099 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start _get_guest_xml network_info=[{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.110 243456 WARNING nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.116 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.116 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.119 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.119 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.122 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.122 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.125 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.166 243456 INFO nova.compute.manager [None req-97004ae0-02a2-451b-87af-a3827300e816 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Pausing#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.168 243456 DEBUG nova.objects.instance [None req-97004ae0-02a2-451b-87af-a3827300e816 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.196 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273376.195942, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.197 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.199 243456 DEBUG nova.compute.manager [None req-97004ae0-02a2-451b-87af-a3827300e816 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.212 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.234 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.321 243456 DEBUG nova.compute.manager [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.365 243456 INFO nova.compute.manager [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] instance snapshotting#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.627 243456 INFO nova.virt.libvirt.driver [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Beginning live snapshot process#033[00m
Feb 28 05:09:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200886479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.680 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.707 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.711 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.795 243456 DEBUG nova.compute.manager [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-changed-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.796 243456 DEBUG nova.compute.manager [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Refreshing instance network info cache due to event network-changed-cf1a075d-084d-4b7f-afd3-5a1d130b7493. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.796 243456 DEBUG oslo_concurrency.lockutils [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.797 243456 DEBUG oslo_concurrency.lockutils [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.797 243456 DEBUG nova.network.neutron [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Refreshing network info cache for port cf1a075d-084d-4b7f-afd3-5a1d130b7493 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.804 243456 DEBUG nova.virt.libvirt.imagebackend [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:09:36 np0005634017 nova_compute[243452]: 2026-02-28 10:09:36.990 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(d6439769e7bb4ab49fa1dd35452060d1) on rbd image(226b6da4-15c9-4d10-ab4d-194b313446f9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609547973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.284 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.287 243456 DEBUG nova.virt.libvirt.vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:32Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.287 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.289 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.290 243456 DEBUG nova.objects.instance [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'pci_devices' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.316 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <uuid>163deb6e-49f4-4093-b0c1-98240f93c499</uuid>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <name>instance-00000038</name>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:name>tempest-InstanceActionsTestJSON-server-1645315342</nova:name>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:36</nova:creationTime>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:user uuid="1341b7bab4cc4ddca989e12ab7770723">tempest-InstanceActionsTestJSON-1464907638-project-member</nova:user>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:project uuid="13c8391ebb8644dea661a093a38db268">tempest-InstanceActionsTestJSON-1464907638</nova:project>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <nova:port uuid="cf1a075d-084d-4b7f-afd3-5a1d130b7493">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <entry name="serial">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <entry name="uuid">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk.config">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:c1:c5:d4"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <target dev="tapcf1a075d-08"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/console.log" append="off"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:37 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:37 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:37 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:37 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.327 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Preparing to wait for external event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.328 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.328 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.329 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.330 243456 DEBUG nova.virt.libvirt.vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:32Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.331 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.332 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.332 243456 DEBUG os_vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.334 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.334 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.342 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.343 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf1a075d-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.344 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf1a075d-08, col_values=(('external_ids', {'iface-id': 'cf1a075d-084d-4b7f-afd3-5a1d130b7493', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c5:d4', 'vm-uuid': '163deb6e-49f4-4093-b0c1-98240f93c499'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:37 np0005634017 NetworkManager[49805]: <info>  [1772273377.3466] manager: (tapcf1a075d-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.355 243456 INFO os_vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.402 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.403 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.403 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] No VIF found with MAC fa:16:3e:c1:c5:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.404 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Using config drive#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.425 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.599 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] cloning vms/226b6da4-15c9-4d10-ab4d-194b313446f9_disk@d6439769e7bb4ab49fa1dd35452060d1 to images/fdddffc1-692d-46e3-8fbc-eca1a14df1a2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:09:37 np0005634017 nova_compute[243452]: 2026-02-28 10:09:37.707 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] flattening images/fdddffc1-692d-46e3-8fbc-eca1a14df1a2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb 28 05:09:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb 28 05:09:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 488 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 26 MiB/s wr, 489 op/s
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.062 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] removing snapshot(d6439769e7bb4ab49fa1dd35452060d1) on rbd image(226b6da4-15c9-4d10-ab4d-194b313446f9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.120 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Creating config drive at /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.129 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpadjypnla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.275 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpadjypnla" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.305 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.309 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.414 243456 DEBUG nova.network.neutron [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updated VIF entry in instance network info cache for port cf1a075d-084d-4b7f-afd3-5a1d130b7493. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.416 243456 DEBUG nova.network.neutron [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.439 243456 DEBUG oslo_concurrency.lockutils [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.480 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.480 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deleting local config drive /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config because it was imported into RBD.#033[00m
Feb 28 05:09:38 np0005634017 kernel: tapcf1a075d-08: entered promiscuous mode
Feb 28 05:09:38 np0005634017 NetworkManager[49805]: <info>  [1772273378.5294] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:38Z|00446|binding|INFO|Claiming lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 for this chassis.
Feb 28 05:09:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:38Z|00447|binding|INFO|cf1a075d-084d-4b7f-afd3-5a1d130b7493: Claiming fa:16:3e:c1:c5:d4 10.100.0.13
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.541 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.543 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 bound to our chassis#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.545 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f471a656-3c36-4e5b-a5f2-7df3f97122e0#033[00m
Feb 28 05:09:38 np0005634017 systemd-udevd[290853]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.558 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94b90fd4-fd61-40a5-ab57-0f2bb702b854]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.559 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf471a656-31 in ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.561 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf471a656-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.561 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1161d4c3-213b-40f4-80c3-ac670d6a825f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:38Z|00448|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 ovn-installed in OVS
Feb 28 05:09:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:38Z|00449|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 up in Southbound
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0da612-e823-4775-bd27-d608e4dcfcd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 systemd-machined[209480]: New machine qemu-62-instance-00000038.
Feb 28 05:09:38 np0005634017 NetworkManager[49805]: <info>  [1772273378.5742] device (tapcf1a075d-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:38 np0005634017 NetworkManager[49805]: <info>  [1772273378.5749] device (tapcf1a075d-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.578 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c9852497-b0dc-4760-a0f1-17c787107331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 systemd[1]: Started Virtual Machine qemu-62-instance-00000038.
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5adc457a-441f-46b9-ac1b-872b01902709]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.625 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3c82fa59-96c6-4763-8fc6-46bf3de9b894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 NetworkManager[49805]: <info>  [1772273378.6332] manager: (tapf471a656-30): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.633 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb21031-5a74-414a-9f2a-1d7db22e1c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 systemd-udevd[290856]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.640 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3646ff-dc08-46fd-95e2-923f41526514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.672 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6686c6-8459-4bc1-a004-8423e5c8092a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 NetworkManager[49805]: <info>  [1772273378.6937] device (tapf471a656-30): carrier: link connected
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.704 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[da41ee3e-8a79-44ff-a851-0c38ac8460a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4abf72c2-de32-45c1-9d59-14622df4b314]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488597, 'reachable_time': 35111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290885, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[644a84a8-6231-4076-b169-69412773ddfd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:76a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488597, 'tstamp': 488597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290886, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5513ff1a-4588-4a4b-a98a-4c915e258ab8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488597, 'reachable_time': 35111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290887, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.783 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e171c2d0-8fef-478a-90d0-d0563f863900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3dcb09-85e7-49f5-bded-9d93d967363e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.852 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf471a656-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 kernel: tapf471a656-30: entered promiscuous mode
Feb 28 05:09:38 np0005634017 NetworkManager[49805]: <info>  [1772273378.8550] manager: (tapf471a656-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.861 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf471a656-30, col_values=(('external_ids', {'iface-id': '403ee777-cb2a-4f95-bb3a-7e871bc2a2b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:38Z|00450|binding|INFO|Releasing lport 403ee777-cb2a-4f95-bb3a-7e871bc2a2b0 from this chassis (sb_readonly=0)
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.862 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.866 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b260655-07ef-4909-a3bd-6b174633e151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.872 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.873 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'env', 'PROCESS_TAG=haproxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f471a656-3c36-4e5b-a5f2-7df3f97122e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb 28 05:09:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb 28 05:09:38 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb 28 05:09:38 np0005634017 nova_compute[243452]: 2026-02-28 10:09:38.969 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(snap) on rbd image(fdddffc1-692d-46e3-8fbc-eca1a14df1a2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.001 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273378.9883766, 163deb6e-49f4-4093-b0c1-98240f93c499 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.001 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.028 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.033 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273378.9889781, 163deb6e-49f4-4093-b0c1-98240f93c499 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.033 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.056 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.059 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.084 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:39 np0005634017 podman[290979]: 2026-02-28 10:09:39.28229681 +0000 UTC m=+0.063598107 container create 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:09:39 np0005634017 systemd[1]: Started libpod-conmon-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a.scope.
Feb 28 05:09:39 np0005634017 podman[290979]: 2026-02-28 10:09:39.250708413 +0000 UTC m=+0.032009810 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5353cbb467188b41a538d6dc884c63774e3e2d92080e318eb49b20370d14a5ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.368 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.368 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.369 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.369 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.369 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.371 243456 INFO nova.compute.manager [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Terminating instance#033[00m
Feb 28 05:09:39 np0005634017 podman[290979]: 2026-02-28 10:09:39.37165748 +0000 UTC m=+0.152958807 container init 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.372 243456 DEBUG nova.compute.manager [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:09:39 np0005634017 podman[290979]: 2026-02-28 10:09:39.379005066 +0000 UTC m=+0.160306383 container start 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : New worker (291000) forked
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : Loading success.
Feb 28 05:09:39 np0005634017 kernel: tap3fb210f2-4c (unregistering): left promiscuous mode
Feb 28 05:09:39 np0005634017 NetworkManager[49805]: <info>  [1772273379.4120] device (tap3fb210f2-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:09:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:39Z|00451|binding|INFO|Releasing lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 from this chassis (sb_readonly=0)
Feb 28 05:09:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:39Z|00452|binding|INFO|Setting lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 down in Southbound
Feb 28 05:09:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:39Z|00453|binding|INFO|Removing iface tap3fb210f2-4c ovn-installed in OVS
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.424 243456 DEBUG nova.compute.manager [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.425 243456 DEBUG oslo_concurrency.lockutils [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.425 243456 DEBUG oslo_concurrency.lockutils [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.425 243456 DEBUG oslo_concurrency.lockutils [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.426 243456 DEBUG nova.compute.manager [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Processing event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.428 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.433 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273379.4325452, 163deb6e-49f4-4093-b0c1-98240f93c499 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.433 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.434 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:7e:cf 10.100.0.6'], port_security=['fa:16:3e:be:7e:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8eee8376-acc6-4a01-80c3-d7f0d579f9bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3fb210f2-4c9d-4399-acf6-20e10d93fdd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.443 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.447 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance spawned successfully.#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.447 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:09:39 np0005634017 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Deactivated successfully.
Feb 28 05:09:39 np0005634017 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Consumed 3.739s CPU time.
Feb 28 05:09:39 np0005634017 systemd-machined[209480]: Machine qemu-61-instance-00000037 terminated.
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.476 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.478 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.478 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[144310d8-c7df-404f-96ff-d34cb662e778]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.480 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.485 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.489 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.490 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.491 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.491 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.492 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.492 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.528 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.565 243456 INFO nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 7.07 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.566 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.613 243456 INFO nova.virt.libvirt.driver [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance destroyed successfully.#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.613 243456 DEBUG nova.objects.instance [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : haproxy version is 2.8.14-c23fe91
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : path to executable is /usr/sbin/haproxy
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [WARNING]  (290528) : Exiting Master process...
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [WARNING]  (290528) : Exiting Master process...
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [ALERT]    (290528) : Current worker (290531) exited with code 143 (Terminated)
Feb 28 05:09:39 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [WARNING]  (290528) : All workers exited. Exiting... (0)
Feb 28 05:09:39 np0005634017 systemd[1]: libpod-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116.scope: Deactivated successfully.
Feb 28 05:09:39 np0005634017 podman[291029]: 2026-02-28 10:09:39.627596637 +0000 UTC m=+0.058853543 container died 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.649 243456 DEBUG nova.virt.libvirt.vif [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1458563036',display_name='tempest-DeleteServersTestJSON-server-1458563036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1458563036',id=55,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-8hsyesw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:36Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=8eee8376-acc6-4a01-80c3-d7f0d579f9bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.650 243456 DEBUG nova.network.os_vif_util [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.651 243456 DEBUG nova.network.os_vif_util [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.651 243456 DEBUG os_vif [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.654 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fb210f2-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116-userdata-shm.mount: Deactivated successfully.
Feb 28 05:09:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6a11d72e3c1b7abf544db6e07b97f78e6bb5dfdbed62af2a9efcbccff78207a6-merged.mount: Deactivated successfully.
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.664 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.668 243456 INFO nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 8.20 seconds to build instance.#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.670 243456 INFO os_vif [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c')#033[00m
Feb 28 05:09:39 np0005634017 podman[291029]: 2026-02-28 10:09:39.676164601 +0000 UTC m=+0.107421507 container cleanup 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:09:39 np0005634017 systemd[1]: libpod-conmon-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116.scope: Deactivated successfully.
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.695 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:39 np0005634017 podman[291077]: 2026-02-28 10:09:39.748508953 +0000 UTC m=+0.046484147 container remove 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3c9023-9e39-4a2b-819d-cb0aa4a3597f]: (4, ('Sat Feb 28 10:09:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116)\n3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116\nSat Feb 28 10:09:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116)\n3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94043fe7-787f-46db-8a74-d3877b2fb7c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.758 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:39 np0005634017 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.775 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[118a544d-60ad-48fd-8755-1df9fd3fefcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.797 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac78930-d466-433b-9f2f-34ce0e3f43c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.799 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b564b0dc-e5a8-47f3-a105-48d9c1377357]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93b540df-63c5-4047-929d-e6819c99a8d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487956, 'reachable_time': 38565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291105, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.821 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:09:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.821 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[50490164-e911-4bf1-9648-6fb65cfb2761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb 28 05:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb 28 05:09:39 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.960 243456 INFO nova.virt.libvirt.driver [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deleting instance files /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_del#033[00m
Feb 28 05:09:39 np0005634017 nova_compute[243452]: 2026-02-28 10:09:39.962 243456 INFO nova.virt.libvirt.driver [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deletion of /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_del complete#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.008 243456 INFO nova.compute.manager [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.009 243456 DEBUG oslo.service.loopingcall [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.009 243456 DEBUG nova.compute.manager [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.010 243456 DEBUG nova.network.neutron [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 499 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 21 MiB/s wr, 530 op/s
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.584 243456 DEBUG nova.network.neutron [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.618 243456 INFO nova.compute.manager [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 0.61 seconds to deallocate network for instance.#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.675 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.676 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019003105728686955 of space, bias 1.0, pg target 0.5700931718606087 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00386913443571144 of space, bias 1.0, pg target 1.1607403307134319 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.918254079518781e-07 of space, bias 4.0, pg target 0.0009470231879104462 quantized to 16 (current 16)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:09:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 28 05:09:40 np0005634017 nova_compute[243452]: 2026-02-28 10:09:40.799 243456 DEBUG oslo_concurrency.processutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640933012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.405 243456 DEBUG oslo_concurrency.processutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.423 243456 DEBUG nova.compute.provider_tree [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.433 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.435 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.435 243456 INFO nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Rebooting instance#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.444 243456 DEBUG nova.scheduler.client.report [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.460 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.461 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquired lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.462 243456 DEBUG nova.network.neutron [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.494 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.546 243456 INFO nova.scheduler.client.report [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 8eee8376-acc6-4a01-80c3-d7f0d579f9bb#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.578 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.579 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.580 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.581 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.581 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.581 243456 WARNING nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state rebooting_hard.#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.582 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-unplugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.583 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.583 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.583 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.584 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] No waiting events found dispatching network-vif-unplugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.584 243456 WARNING nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received unexpected event network-vif-unplugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.585 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.585 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.586 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.586 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.586 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] No waiting events found dispatching network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.587 243456 WARNING nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received unexpected event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.587 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-deleted-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.656 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.776 243456 INFO nova.virt.libvirt.driver [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Snapshot image upload complete#033[00m
Feb 28 05:09:41 np0005634017 nova_compute[243452]: 2026-02-28 10:09:41.777 243456 INFO nova.compute.manager [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 5.41 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:09:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 496 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 22 MiB/s wr, 679 op/s
Feb 28 05:09:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.151 243456 DEBUG nova.network.neutron [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.174 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Releasing lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.177 243456 DEBUG nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.297 243456 DEBUG nova.compute.manager [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.343 243456 INFO nova.compute.manager [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] instance snapshotting#033[00m
Feb 28 05:09:43 np0005634017 kernel: tapcf1a075d-08 (unregistering): left promiscuous mode
Feb 28 05:09:43 np0005634017 NetworkManager[49805]: <info>  [1772273383.3542] device (tapcf1a075d-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:43Z|00454|binding|INFO|Releasing lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 from this chassis (sb_readonly=0)
Feb 28 05:09:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:43Z|00455|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 down in Southbound
Feb 28 05:09:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:43Z|00456|binding|INFO|Removing iface tapcf1a075d-08 ovn-installed in OVS
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.388 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.390 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 unbound from our chassis#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.391 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f471a656-3c36-4e5b-a5f2-7df3f97122e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.392 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11167fc5-3cca-46f3-91fd-342da4a19bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.393 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace which is not needed anymore#033[00m
Feb 28 05:09:43 np0005634017 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000038.scope: Deactivated successfully.
Feb 28 05:09:43 np0005634017 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000038.scope: Consumed 4.339s CPU time.
Feb 28 05:09:43 np0005634017 systemd-machined[209480]: Machine qemu-62-instance-00000038 terminated.
Feb 28 05:09:43 np0005634017 kernel: tapcf1a075d-08: entered promiscuous mode
Feb 28 05:09:43 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : haproxy version is 2.8.14-c23fe91
Feb 28 05:09:43 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : path to executable is /usr/sbin/haproxy
Feb 28 05:09:43 np0005634017 kernel: tapcf1a075d-08 (unregistering): left promiscuous mode
Feb 28 05:09:43 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [WARNING]  (290998) : Exiting Master process...
Feb 28 05:09:43 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [WARNING]  (290998) : Exiting Master process...
Feb 28 05:09:43 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [ALERT]    (290998) : Current worker (291000) exited with code 143 (Terminated)
Feb 28 05:09:43 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [WARNING]  (290998) : All workers exited. Exiting... (0)
Feb 28 05:09:43 np0005634017 NetworkManager[49805]: <info>  [1772273383.5371] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Feb 28 05:09:43 np0005634017 systemd[1]: libpod-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a.scope: Deactivated successfully.
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 podman[291155]: 2026-02-28 10:09:43.547147477 +0000 UTC m=+0.055839579 container died 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.557 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance destroyed successfully.#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.558 243456 DEBUG nova.objects.instance [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'resources' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.576 243456 DEBUG nova.virt.libvirt.vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:43Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.576 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.577 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.577 243456 DEBUG os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.580 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf1a075d-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.585 243456 INFO os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')#033[00m
Feb 28 05:09:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a-userdata-shm.mount: Deactivated successfully.
Feb 28 05:09:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5353cbb467188b41a538d6dc884c63774e3e2d92080e318eb49b20370d14a5ad-merged.mount: Deactivated successfully.
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.600 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start _get_guest_xml network_info=[{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.606 243456 WARNING nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.610 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.611 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.614 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.614 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.618 243456 DEBUG nova.objects.instance [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:43 np0005634017 podman[291155]: 2026-02-28 10:09:43.619802088 +0000 UTC m=+0.128494180 container cleanup 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:09:43 np0005634017 systemd[1]: libpod-conmon-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a.scope: Deactivated successfully.
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.649 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.704 243456 INFO nova.virt.libvirt.driver [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Beginning live snapshot process#033[00m
Feb 28 05:09:43 np0005634017 podman[291189]: 2026-02-28 10:09:43.706537074 +0000 UTC m=+0.066591161 container remove 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4279ba-784e-4adb-9857-640f412f260c]: (4, ('Sat Feb 28 10:09:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a)\n3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a\nSat Feb 28 10:09:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a)\n3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a002ec2-8c46-4ce6-835b-4590326e05e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.717 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:43 np0005634017 kernel: tapf471a656-30: left promiscuous mode
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.729 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dd1665-6ab9-4d53-976b-a66319ea58ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b78c12c3-25b4-4fea-952a-445d8df84adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.743 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27f4f603-2837-4dec-bb05-c63917f8910b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b690def9-ba1a-46fc-a7d1-8e609231bc81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488590, 'reachable_time': 30132, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291204, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 systemd[1]: run-netns-ovnmeta\x2df471a656\x2d3c36\x2d4e5b\x2da5f2\x2d7df3f97122e0.mount: Deactivated successfully.
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.761 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:09:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.761 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f538da-ee20-4386-8d26-e7aa68f6f2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:43 np0005634017 nova_compute[243452]: 2026-02-28 10:09:43.858 243456 DEBUG nova.virt.libvirt.imagebackend [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:09:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 480 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 13 MiB/s wr, 541 op/s
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.094 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(1ef1506ee39c470ba24c9c06b3a17f6c) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:09:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918153978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.205 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.234 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.281 243456 DEBUG nova.compute.manager [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.282 243456 DEBUG oslo_concurrency.lockutils [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.283 243456 DEBUG oslo_concurrency.lockutils [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.283 243456 DEBUG oslo_concurrency.lockutils [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.284 243456 DEBUG nova.compute.manager [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.284 243456 WARNING nova.compute.manager [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Feb 28 05:09:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177599203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.788 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.791 243456 DEBUG nova.virt.libvirt.vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:43Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.791 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.793 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.796 243456 DEBUG nova.objects.instance [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'pci_devices' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.813 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <uuid>163deb6e-49f4-4093-b0c1-98240f93c499</uuid>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <name>instance-00000038</name>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:name>tempest-InstanceActionsTestJSON-server-1645315342</nova:name>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:43</nova:creationTime>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:user uuid="1341b7bab4cc4ddca989e12ab7770723">tempest-InstanceActionsTestJSON-1464907638-project-member</nova:user>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:project uuid="13c8391ebb8644dea661a093a38db268">tempest-InstanceActionsTestJSON-1464907638</nova:project>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <nova:port uuid="cf1a075d-084d-4b7f-afd3-5a1d130b7493">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <entry name="serial">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <entry name="uuid">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk.config">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:c1:c5:d4"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <target dev="tapcf1a075d-08"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/console.log" append="off"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <input type="keyboard" bus="usb"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:44 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:44 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:44 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:44 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.815 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.816 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.817 243456 DEBUG nova.virt.libvirt.vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:43Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.818 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.819 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.819 243456 DEBUG os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.820 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.821 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.825 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf1a075d-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.826 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf1a075d-08, col_values=(('external_ids', {'iface-id': 'cf1a075d-084d-4b7f-afd3-5a1d130b7493', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c5:d4', 'vm-uuid': '163deb6e-49f4-4093-b0c1-98240f93c499'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 NetworkManager[49805]: <info>  [1772273384.8294] manager: (tapcf1a075d-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.834 243456 INFO os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')#033[00m
Feb 28 05:09:44 np0005634017 kernel: tapcf1a075d-08: entered promiscuous mode
Feb 28 05:09:44 np0005634017 NetworkManager[49805]: <info>  [1772273384.9295] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Feb 28 05:09:44 np0005634017 systemd-udevd[291133]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:44Z|00457|binding|INFO|Claiming lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 for this chassis.
Feb 28 05:09:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:44Z|00458|binding|INFO|cf1a075d-084d-4b7f-afd3-5a1d130b7493: Claiming fa:16:3e:c1:c5:d4 10.100.0.13
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.942 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '5', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:44 np0005634017 NetworkManager[49805]: <info>  [1772273384.9462] device (tapcf1a075d-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.945 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 bound to our chassis#033[00m
Feb 28 05:09:44 np0005634017 NetworkManager[49805]: <info>  [1772273384.9469] device (tapcf1a075d-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:44Z|00459|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 ovn-installed in OVS
Feb 28 05:09:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:44Z|00460|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 up in Southbound
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.947 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f471a656-3c36-4e5b-a5f2-7df3f97122e0#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 nova_compute[243452]: 2026-02-28 10:09:44.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.963 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e785ffd3-a0cc-447c-a750-025af19ff01f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf471a656-31 in ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.966 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf471a656-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ff4e30-1660-424b-a145-84fa03aa991d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb52cfb1-be9d-49ff-bdf2-d30e4dfc78fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:44 np0005634017 systemd-machined[209480]: New machine qemu-63-instance-00000038.
Feb 28 05:09:44 np0005634017 systemd[1]: Started Virtual Machine qemu-63-instance-00000038.
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.982 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[38762393-76ba-4cce-aa08-1f45b934bfa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.998 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2310e1c8-9f6b-4b31-acca-0fb156220269]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.024 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbe51cc-5ae6-40e9-8e73-bf95552158d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 NetworkManager[49805]: <info>  [1772273385.0314] manager: (tapf471a656-30): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f036d175-4acb-469e-a5ff-d4456739918b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.059 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9106e5ee-f4fd-44da-8a1b-ba3c535e9fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.063 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[263a4817-9289-4bd1-a876-ba88b4f30bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb 28 05:09:45 np0005634017 NetworkManager[49805]: <info>  [1772273385.0815] device (tapf471a656-30): carrier: link connected
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.084 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c32ef933-a76d-4527-9afb-14c79c5d2880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.100 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93b11eac-d902-469f-8c37-633893fec887]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489236, 'reachable_time': 42598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291362, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.112 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[238af499-9b3a-4298-aae1-81cb55e4c388]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:76a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489236, 'tstamp': 489236}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291363, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.129 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9817943f-af04-4e9b-b164-9ec44a560b84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489236, 'reachable_time': 42598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291364, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.158 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea36e70-e001-438d-aebb-85cb1f22727d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.179 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] cloning vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk@1ef1506ee39c470ba24c9c06b3a17f6c to images/b452f76e-84b8-461a-8b95-21b09f41396c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.223 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.222 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3843613-f010-48c6-9ac7-26ed61c27519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.223 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.224 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.225 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.225 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf471a656-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:45 np0005634017 NetworkManager[49805]: <info>  [1772273385.2277] manager: (tapf471a656-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Feb 28 05:09:45 np0005634017 kernel: tapf471a656-30: entered promiscuous mode
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.231 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf471a656-30, col_values=(('external_ids', {'iface-id': '403ee777-cb2a-4f95-bb3a-7e871bc2a2b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.232 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:45Z|00461|binding|INFO|Releasing lport 403ee777-cb2a-4f95-bb3a-7e871bc2a2b0 from this chassis (sb_readonly=0)
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.235 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcd6d95-de52-4e6b-b9aa-97102c064dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.238 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.239 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'env', 'PROCESS_TAG=haproxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f471a656-3c36-4e5b-a5f2-7df3f97122e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.240 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.269 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] flattening images/b452f76e-84b8-461a-8b95-21b09f41396c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.342 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.343 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.351 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.351 243456 INFO nova.compute.claims [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112937870' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:09:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112937870' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.534 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:45 np0005634017 podman[291466]: 2026-02-28 10:09:45.580111234 +0000 UTC m=+0.021532696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:45 np0005634017 podman[291466]: 2026-02-28 10:09:45.800875734 +0000 UTC m=+0.242297166 container create efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 05:09:45 np0005634017 systemd[1]: Started libpod-conmon-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88.scope.
Feb 28 05:09:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b352ab757dfa9cdc7e9943d005cb3aec7dad2b98cce6ff8ebb2b10d663fcb13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.995 243456 DEBUG nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.996 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 163deb6e-49f4-4093-b0c1-98240f93c499 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.996 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273385.9654663, 163deb6e-49f4-4093-b0c1-98240f93c499 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:45 np0005634017 nova_compute[243452]: 2026-02-28 10:09:45.997 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.003 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance rebooted successfully.#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.004 243456 DEBUG nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:46 np0005634017 podman[291466]: 2026-02-28 10:09:46.010005237 +0000 UTC m=+0.451426689 container init efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:09:46 np0005634017 podman[291466]: 2026-02-28 10:09:46.016114899 +0000 UTC m=+0.457536331 container start efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:09:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 455 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 12 MiB/s wr, 596 op/s
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.034 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.037 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:46 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : New worker (291550) forked
Feb 28 05:09:46 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : Loading success.
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.049 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] removing snapshot(1ef1506ee39c470ba24c9c06b3a17f6c) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.074 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.075 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273385.9662154, 163deb6e-49f4-4093-b0c1-98240f93c499 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.075 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.081 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.101 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14418466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb 28 05:09:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.163 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.169 243456 DEBUG nova.compute.provider_tree [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:46 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.190 243456 DEBUG nova.scheduler.client.report [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.204 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(snap) on rbd image(b452f76e-84b8-461a-8b95-21b09f41396c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.253 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.254 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.304 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.304 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.331 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.350 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.465 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.467 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.467 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Creating image(s)#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.488 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.512 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.538 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.543 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.597 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.598 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 WARNING nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 WARNING nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.602 243456 WARNING nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.617 243456 DEBUG nova.policy [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.638 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.639 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.639 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.640 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.664 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:46 np0005634017 nova_compute[243452]: 2026-02-28 10:09:46.668 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.069 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.126 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.343 243456 DEBUG nova.objects.instance [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.365 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.365 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Ensure instance console log exists: /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.366 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.366 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.366 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.538 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.538 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.538 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.541 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.541 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.542 243456 INFO nova.compute.manager [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Terminating instance#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.543 243456 DEBUG nova.compute.manager [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:09:47 np0005634017 kernel: tapcf1a075d-08 (unregistering): left promiscuous mode
Feb 28 05:09:47 np0005634017 NetworkManager[49805]: <info>  [1772273387.5793] device (tapcf1a075d-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.579 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Successfully created port: b74d22f2-fc92-4f30-b403-6cecd975b301 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:09:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:47Z|00462|binding|INFO|Releasing lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 from this chassis (sb_readonly=0)
Feb 28 05:09:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:47Z|00463|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 down in Southbound
Feb 28 05:09:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:47Z|00464|binding|INFO|Removing iface tapcf1a075d-08 ovn-installed in OVS
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.598 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '6', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.599 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 unbound from our chassis#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.602 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f471a656-3c36-4e5b-a5f2-7df3f97122e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.603 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fc4d71-9d01-4b3b-bb3b-e3a2848fbe48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.604 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace which is not needed anymore#033[00m
Feb 28 05:09:47 np0005634017 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Deactivated successfully.
Feb 28 05:09:47 np0005634017 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Consumed 2.185s CPU time.
Feb 28 05:09:47 np0005634017 systemd-machined[209480]: Machine qemu-63-instance-00000038 terminated.
Feb 28 05:09:47 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : haproxy version is 2.8.14-c23fe91
Feb 28 05:09:47 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : path to executable is /usr/sbin/haproxy
Feb 28 05:09:47 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [WARNING]  (291548) : Exiting Master process...
Feb 28 05:09:47 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [ALERT]    (291548) : Current worker (291550) exited with code 143 (Terminated)
Feb 28 05:09:47 np0005634017 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [WARNING]  (291548) : All workers exited. Exiting... (0)
Feb 28 05:09:47 np0005634017 systemd[1]: libpod-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88.scope: Deactivated successfully.
Feb 28 05:09:47 np0005634017 podman[291769]: 2026-02-28 10:09:47.730821866 +0000 UTC m=+0.044078189 container died efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 05:09:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88-userdata-shm.mount: Deactivated successfully.
Feb 28 05:09:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6b352ab757dfa9cdc7e9943d005cb3aec7dad2b98cce6ff8ebb2b10d663fcb13-merged.mount: Deactivated successfully.
Feb 28 05:09:47 np0005634017 podman[291769]: 2026-02-28 10:09:47.760651474 +0000 UTC m=+0.073907797 container cleanup efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:09:47 np0005634017 NetworkManager[49805]: <info>  [1772273387.7629] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.784 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance destroyed successfully.#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.785 243456 DEBUG nova.objects.instance [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'resources' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:47 np0005634017 systemd[1]: libpod-conmon-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88.scope: Deactivated successfully.
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.798 243456 DEBUG nova.virt.libvirt.vif [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:46Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.799 243456 DEBUG nova.network.os_vif_util [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.799 243456 DEBUG nova.network.os_vif_util [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.800 243456 DEBUG os_vif [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.802 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf1a075d-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.810 243456 INFO os_vif [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')#033[00m
Feb 28 05:09:47 np0005634017 podman[291803]: 2026-02-28 10:09:47.823417546 +0000 UTC m=+0.042283538 container remove efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.830 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce23c9d-14ef-4c3b-9ee3-d73c4d6834cd]: (4, ('Sat Feb 28 10:09:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88)\nefdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88\nSat Feb 28 10:09:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88)\nefdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.833 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eadf6622-5967-4ae2-b9e6-79af5d4dc27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.835 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 kernel: tapf471a656-30: left promiscuous mode
Feb 28 05:09:47 np0005634017 nova_compute[243452]: 2026-02-28 10:09:47.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.846 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04f2a9f2-389c-45f5-b6c3-b8a932fb2da9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.862 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d894ec8-2e61-4bb0-ba58-5810306bc235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.865 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74061e2d-0edc-4d16-a97e-28b363b4cf75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89f228d4-849d-4312-a03e-c0792cb1c551]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489230, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291842, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 systemd[1]: run-netns-ovnmeta\x2df471a656\x2d3c36\x2d4e5b\x2da5f2\x2d7df3f97122e0.mount: Deactivated successfully.
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.884 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:09:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.884 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[52943730-c327-46f2-98a8-5c6d280958ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb 28 05:09:47 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb 28 05:09:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 509 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 7.1 MiB/s wr, 375 op/s
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.063 243456 INFO nova.virt.libvirt.driver [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deleting instance files /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499_del#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.063 243456 INFO nova.virt.libvirt.driver [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deletion of /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499_del complete#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.117 243456 INFO nova.compute.manager [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.117 243456 DEBUG oslo.service.loopingcall [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.118 243456 DEBUG nova.compute.manager [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.118 243456 DEBUG nova.network.neutron [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.625 243456 DEBUG nova.network.neutron [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.648 243456 INFO nova.compute.manager [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 0.53 seconds to deallocate network for instance.#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.669 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Successfully updated port: b74d22f2-fc92-4f30-b403-6cecd975b301 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.686 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.687 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.687 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.716 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.717 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.718 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.720 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.720 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.720 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.721 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.721 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.722 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.722 243456 WARNING nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.733 243456 DEBUG nova.compute.manager [req-5a32fd09-67ff-4ef0-ab32-566023f9b3b6 req-e5740772-a121-4ac8-a024-7ad0a8ad96ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-deleted-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.889 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.892 243456 DEBUG oslo_concurrency.processutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.963 243456 INFO nova.virt.libvirt.driver [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Snapshot image upload complete#033[00m
Feb 28 05:09:48 np0005634017 nova_compute[243452]: 2026-02-28 10:09:48.964 243456 INFO nova.compute.manager [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 5.62 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:09:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:09:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041087252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:09:49 np0005634017 nova_compute[243452]: 2026-02-28 10:09:49.468 243456 DEBUG oslo_concurrency.processutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:49 np0005634017 nova_compute[243452]: 2026-02-28 10:09:49.473 243456 DEBUG nova.compute.provider_tree [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:09:49 np0005634017 nova_compute[243452]: 2026-02-28 10:09:49.801 243456 DEBUG nova.scheduler.client.report [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:09:49 np0005634017 nova_compute[243452]: 2026-02-28 10:09:49.824 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:49 np0005634017 nova_compute[243452]: 2026-02-28 10:09:49.861 243456 INFO nova.scheduler.client.report [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Deleted allocations for instance 163deb6e-49f4-4093-b0c1-98240f93c499#033[00m
Feb 28 05:09:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 523 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 11 MiB/s wr, 372 op/s
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.156 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.191 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.192 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance network_info: |[{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.194 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start _get_guest_xml network_info=[{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.199 243456 WARNING nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.204 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.205 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.210 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.211 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.212 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.212 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.213 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.213 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.213 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.215 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.215 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.215 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.218 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449709121' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.774 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.802 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.807 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.878 243456 DEBUG nova.compute.manager [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.879 243456 DEBUG nova.compute.manager [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing instance network info cache due to event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.880 243456 DEBUG oslo_concurrency.lockutils [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.880 243456 DEBUG oslo_concurrency.lockutils [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:09:50 np0005634017 nova_compute[243452]: 2026-02-28 10:09:50.880 243456 DEBUG nova.network.neutron [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:09:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:09:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904316254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.381 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.384 243456 DEBUG nova.virt.libvirt.vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-980989576',display_name='tempest-DeleteServersTestJSON-server-980989576',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-980989576',id=57,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-kebdyvyk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-5
15886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:46Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9633f2a0-e94b-40f8-a1c1-6a9ebf9458be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.384 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.386 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.387 243456 DEBUG nova.objects.instance [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.402 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <uuid>9633f2a0-e94b-40f8-a1c1-6a9ebf9458be</uuid>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <name>instance-00000039</name>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersTestJSON-server-980989576</nova:name>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:09:50</nova:creationTime>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <nova:port uuid="b74d22f2-fc92-4f30-b403-6cecd975b301">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <entry name="serial">9633f2a0-e94b-40f8-a1c1-6a9ebf9458be</entry>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <entry name="uuid">9633f2a0-e94b-40f8-a1c1-6a9ebf9458be</entry>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7f:e6:f5"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <target dev="tapb74d22f2-fc"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/console.log" append="off"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:09:51 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:09:51 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:09:51 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:09:51 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Preparing to wait for external event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.405 243456 DEBUG nova.virt.libvirt.vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-980989576',display_name='tempest-DeleteServersTestJSON-server-980989576',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-980989576',id=57,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-kebdyvyk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServers
TestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:46Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9633f2a0-e94b-40f8-a1c1-6a9ebf9458be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.406 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.407 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.407 243456 DEBUG os_vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.409 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.410 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.414 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb74d22f2-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.414 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb74d22f2-fc, col_values=(('external_ids', {'iface-id': 'b74d22f2-fc92-4f30-b403-6cecd975b301', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:e6:f5', 'vm-uuid': '9633f2a0-e94b-40f8-a1c1-6a9ebf9458be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:51 np0005634017 NetworkManager[49805]: <info>  [1772273391.4176] manager: (tapb74d22f2-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.423 243456 INFO os_vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc')#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.485 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.485 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.486 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:7f:e6:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.486 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Using config drive#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.507 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.908 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Creating config drive at /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config#033[00m
Feb 28 05:09:51 np0005634017 nova_compute[243452]: 2026-02-28 10:09:51.915 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphb96m2be execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 542 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 11 MiB/s wr, 381 op/s
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.069 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphb96m2be" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.109 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.115 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.162 243456 DEBUG nova.network.neutron [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updated VIF entry in instance network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.163 243456 DEBUG nova.network.neutron [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.187 243456 DEBUG oslo_concurrency.lockutils [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.398 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.399 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Deleting local config drive /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config because it was imported into RBD.#033[00m
Feb 28 05:09:52 np0005634017 kernel: tapb74d22f2-fc: entered promiscuous mode
Feb 28 05:09:52 np0005634017 NetworkManager[49805]: <info>  [1772273392.4535] manager: (tapb74d22f2-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:52Z|00465|binding|INFO|Claiming lport b74d22f2-fc92-4f30-b403-6cecd975b301 for this chassis.
Feb 28 05:09:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:52Z|00466|binding|INFO|b74d22f2-fc92-4f30-b403-6cecd975b301: Claiming fa:16:3e:7f:e6:f5 10.100.0.5
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.462 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:e6:f5 10.100.0.5'], port_security=['fa:16:3e:7f:e6:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9633f2a0-e94b-40f8-a1c1-6a9ebf9458be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b74d22f2-fc92-4f30-b403-6cecd975b301) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.464 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b74d22f2-fc92-4f30-b403-6cecd975b301 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.465 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29263ca6-ae44-43ad-9f5b-f5a4c6647435]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.481 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:09:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:52Z|00467|binding|INFO|Setting lport b74d22f2-fc92-4f30-b403-6cecd975b301 up in Southbound
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.839 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[759be36f-2279-4d55-b7be-777d1e9f217e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:52Z|00468|binding|INFO|Setting lport b74d22f2-fc92-4f30-b403-6cecd975b301 ovn-installed in OVS
Feb 28 05:09:52 np0005634017 nova_compute[243452]: 2026-02-28 10:09:52.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a730ac69-c158-41a9-9bc8-84836fc8069c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.861 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a34e20-d603-4f18-8bbe-f2afef37bfbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 systemd-machined[209480]: New machine qemu-64-instance-00000039.
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f90a394e-8f5c-45d1-9c55-6ae4a6cb02eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 systemd[1]: Started Virtual Machine qemu-64-instance-00000039.
Feb 28 05:09:52 np0005634017 systemd-udevd[292006]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:09:52 np0005634017 NetworkManager[49805]: <info>  [1772273392.9070] device (tapb74d22f2-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:09:52 np0005634017 NetworkManager[49805]: <info>  [1772273392.9079] device (tapb74d22f2-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.912 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[382d9a52-7676-4942-83dd-b41e07fcd005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.919 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[880d1bf4-c7b4-4ff6-9a67-5c4940ec973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 NetworkManager[49805]: <info>  [1772273392.9211] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Feb 28 05:09:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.952 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94ef4533-442f-432d-bf03-8f4c29a558d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.956 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[95b28ce9-e991-4db5-9acd-9688f72b8210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:52 np0005634017 NetworkManager[49805]: <info>  [1772273392.9779] device (tap8e92100d-80): carrier: link connected
Feb 28 05:09:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.983 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[03b69734-7e51-4e06-a265-bb183bc9337d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.000 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4bd5e-777f-4387-a18e-67d228302ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490026, 'reachable_time': 26109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292035, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.019 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[064988d8-1249-4141-801c-d1531dce7838]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490026, 'tstamp': 490026}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292036, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[da3cd615-af54-40d4-b494-d72135e72742]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490026, 'reachable_time': 26109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292037, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.082 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7654ae-d336-428a-b221-bd6eaf2c9ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.155 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26ab27d5-760e-4955-a72e-79547f06b3ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.158 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.158 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:53 np0005634017 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:53 np0005634017 NetworkManager[49805]: <info>  [1772273393.1621] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.165 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:53Z|00469|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.168 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f50d9ed2-3299-47d3-be93-9cf0bd27d025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.170 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:09:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.170 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.184 243456 DEBUG nova.compute.manager [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.185 243456 DEBUG oslo_concurrency.lockutils [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.185 243456 DEBUG oslo_concurrency.lockutils [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.186 243456 DEBUG oslo_concurrency.lockutils [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.186 243456 DEBUG nova.compute.manager [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Processing event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.297 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273393.2964723, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.297 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Started (Lifecycle Event)#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.300 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.303 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:09:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:09:53Z|00470|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.310 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance spawned successfully.#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.310 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.315 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.318 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.339 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.345 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.345 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273393.297761, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.345 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.350 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.350 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.351 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.351 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.351 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.352 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.376 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.379 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273393.3024683, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.379 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.402 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.405 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.417 243456 INFO nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Took 6.95 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.418 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.459 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.506 243456 INFO nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Took 8.20 seconds to build instance.#033[00m
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.523 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:53 np0005634017 podman[292111]: 2026-02-28 10:09:53.576128591 +0000 UTC m=+0.057233878 container create 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 05:09:53 np0005634017 systemd[1]: Started libpod-conmon-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6.scope.
Feb 28 05:09:53 np0005634017 podman[292111]: 2026-02-28 10:09:53.541266792 +0000 UTC m=+0.022372119 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:09:53 np0005634017 nova_compute[243452]: 2026-02-28 10:09:53.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:09:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e74bf99b9c75f9c89a25a756913a23bd14227a4210479c5fb175cf03370893/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:09:53 np0005634017 podman[292111]: 2026-02-28 10:09:53.675275696 +0000 UTC m=+0.156381003 container init 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 05:09:53 np0005634017 podman[292111]: 2026-02-28 10:09:53.68146736 +0000 UTC m=+0.162572647 container start 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 05:09:53 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : New worker (292131) forked
Feb 28 05:09:53 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : Loading success.
Feb 28 05:09:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 530 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 8.6 MiB/s wr, 301 op/s
Feb 28 05:09:54 np0005634017 nova_compute[243452]: 2026-02-28 10:09:54.610 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273379.6085157, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:09:54 np0005634017 nova_compute[243452]: 2026-02-28 10:09:54.610 243456 INFO nova.compute.manager [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:09:54 np0005634017 nova_compute[243452]: 2026-02-28 10:09:54.634 243456 DEBUG nova.compute.manager [None req-6fc0830b-eb4e-4bf1-a37f-50f2b5421def - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:09:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:54.934 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:09:54 np0005634017 nova_compute[243452]: 2026-02-28 10:09:54.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:54.935 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:09:55 np0005634017 nova_compute[243452]: 2026-02-28 10:09:55.494 243456 DEBUG nova.compute.manager [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:09:55 np0005634017 nova_compute[243452]: 2026-02-28 10:09:55.494 243456 DEBUG oslo_concurrency.lockutils [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:55 np0005634017 nova_compute[243452]: 2026-02-28 10:09:55.494 243456 DEBUG oslo_concurrency.lockutils [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:55 np0005634017 nova_compute[243452]: 2026-02-28 10:09:55.495 243456 DEBUG oslo_concurrency.lockutils [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:55 np0005634017 nova_compute[243452]: 2026-02-28 10:09:55.495 243456 DEBUG nova.compute.manager [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] No waiting events found dispatching network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:09:55 np0005634017 nova_compute[243452]: 2026-02-28 10:09:55.495 243456 WARNING nova.compute.manager [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received unexpected event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:09:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 530 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 315 op/s
Feb 28 05:09:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb 28 05:09:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb 28 05:09:56 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb 28 05:09:56 np0005634017 nova_compute[243452]: 2026-02-28 10:09:56.353 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:56 np0005634017 nova_compute[243452]: 2026-02-28 10:09:56.354 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:56 np0005634017 nova_compute[243452]: 2026-02-28 10:09:56.354 243456 INFO nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Shelving#033[00m
Feb 28 05:09:56 np0005634017 nova_compute[243452]: 2026-02-28 10:09:56.382 243456 DEBUG nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:09:56 np0005634017 nova_compute[243452]: 2026-02-28 10:09:56.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:56.938 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb 28 05:09:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:57.848 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:09:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:57.849 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:09:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:09:57.849 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb 28 05:09:57 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb 28 05:09:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 497 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 161 op/s
Feb 28 05:09:58 np0005634017 nova_compute[243452]: 2026-02-28 10:09:58.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:09:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb 28 05:09:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb 28 05:09:59 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 446 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 KiB/s wr, 136 op/s
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.080 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "226b6da4-15c9-4d10-ab4d-194b313446f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.082 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.082 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "226b6da4-15c9-4d10-ab4d-194b313446f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.083 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.083 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.085 243456 INFO nova.compute.manager [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Terminating instance#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.086 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "refresh_cache-226b6da4-15c9-4d10-ab4d-194b313446f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.086 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquired lock "refresh_cache-226b6da4-15c9-4d10-ab4d-194b313446f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.087 243456 DEBUG nova.network.neutron [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.331 243456 DEBUG nova.network.neutron [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.589 243456 DEBUG nova.network.neutron [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.605 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Releasing lock "refresh_cache-226b6da4-15c9-4d10-ab4d-194b313446f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:00 np0005634017 nova_compute[243452]: 2026-02-28 10:10:00.607 243456 DEBUG nova.compute.manager [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:10:00 np0005634017 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000036.scope: Deactivated successfully.
Feb 28 05:10:00 np0005634017 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000036.scope: Consumed 13.260s CPU time.
Feb 28 05:10:00 np0005634017 systemd-machined[209480]: Machine qemu-60-instance-00000036 terminated.
Feb 28 05:10:00 np0005634017 podman[292141]: 2026-02-28 10:10:00.882199912 +0000 UTC m=+0.067989011 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 05:10:00 np0005634017 podman[292140]: 2026-02-28 10:10:00.931179637 +0000 UTC m=+0.115641698 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.038 243456 INFO nova.virt.libvirt.driver [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance destroyed successfully.#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.039 243456 DEBUG nova.objects.instance [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'resources' on Instance uuid 226b6da4-15c9-4d10-ab4d-194b313446f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.656 243456 INFO nova.virt.libvirt.driver [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deleting instance files /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9_del#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.657 243456 INFO nova.virt.libvirt.driver [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deletion of /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9_del complete#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.710 243456 INFO nova.compute.manager [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.710 243456 DEBUG oslo.service.loopingcall [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.711 243456 DEBUG nova.compute.manager [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.711 243456 DEBUG nova.network.neutron [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.916 243456 DEBUG nova.network.neutron [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.930 243456 DEBUG nova.network.neutron [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.943 243456 INFO nova.compute.manager [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 0.23 seconds to deallocate network for instance.#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.996 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:01 np0005634017 nova_compute[243452]: 2026-02-28 10:10:01.997 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 383 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.6 KiB/s wr, 211 op/s
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.091 243456 DEBUG oslo_concurrency.processutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3967393191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.666 243456 DEBUG oslo_concurrency.processutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.673 243456 DEBUG nova.compute.provider_tree [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.780 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273387.7784197, 163deb6e-49f4-4093-b0c1-98240f93c499 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.781 243456 INFO nova.compute.manager [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.796 243456 DEBUG nova.scheduler.client.report [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.812 243456 DEBUG nova.compute.manager [None req-d4e3a452-ffd9-4715-bb20-d2fc02dbf462 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.819 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.858 243456 INFO nova.scheduler.client.report [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Deleted allocations for instance 226b6da4-15c9-4d10-ab4d-194b313446f9#033[00m
Feb 28 05:10:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb 28 05:10:02 np0005634017 nova_compute[243452]: 2026-02-28 10:10:02.946 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb 28 05:10:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.332 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "9098ebf3-e36c-492b-9c50-dc6f0078794d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.334 243456 INFO nova.compute.manager [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Terminating instance#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.335 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "refresh_cache-9098ebf3-e36c-492b-9c50-dc6f0078794d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.335 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquired lock "refresh_cache-9098ebf3-e36c-492b-9c50-dc6f0078794d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.336 243456 DEBUG nova.network.neutron [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:03 np0005634017 nova_compute[243452]: 2026-02-28 10:10:03.769 243456 DEBUG nova.network.neutron [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:10:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 333 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.9 KiB/s wr, 130 op/s
Feb 28 05:10:04 np0005634017 nova_compute[243452]: 2026-02-28 10:10:04.101 243456 DEBUG nova.network.neutron [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:04 np0005634017 nova_compute[243452]: 2026-02-28 10:10:04.121 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Releasing lock "refresh_cache-9098ebf3-e36c-492b-9c50-dc6f0078794d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:04 np0005634017 nova_compute[243452]: 2026-02-28 10:10:04.123 243456 DEBUG nova.compute.manager [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:10:04 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 28 05:10:04 np0005634017 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 28 05:10:04 np0005634017 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Consumed 13.461s CPU time.
Feb 28 05:10:04 np0005634017 systemd-machined[209480]: Machine qemu-59-instance-00000035 terminated.
Feb 28 05:10:04 np0005634017 nova_compute[243452]: 2026-02-28 10:10:04.547 243456 INFO nova.virt.libvirt.driver [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance destroyed successfully.#033[00m
Feb 28 05:10:04 np0005634017 nova_compute[243452]: 2026-02-28 10:10:04.548 243456 DEBUG nova.objects.instance [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'resources' on Instance uuid 9098ebf3-e36c-492b-9c50-dc6f0078794d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.469 243456 INFO nova.virt.libvirt.driver [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deleting instance files /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d_del#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.471 243456 INFO nova.virt.libvirt.driver [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deletion of /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d_del complete#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.529 243456 INFO nova.compute.manager [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 1.41 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.530 243456 DEBUG oslo.service.loopingcall [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.531 243456 DEBUG nova.compute.manager [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.531 243456 DEBUG nova.network.neutron [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.979 243456 DEBUG nova.network.neutron [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:10:05 np0005634017 nova_compute[243452]: 2026-02-28 10:10:05.996 243456 DEBUG nova.network.neutron [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.011 243456 INFO nova.compute.manager [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 0.48 seconds to deallocate network for instance.#033[00m
Feb 28 05:10:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 276 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.8 MiB/s wr, 169 op/s
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.098 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.099 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.200 243456 DEBUG oslo_concurrency.processutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.432 243456 DEBUG nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:10:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689457357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.797 243456 DEBUG oslo_concurrency.processutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.805 243456 DEBUG nova.compute.provider_tree [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.833 243456 DEBUG nova.scheduler.client.report [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.865 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.893 243456 INFO nova.scheduler.client.report [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Deleted allocations for instance 9098ebf3-e36c-492b-9c50-dc6f0078794d#033[00m
Feb 28 05:10:06 np0005634017 nova_compute[243452]: 2026-02-28 10:10:06.972 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb 28 05:10:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb 28 05:10:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb 28 05:10:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 3.1 MiB/s wr, 236 op/s
Feb 28 05:10:08 np0005634017 nova_compute[243452]: 2026-02-28 10:10:08.651 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb 28 05:10:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb 28 05:10:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb 28 05:10:09 np0005634017 kernel: tapb74d22f2-fc (unregistering): left promiscuous mode
Feb 28 05:10:09 np0005634017 NetworkManager[49805]: <info>  [1772273409.2295] device (tapb74d22f2-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:09Z|00471|binding|INFO|Releasing lport b74d22f2-fc92-4f30-b403-6cecd975b301 from this chassis (sb_readonly=0)
Feb 28 05:10:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:09Z|00472|binding|INFO|Setting lport b74d22f2-fc92-4f30-b403-6cecd975b301 down in Southbound
Feb 28 05:10:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:09Z|00473|binding|INFO|Removing iface tapb74d22f2-fc ovn-installed in OVS
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.247 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:e6:f5 10.100.0.5'], port_security=['fa:16:3e:7f:e6:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9633f2a0-e94b-40f8-a1c1-6a9ebf9458be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b74d22f2-fc92-4f30-b403-6cecd975b301) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.249 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b74d22f2-fc92-4f30-b403-6cecd975b301 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.251 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.253 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cab45061-ff39-4533-a322-73d3cd2add79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.253 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore#033[00m
Feb 28 05:10:09 np0005634017 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Deactivated successfully.
Feb 28 05:10:09 np0005634017 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Consumed 12.409s CPU time.
Feb 28 05:10:09 np0005634017 systemd-machined[209480]: Machine qemu-64-instance-00000039 terminated.
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.466 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.471 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance destroyed successfully.#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.472 243456 DEBUG nova.objects.instance [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:09 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : haproxy version is 2.8.14-c23fe91
Feb 28 05:10:09 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : path to executable is /usr/sbin/haproxy
Feb 28 05:10:09 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [WARNING]  (292129) : Exiting Master process...
Feb 28 05:10:09 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [ALERT]    (292129) : Current worker (292131) exited with code 143 (Terminated)
Feb 28 05:10:09 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [WARNING]  (292129) : All workers exited. Exiting... (0)
Feb 28 05:10:09 np0005634017 systemd[1]: libpod-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6.scope: Deactivated successfully.
Feb 28 05:10:09 np0005634017 podman[292296]: 2026-02-28 10:10:09.506440442 +0000 UTC m=+0.166301072 container died 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:10:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6-userdata-shm.mount: Deactivated successfully.
Feb 28 05:10:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-92e74bf99b9c75f9c89a25a756913a23bd14227a4210479c5fb175cf03370893-merged.mount: Deactivated successfully.
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.636 243456 DEBUG nova.compute.manager [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-unplugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.637 243456 DEBUG oslo_concurrency.lockutils [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.637 243456 DEBUG oslo_concurrency.lockutils [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.638 243456 DEBUG oslo_concurrency.lockutils [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.638 243456 DEBUG nova.compute.manager [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] No waiting events found dispatching network-vif-unplugged-b74d22f2-fc92-4f30-b403-6cecd975b301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.638 243456 WARNING nova.compute.manager [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received unexpected event network-vif-unplugged-b74d22f2-fc92-4f30-b403-6cecd975b301 for instance with vm_state active and task_state shelving.#033[00m
Feb 28 05:10:09 np0005634017 podman[292296]: 2026-02-28 10:10:09.653375019 +0000 UTC m=+0.313235609 container cleanup 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:10:09 np0005634017 systemd[1]: libpod-conmon-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6.scope: Deactivated successfully.
Feb 28 05:10:09 np0005634017 podman[292336]: 2026-02-28 10:10:09.735299129 +0000 UTC m=+0.058905145 container remove 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cfed5200-4be8-4b7f-a6e5-a31461254449]: (4, ('Sat Feb 28 10:10:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6)\n4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6\nSat Feb 28 10:10:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6)\n4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c67e622f-02d6-4155-9131-1b2acefc4dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.744 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 05:10:09 np0005634017 nova_compute[243452]: 2026-02-28 10:10:09.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[039ea8a4-efc2-4097-9444-13359e21f3bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.770 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4dd583-9bca-44a8-800a-7aaddc280d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4db91db3-61af-4869-b597-bc41f4d16782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[978b1b53-330e-4faa-9603-cd6e2eec38ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490019, 'reachable_time': 25468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292355, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:09 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.788 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:10:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.788 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3da9a8-1887-44b4-9ceb-3acb581aa590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 232 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 639 KiB/s rd, 3.7 MiB/s wr, 191 op/s
Feb 28 05:10:10 np0005634017 nova_compute[243452]: 2026-02-28 10:10:10.128 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Beginning cold snapshot process#033[00m
Feb 28 05:10:10 np0005634017 nova_compute[243452]: 2026-02-28 10:10:10.269 243456 DEBUG nova.virt.libvirt.imagebackend [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:10:10 np0005634017 nova_compute[243452]: 2026-02-28 10:10:10.733 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] creating snapshot(b9e3e9b3926241409c368dfbd5b637e4) on rbd image(9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:10:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb 28 05:10:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb 28 05:10:11 np0005634017 nova_compute[243452]: 2026-02-28 10:10:11.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:11 np0005634017 nova_compute[243452]: 2026-02-28 10:10:11.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:11 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb 28 05:10:11 np0005634017 nova_compute[243452]: 2026-02-28 10:10:11.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:11 np0005634017 nova_compute[243452]: 2026-02-28 10:10:11.548 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] cloning vms/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk@b9e3e9b3926241409c368dfbd5b637e4 to images/342d07d0-ce6e-40be-938e-c99ef1da978f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:10:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 233 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 1.9 MiB/s wr, 195 op/s
Feb 28 05:10:12 np0005634017 nova_compute[243452]: 2026-02-28 10:10:12.266 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] flattening images/342d07d0-ce6e-40be-938e-c99ef1da978f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:10:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.429 243456 DEBUG nova.compute.manager [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.429 243456 DEBUG oslo_concurrency.lockutils [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.430 243456 DEBUG oslo_concurrency.lockutils [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.430 243456 DEBUG oslo_concurrency.lockutils [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.430 243456 DEBUG nova.compute.manager [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] No waiting events found dispatching network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.431 243456 WARNING nova.compute.manager [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received unexpected event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.450 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] removing snapshot(b9e3e9b3926241409c368dfbd5b637e4) on rbd image(9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb 28 05:10:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb 28 05:10:13 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb 28 05:10:13 np0005634017 nova_compute[243452]: 2026-02-28 10:10:13.989 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] creating snapshot(snap) on rbd image(342d07d0-ce6e-40be-938e-c99ef1da978f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:10:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 233 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 257 KiB/s wr, 105 op/s
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.093 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.093 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.121 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.200 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.200 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.207 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.208 243456 INFO nova.compute.claims [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.328 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb 28 05:10:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048594996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.919 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.954 243456 DEBUG nova.compute.provider_tree [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:10:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb 28 05:10:14 np0005634017 nova_compute[243452]: 2026-02-28 10:10:14.973 243456 DEBUG nova.scheduler.client.report [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:10:14 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.002 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.003 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.079 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.080 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.110 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.137 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.269 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.271 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.272 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating image(s)#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.306 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.343 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.380 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.386 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.430 243456 DEBUG nova.policy [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48e14a77ec8842f98a0d2efc6d5e167f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cffbbb9857954b188c363e9565817bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.435 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.436 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.437 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.460 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.461 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.462 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.462 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.497 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.503 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 30a5d845-ce28-490a-afe8-3b7552f02c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.541 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.542 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.542 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.543 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:10:15 np0005634017 nova_compute[243452]: 2026-02-28 10:10:15.543 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.035 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273401.0344145, 226b6da4-15c9-4d10-ab4d-194b313446f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.036 243456 INFO nova.compute.manager [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:10:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 304 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 254 op/s
Feb 28 05:10:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb 28 05:10:16 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.063 243456 DEBUG nova.compute.manager [None req-6da9b441-7083-400a-98b5-941a01633493 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.129 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 30a5d845-ce28-490a-afe8-3b7552f02c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551922288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.210 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.216 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] resizing rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.330 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.331 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.382 243456 DEBUG nova.objects.instance [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.407 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.408 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Ensure instance console log exists: /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.408 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.408 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.409 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.432 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Successfully created port: 037eb744-3024-4a3d-b52c-894abe1cbac8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.547 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.548 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3903MB free_disk=59.94226722791791GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.548 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.549 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30a5d845-ce28-490a-afe8-3b7552f02c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:10:16 np0005634017 nova_compute[243452]: 2026-02-28 10:10:16.673 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.156 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Snapshot image upload complete#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.157 243456 DEBUG nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894001400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.189 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.195 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.212 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.220 243456 INFO nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Shelve offloading#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.231 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance destroyed successfully.#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.233 243456 DEBUG nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.235 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.236 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.238 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.238 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:17 np0005634017 nova_compute[243452]: 2026-02-28 10:10:17.238 243456 DEBUG nova.network.neutron [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:10:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb 28 05:10:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb 28 05:10:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb 28 05:10:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 317 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 12 MiB/s wr, 274 op/s
Feb 28 05:10:18 np0005634017 nova_compute[243452]: 2026-02-28 10:10:18.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.115 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.116 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.116 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.133 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.133 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.342 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Successfully updated port: 037eb744-3024-4a3d-b52c-894abe1cbac8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.365 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.366 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.367 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.455 243456 DEBUG nova.compute.manager [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.455 243456 DEBUG nova.compute.manager [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing instance network info cache due to event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.456 243456 DEBUG oslo_concurrency.lockutils [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.545 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273404.5437534, 9098ebf3-e36c-492b-9c50-dc6f0078794d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.545 243456 INFO nova.compute.manager [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.588 243456 DEBUG nova.compute.manager [None req-405d5950-d170-46b0-a3bc-50acc69f4e5b - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.648 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.801 243456 DEBUG nova.network.neutron [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.817 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.822 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.822 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:10:19 np0005634017 nova_compute[243452]: 2026-02-28 10:10:19.822 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 346 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 11 MiB/s wr, 233 op/s
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.547 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.571 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.572 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance network_info: |[{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.572 243456 DEBUG oslo_concurrency.lockutils [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.573 243456 DEBUG nova.network.neutron [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.577 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start _get_guest_xml network_info=[{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.583 243456 WARNING nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.657 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.658 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.666 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.667 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.668 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.668 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.669 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.670 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.670 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.671 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.671 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.671 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.672 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.672 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.673 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.673 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.678 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.922 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance destroyed successfully.#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.922 243456 DEBUG nova.objects.instance [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.938 243456 DEBUG nova.virt.libvirt.vif [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-980989576',display_name='tempest-DeleteServersTestJSON-server-980989576',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-980989576',id=57,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-kebdyvyk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member',shelved_at='2026-02-28T10:10:17.157658',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='342d07d0-ce6e-40be-938e-c99ef1da978f'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:10:10Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9633f2a0-e94b-40f8-a1c1-6a9ebf9458be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.938 243456 DEBUG nova.network.os_vif_util [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.939 243456 DEBUG nova.network.os_vif_util [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.940 243456 DEBUG os_vif [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.942 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb74d22f2-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:10:21 np0005634017 nova_compute[243452]: 2026-02-28 10:10:21.953 243456 INFO os_vif [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc')#033[00m
Feb 28 05:10:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 358 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.0 MiB/s wr, 180 op/s
Feb 28 05:10:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:10:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/314854447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.242 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.271 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.274 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.312 243456 DEBUG nova.compute.manager [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.312 243456 DEBUG nova.compute.manager [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing instance network info cache due to event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.313 243456 DEBUG oslo_concurrency.lockutils [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.669 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Deleting instance files /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_del#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.670 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Deletion of /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_del complete#033[00m
Feb 28 05:10:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:10:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881119327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.782 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.784 243456 DEBUG nova.virt.libvirt.vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.784 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.785 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.786 243456 DEBUG nova.objects.instance [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.808 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <uuid>30a5d845-ce28-490a-afe8-3b7552f02c63</uuid>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <name>instance-0000003a</name>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestOtherB-server-655402139</nova:name>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:10:21</nova:creationTime>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <nova:port uuid="037eb744-3024-4a3d-b52c-894abe1cbac8">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <entry name="serial">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <entry name="uuid">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:32:d3:6f"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <target dev="tap037eb744-30"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log" append="off"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:10:22 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:10:22 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:10:22 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:10:22 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.810 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Preparing to wait for external event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.811 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.812 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.812 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.813 243456 DEBUG nova.virt.libvirt.vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.814 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.815 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.816 243456 DEBUG os_vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.818 243456 INFO nova.scheduler.client.report [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.823 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.824 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.828 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap037eb744-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.829 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap037eb744-30, col_values=(('external_ids', {'iface-id': '037eb744-3024-4a3d-b52c-894abe1cbac8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:d3:6f', 'vm-uuid': '30a5d845-ce28-490a-afe8-3b7552f02c63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:22 np0005634017 NetworkManager[49805]: <info>  [1772273422.8319] manager: (tap037eb744-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.838 243456 INFO os_vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.876 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.886 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.886 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.903 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.903 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.903 243456 DEBUG oslo_concurrency.lockutils [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.904 243456 DEBUG nova.network.neutron [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.906 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.907 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.907 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.912 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.912 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.913 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:32:d3:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.913 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Using config drive#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.939 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:22 np0005634017 nova_compute[243452]: 2026-02-28 10:10:22.982 243456 DEBUG oslo_concurrency.processutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb 28 05:10:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb 28 05:10:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb 28 05:10:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258300255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:23 np0005634017 nova_compute[243452]: 2026-02-28 10:10:23.532 243456 DEBUG oslo_concurrency.processutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:23 np0005634017 nova_compute[243452]: 2026-02-28 10:10:23.539 243456 DEBUG nova.compute.provider_tree [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:10:23 np0005634017 nova_compute[243452]: 2026-02-28 10:10:23.562 243456 DEBUG nova.scheduler.client.report [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:10:23 np0005634017 nova_compute[243452]: 2026-02-28 10:10:23.596 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:23 np0005634017 nova_compute[243452]: 2026-02-28 10:10:23.656 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 27.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:23 np0005634017 nova_compute[243452]: 2026-02-28 10:10:23.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 318 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 3.1 MiB/s wr, 91 op/s
Feb 28 05:10:24 np0005634017 nova_compute[243452]: 2026-02-28 10:10:24.466 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273409.4650848, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:24 np0005634017 nova_compute[243452]: 2026-02-28 10:10:24.466 243456 INFO nova.compute.manager [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:10:24 np0005634017 nova_compute[243452]: 2026-02-28 10:10:24.490 243456 DEBUG nova.compute.manager [None req-4f03788f-3437-42ac-9e9c-bfccd643ccf4 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:24 np0005634017 nova_compute[243452]: 2026-02-28 10:10:24.876 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating config drive at /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config#033[00m
Feb 28 05:10:24 np0005634017 nova_compute[243452]: 2026-02-28 10:10:24.882 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkv9bib_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.030 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkv9bib_u" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.056 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.059 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.404 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.405 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting local config drive /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config because it was imported into RBD.#033[00m
Feb 28 05:10:25 np0005634017 kernel: tap037eb744-30: entered promiscuous mode
Feb 28 05:10:25 np0005634017 NetworkManager[49805]: <info>  [1772273425.4805] manager: (tap037eb744-30): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.486 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:25Z|00474|binding|INFO|Claiming lport 037eb744-3024-4a3d-b52c-894abe1cbac8 for this chassis.
Feb 28 05:10:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:25Z|00475|binding|INFO|037eb744-3024-4a3d-b52c-894abe1cbac8: Claiming fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.499 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.502 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.505 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.518 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89e7a564-86f3-4d51-99a6-91ab7ed6e029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.521 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41b22e92-d1 in ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:10:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:25Z|00476|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 ovn-installed in OVS
Feb 28 05:10:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:25Z|00477|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 up in Southbound
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.524 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41b22e92-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.524 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc1f4b1-758e-49c6-be7f-417598ddfdc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.526 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f83524de-3d25-45d9-ac23-551841e9b902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 systemd-udevd[292960]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.541 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a039e34b-cb65-4652-b7cb-e01a2178621f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 NetworkManager[49805]: <info>  [1772273425.5449] device (tap037eb744-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:10:25 np0005634017 NetworkManager[49805]: <info>  [1772273425.5460] device (tap037eb744-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:10:25 np0005634017 systemd-machined[209480]: New machine qemu-65-instance-0000003a.
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.556 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a92f508-c053-46af-8ab8-07c024645a2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 systemd[1]: Started Virtual Machine qemu-65-instance-0000003a.
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.592 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00956693-5b22-4fad-a48b-48ce67c573c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 NetworkManager[49805]: <info>  [1772273425.6008] manager: (tap41b22e92-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/221)
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[739a6435-ac58-4eaf-8bc4-7f7cc7e97df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.636 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdc668c-9247-455c-adab-cda09490a6d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.641 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c704648b-0410-4df6-9f04-0e9c4b9d1826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 NetworkManager[49805]: <info>  [1772273425.6621] device (tap41b22e92-d0): carrier: link connected
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.667 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[64fbc81c-58e1-479b-999f-243b4eb10f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.683 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7080ea-4152-4d0c-9ca7-d0e2da9329f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 23633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293007, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89660529-e9df-4bdc-8559-a82fc343bc29]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:1fae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493294, 'tstamp': 493294}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293009, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.713 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b14af751-25ee-4099-a52e-ab6a0041cf44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 23633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293012, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d33879fd-e982-4895-b314-1943775557bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97e2ecbd-70b3-4c9d-b66b-d7b3d551eb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.797 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.798 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.798 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:25 np0005634017 kernel: tap41b22e92-d0: entered promiscuous mode
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 NetworkManager[49805]: <info>  [1772273425.8015] manager: (tap41b22e92-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.803 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:25Z|00478|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.805 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41b22e92-d251-48dd-9bf8-8f38cbd749fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41b22e92-d251-48dd-9bf8-8f38cbd749fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d299c3-d477-48ff-a93d-9d1bc606d62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.806 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/41b22e92-d251-48dd-9bf8-8f38cbd749fa.pid.haproxy
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:10:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.807 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'env', 'PROCESS_TAG=haproxy-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41b22e92-d251-48dd-9bf8-8f38cbd749fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.900 243456 DEBUG nova.compute.manager [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.900 243456 DEBUG oslo_concurrency.lockutils [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.901 243456 DEBUG oslo_concurrency.lockutils [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.901 243456 DEBUG oslo_concurrency.lockutils [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.901 243456 DEBUG nova.compute.manager [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Processing event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:10:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.965 243456 DEBUG nova.network.neutron [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated VIF entry in instance network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.966 243456 DEBUG nova.network.neutron [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:25 np0005634017 nova_compute[243452]: 2026-02-28 10:10:25.994 243456 DEBUG oslo_concurrency.lockutils [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 279 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.138 243456 DEBUG nova.network.neutron [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updated VIF entry in instance network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.140 243456 DEBUG nova.network.neutron [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": null, "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.164 243456 DEBUG oslo_concurrency.lockutils [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:26 np0005634017 podman[293108]: 2026-02-28 10:10:26.190808991 +0000 UTC m=+0.067341121 container create 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 05:10:26 np0005634017 systemd[1]: Started libpod-conmon-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6.scope.
Feb 28 05:10:26 np0005634017 podman[293108]: 2026-02-28 10:10:26.161793066 +0000 UTC m=+0.038325226 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:10:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0f80b5e09dc8495c28b54de22f23404b9f324ad0e098a2a59a53d201acefe9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:26 np0005634017 podman[293108]: 2026-02-28 10:10:26.289163833 +0000 UTC m=+0.165695993 container init 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:10:26 np0005634017 podman[293108]: 2026-02-28 10:10:26.29937308 +0000 UTC m=+0.175905200 container start 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:10:26 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : New worker (293190) forked
Feb 28 05:10:26 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : Loading success.
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.336495933 +0000 UTC m=+0.054244105 container create facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:10:26 np0005634017 systemd[1]: Started libpod-conmon-facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6.scope.
Feb 28 05:10:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.31431841 +0000 UTC m=+0.032066632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.417 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.418 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273426.416621, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Started (Lifecycle Event)#033[00m
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.421412248 +0000 UTC m=+0.139160450 container init facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.422 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.427 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance spawned successfully.#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.427 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.431847701 +0000 UTC m=+0.149595893 container start facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.435611126 +0000 UTC m=+0.153359308 container attach facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:10:26 np0005634017 exciting_benz[293207]: 167 167
Feb 28 05:10:26 np0005634017 systemd[1]: libpod-facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6.scope: Deactivated successfully.
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.442590662 +0000 UTC m=+0.160338834 container died facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.460 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.467 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.468 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.469 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.470 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.472 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-55832f87349a388083cdb220cf182f96094fd417cef162df011da3d575b45b33-merged.mount: Deactivated successfully.
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.473 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.482 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:10:26 np0005634017 podman[293157]: 2026-02-28 10:10:26.495150849 +0000 UTC m=+0.212899021 container remove facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:10:26 np0005634017 systemd[1]: libpod-conmon-facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6.scope: Deactivated successfully.
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.530 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.530 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273426.417748, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.531 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.567 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.571 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273426.421398, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.571 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.579 243456 INFO nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 11.31 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.580 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.627 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.631 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:10:26 np0005634017 podman[293230]: 2026-02-28 10:10:26.645333547 +0000 UTC m=+0.045228202 container create c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.666 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:10:26 np0005634017 systemd[1]: Started libpod-conmon-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope.
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.692 243456 INFO nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 12.52 seconds to build instance.#033[00m
Feb 28 05:10:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:26 np0005634017 podman[293230]: 2026-02-28 10:10:26.625668644 +0000 UTC m=+0.025563339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:10:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:26 np0005634017 nova_compute[243452]: 2026-02-28 10:10:26.733 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:26 np0005634017 podman[293230]: 2026-02-28 10:10:26.768817785 +0000 UTC m=+0.168712540 container init c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:10:26 np0005634017 podman[293230]: 2026-02-28 10:10:26.780308027 +0000 UTC m=+0.180202682 container start c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:10:26 np0005634017 podman[293230]: 2026-02-28 10:10:26.784559587 +0000 UTC m=+0.184454282 container attach c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:10:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb 28 05:10:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb 28 05:10:26 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb 28 05:10:27 np0005634017 pensive_ptolemy[293247]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:10:27 np0005634017 pensive_ptolemy[293247]: --> All data devices are unavailable
Feb 28 05:10:27 np0005634017 systemd[1]: libpod-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope: Deactivated successfully.
Feb 28 05:10:27 np0005634017 conmon[293247]: conmon c531ea07da39221a5a63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope/container/memory.events
Feb 28 05:10:27 np0005634017 podman[293230]: 2026-02-28 10:10:27.301822184 +0000 UTC m=+0.701716839 container died c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:10:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905-merged.mount: Deactivated successfully.
Feb 28 05:10:27 np0005634017 podman[293230]: 2026-02-28 10:10:27.351922521 +0000 UTC m=+0.751817186 container remove c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:10:27 np0005634017 systemd[1]: libpod-conmon-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope: Deactivated successfully.
Feb 28 05:10:27 np0005634017 nova_compute[243452]: 2026-02-28 10:10:27.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:10:27 np0005634017 podman[293342]: 2026-02-28 10:10:27.856265636 +0000 UTC m=+0.059682867 container create 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:10:27 np0005634017 systemd[1]: Started libpod-conmon-1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e.scope.
Feb 28 05:10:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:27 np0005634017 podman[293342]: 2026-02-28 10:10:27.828512706 +0000 UTC m=+0.031929987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:10:27 np0005634017 podman[293342]: 2026-02-28 10:10:27.937845887 +0000 UTC m=+0.141263128 container init 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:10:27 np0005634017 podman[293342]: 2026-02-28 10:10:27.948207278 +0000 UTC m=+0.151624509 container start 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:10:27 np0005634017 podman[293342]: 2026-02-28 10:10:27.952828568 +0000 UTC m=+0.156245859 container attach 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:10:27 np0005634017 zen_noether[293358]: 167 167
Feb 28 05:10:27 np0005634017 systemd[1]: libpod-1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e.scope: Deactivated successfully.
Feb 28 05:10:27 np0005634017 podman[293342]: 2026-02-28 10:10:27.959451204 +0000 UTC m=+0.162868435 container died 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 28 05:10:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0d6d04e1b71e8a5442d850b2d21f6d4afadb866fc6b93a09a9ff7c93382e6c61-merged.mount: Deactivated successfully.
Feb 28 05:10:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:28 np0005634017 podman[293342]: 2026-02-28 10:10:28.002710489 +0000 UTC m=+0.206127710 container remove 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:10:28 np0005634017 systemd[1]: libpod-conmon-1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e.scope: Deactivated successfully.
Feb 28 05:10:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 271 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 425 KiB/s wr, 98 op/s
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.161285752 +0000 UTC m=+0.042367031 container create 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:10:28 np0005634017 systemd[1]: Started libpod-conmon-66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe.scope.
Feb 28 05:10:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.143853273 +0000 UTC m=+0.024934622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.252984238 +0000 UTC m=+0.134065547 container init 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.263721729 +0000 UTC m=+0.144803028 container start 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.268108643 +0000 UTC m=+0.149189942 container attach 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.316 243456 DEBUG nova.compute.manager [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.318 243456 DEBUG oslo_concurrency.lockutils [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.318 243456 DEBUG oslo_concurrency.lockutils [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.318 243456 DEBUG oslo_concurrency.lockutils [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.319 243456 DEBUG nova.compute.manager [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.319 243456 WARNING nova.compute.manager [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state None.
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]: {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:    "0": [
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:        {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "devices": [
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "/dev/loop3"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            ],
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_name": "ceph_lv0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_size": "21470642176",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "name": "ceph_lv0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "tags": {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cluster_name": "ceph",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.crush_device_class": "",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.encrypted": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.objectstore": "bluestore",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osd_id": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.type": "block",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.vdo": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.with_tpm": "0"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            },
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "type": "block",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "vg_name": "ceph_vg0"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:        }
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:    ],
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:    "1": [
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:        {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "devices": [
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "/dev/loop4"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            ],
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_name": "ceph_lv1",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_size": "21470642176",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "name": "ceph_lv1",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "tags": {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cluster_name": "ceph",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.crush_device_class": "",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.encrypted": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.objectstore": "bluestore",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osd_id": "1",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.type": "block",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.vdo": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.with_tpm": "0"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            },
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "type": "block",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "vg_name": "ceph_vg1"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:        }
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:    ],
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:    "2": [
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:        {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "devices": [
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "/dev/loop5"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            ],
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_name": "ceph_lv2",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_size": "21470642176",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "name": "ceph_lv2",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "tags": {
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.cluster_name": "ceph",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.crush_device_class": "",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.encrypted": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.objectstore": "bluestore",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osd_id": "2",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.type": "block",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.vdo": "0",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:                "ceph.with_tpm": "0"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            },
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "type": "block",
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:            "vg_name": "ceph_vg2"
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:        }
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]:    ]
Feb 28 05:10:28 np0005634017 pedantic_nightingale[293400]: }
Feb 28 05:10:28 np0005634017 systemd[1]: libpod-66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe.scope: Deactivated successfully.
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.602963117 +0000 UTC m=+0.484044416 container died 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:10:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72-merged.mount: Deactivated successfully.
Feb 28 05:10:28 np0005634017 podman[293383]: 2026-02-28 10:10:28.651595643 +0000 UTC m=+0.532676932 container remove 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:10:28 np0005634017 nova_compute[243452]: 2026-02-28 10:10:28.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:10:28 np0005634017 systemd[1]: libpod-conmon-66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe.scope: Deactivated successfully.
Feb 28 05:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:10:29
Feb 28 05:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root', 'images', 'vms']
Feb 28 05:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.156173594 +0000 UTC m=+0.053491643 container create 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:10:29 np0005634017 systemd[1]: Started libpod-conmon-7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b.scope.
Feb 28 05:10:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.124511675 +0000 UTC m=+0.021829744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.387771729 +0000 UTC m=+0.285089858 container init 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.396787212 +0000 UTC m=+0.294105251 container start 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:10:29 np0005634017 flamboyant_hofstadter[293500]: 167 167
Feb 28 05:10:29 np0005634017 systemd[1]: libpod-7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b.scope: Deactivated successfully.
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.418415369 +0000 UTC m=+0.315733508 container attach 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.418942534 +0000 UTC m=+0.316260613 container died 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:10:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f147d2fdbb015e61c847655d9eac2c8009ba138afdbbe5168e4b1d3a93063203-merged.mount: Deactivated successfully.
Feb 28 05:10:29 np0005634017 podman[293484]: 2026-02-28 10:10:29.460918833 +0000 UTC m=+0.358236892 container remove 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:10:29 np0005634017 systemd[1]: libpod-conmon-7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b.scope: Deactivated successfully.
Feb 28 05:10:29 np0005634017 podman[293525]: 2026-02-28 10:10:29.61179658 +0000 UTC m=+0.041777294 container create a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:10:29 np0005634017 systemd[1]: Started libpod-conmon-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope.
Feb 28 05:10:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:29 np0005634017 podman[293525]: 2026-02-28 10:10:29.591457729 +0000 UTC m=+0.021438463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:10:29 np0005634017 podman[293525]: 2026-02-28 10:10:29.699871594 +0000 UTC m=+0.129852338 container init a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:10:29 np0005634017 podman[293525]: 2026-02-28 10:10:29.707754485 +0000 UTC m=+0.137735239 container start a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:10:29 np0005634017 podman[293525]: 2026-02-28 10:10:29.712794846 +0000 UTC m=+0.142775580 container attach a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 245 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 24 KiB/s wr, 121 op/s
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:10:30 np0005634017 lvm[293622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:10:30 np0005634017 lvm[293622]: VG ceph_vg0 finished
Feb 28 05:10:30 np0005634017 lvm[293623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:10:30 np0005634017 lvm[293623]: VG ceph_vg1 finished
Feb 28 05:10:30 np0005634017 lvm[293625]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:10:30 np0005634017 lvm[293625]: VG ceph_vg2 finished
Feb 28 05:10:30 np0005634017 zen_gauss[293543]: {}
Feb 28 05:10:30 np0005634017 systemd[1]: libpod-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope: Deactivated successfully.
Feb 28 05:10:30 np0005634017 systemd[1]: libpod-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope: Consumed 1.248s CPU time.
Feb 28 05:10:30 np0005634017 podman[293525]: 2026-02-28 10:10:30.566792311 +0000 UTC m=+0.996773085 container died a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:10:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9-merged.mount: Deactivated successfully.
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:10:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:10:30 np0005634017 podman[293525]: 2026-02-28 10:10:30.618684408 +0000 UTC m=+1.048665122 container remove a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:10:30 np0005634017 systemd[1]: libpod-conmon-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope: Deactivated successfully.
Feb 28 05:10:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:10:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:10:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:10:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:10:30 np0005634017 NetworkManager[49805]: <info>  [1772273430.9433] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Feb 28 05:10:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:30Z|00479|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:10:30 np0005634017 NetworkManager[49805]: <info>  [1772273430.9443] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Feb 28 05:10:30 np0005634017 nova_compute[243452]: 2026-02-28 10:10:30.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:30Z|00480|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:10:30 np0005634017 nova_compute[243452]: 2026-02-28 10:10:30.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:30 np0005634017 nova_compute[243452]: 2026-02-28 10:10:30.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:10:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:10:31 np0005634017 podman[293666]: 2026-02-28 10:10:31.126328095 +0000 UTC m=+0.063255887 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 05:10:31 np0005634017 podman[293665]: 2026-02-28 10:10:31.162324766 +0000 UTC m=+0.099655230 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:10:31 np0005634017 nova_compute[243452]: 2026-02-28 10:10:31.482 243456 DEBUG nova.compute.manager [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:31 np0005634017 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG nova.compute.manager [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing instance network info cache due to event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:10:31 np0005634017 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG oslo_concurrency.lockutils [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:31 np0005634017 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG oslo_concurrency.lockutils [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:31 np0005634017 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG nova.network.neutron [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:10:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 200 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 22 KiB/s wr, 144 op/s
Feb 28 05:10:32 np0005634017 nova_compute[243452]: 2026-02-28 10:10:32.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb 28 05:10:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb 28 05:10:33 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.713 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.714 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.738 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.836 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.837 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.851 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:10:33 np0005634017 nova_compute[243452]: 2026-02-28 10:10:33.852 243456 INFO nova.compute.claims [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:10:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 200 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 147 op/s
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.256 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.354 243456 DEBUG nova.network.neutron [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated VIF entry in instance network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.355 243456 DEBUG nova.network.neutron [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.374 243456 DEBUG oslo_concurrency.lockutils [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:10:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224320205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.839 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.845 243456 DEBUG nova.compute.provider_tree [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.863 243456 DEBUG nova.scheduler.client.report [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.884 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.885 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.930 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.931 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.956 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:10:34 np0005634017 nova_compute[243452]: 2026-02-28 10:10:34.978 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.075 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.077 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.077 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Creating image(s)#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.101 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.130 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.154 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.158 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.229 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.231 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.232 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.232 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.256 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.261 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.482 243456 DEBUG nova.policy [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.562 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.634 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.742 243456 DEBUG nova.objects.instance [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.759 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.759 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Ensure instance console log exists: /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.760 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.760 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:35 np0005634017 nova_compute[243452]: 2026-02-28 10:10:35.761 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 210 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 298 KiB/s wr, 130 op/s
Feb 28 05:10:37 np0005634017 nova_compute[243452]: 2026-02-28 10:10:37.175 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Successfully created port: 188d9948-e6ef-4c09-a2ff-5b07d0f93779 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:10:37 np0005634017 nova_compute[243452]: 2026-02-28 10:10:37.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 243 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.465 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Successfully updated port: 188d9948-e6ef-4c09-a2ff-5b07d0f93779 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.490 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.491 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.492 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.609 243456 DEBUG nova.compute.manager [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-changed-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.609 243456 DEBUG nova.compute.manager [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Refreshing instance network info cache due to event network-changed-188d9948-e6ef-4c09-a2ff-5b07d0f93779. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.609 243456 DEBUG oslo_concurrency.lockutils [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:38 np0005634017 nova_compute[243452]: 2026-02-28 10:10:38.717 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:10:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:39Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 05:10:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:39Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.873 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updating instance_info_cache with network_info: [{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.899 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.900 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance network_info: |[{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.900 243456 DEBUG oslo_concurrency.lockutils [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.900 243456 DEBUG nova.network.neutron [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Refreshing network info cache for port 188d9948-e6ef-4c09-a2ff-5b07d0f93779 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.904 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start _get_guest_xml network_info=[{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.909 243456 WARNING nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.921 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.921 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.927 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.928 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.928 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.931 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.931 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.931 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:10:39 np0005634017 nova_compute[243452]: 2026-02-28 10:10:39.934 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 266 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 115 op/s
Feb 28 05:10:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:10:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217312333' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:10:40 np0005634017 nova_compute[243452]: 2026-02-28 10:10:40.523 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:40 np0005634017 nova_compute[243452]: 2026-02-28 10:10:40.545 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:40 np0005634017 nova_compute[243452]: 2026-02-28 10:10:40.550 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010363606648371541 of space, bias 1.0, pg target 0.3109081994511462 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024934987145615343 of space, bias 1.0, pg target 0.7480496143684603 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.894500714536989e-07 of space, bias 4.0, pg target 0.0009473400857444386 quantized to 16 (current 16)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:10:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:10:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:10:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41945769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.111 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.113 243456 DEBUG nova.virt.libvirt.vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-389107082',display_name='tempest-DeleteServersTestJSON-server-389107082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-389107082',id=59,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-p13pmd0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-5
15886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:35Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.114 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.115 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.117 243456 DEBUG nova.objects.instance [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.176 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <uuid>9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab</uuid>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <name>instance-0000003b</name>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersTestJSON-server-389107082</nova:name>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:10:39</nova:creationTime>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <nova:port uuid="188d9948-e6ef-4c09-a2ff-5b07d0f93779">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <entry name="serial">9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab</entry>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <entry name="uuid">9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab</entry>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:36:78:c1"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <target dev="tap188d9948-e6"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/console.log" append="off"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:10:41 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:10:41 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:10:41 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:10:41 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.178 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Preparing to wait for external event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.178 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.179 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.179 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.180 243456 DEBUG nova.virt.libvirt.vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-389107082',display_name='tempest-DeleteServersTestJSON-server-389107082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-389107082',id=59,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-p13pmd0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServers
TestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:35Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.181 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.181 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.182 243456 DEBUG os_vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.183 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.183 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.188 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap188d9948-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.188 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap188d9948-e6, col_values=(('external_ids', {'iface-id': '188d9948-e6ef-4c09-a2ff-5b07d0f93779', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:78:c1', 'vm-uuid': '9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:41 np0005634017 NetworkManager[49805]: <info>  [1772273441.1913] manager: (tap188d9948-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.199 243456 INFO os_vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6')#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.252 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.253 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.253 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:36:78:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.254 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Using config drive#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.277 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.936 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Creating config drive at /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config#033[00m
Feb 28 05:10:41 np0005634017 nova_compute[243452]: 2026-02-28 10:10:41.943 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjwcx03v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 279 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 4.7 MiB/s wr, 107 op/s
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.091 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjwcx03v" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.128 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.133 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.257 243456 DEBUG nova.network.neutron [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updated VIF entry in instance network info cache for port 188d9948-e6ef-4c09-a2ff-5b07d0f93779. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.258 243456 DEBUG nova.network.neutron [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updating instance_info_cache with network_info: [{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.274 243456 DEBUG oslo_concurrency.lockutils [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.293 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.294 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deleting local config drive /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config because it was imported into RBD.#033[00m
Feb 28 05:10:42 np0005634017 kernel: tap188d9948-e6: entered promiscuous mode
Feb 28 05:10:42 np0005634017 NetworkManager[49805]: <info>  [1772273442.3456] manager: (tap188d9948-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Feb 28 05:10:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:42Z|00481|binding|INFO|Claiming lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 for this chassis.
Feb 28 05:10:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:42Z|00482|binding|INFO|188d9948-e6ef-4c09-a2ff-5b07d0f93779: Claiming fa:16:3e:36:78:c1 10.100.0.10
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:42Z|00483|binding|INFO|Setting lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 ovn-installed in OVS
Feb 28 05:10:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:42Z|00484|binding|INFO|Setting lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 up in Southbound
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.357 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:78:c1 10.100.0.10'], port_security=['fa:16:3e:36:78:c1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=188d9948-e6ef-4c09-a2ff-5b07d0f93779) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.360 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.360 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 188d9948-e6ef-4c09-a2ff-5b07d0f93779 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.362 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.376 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ee2a54-c667-461f-a37c-8f07cc6738c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.377 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.380 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2115bdb4-6eb7-4c08-8da8-5774689b0a61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.381 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a84ef553-cb13-4243-b154-14577aabe715]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 systemd-machined[209480]: New machine qemu-66-instance-0000003b.
Feb 28 05:10:42 np0005634017 systemd-udevd[294038]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:10:42 np0005634017 NetworkManager[49805]: <info>  [1772273442.3965] device (tap188d9948-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:10:42 np0005634017 NetworkManager[49805]: <info>  [1772273442.3972] device (tap188d9948-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.396 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[95476e4a-d88d-4af2-945b-9d6496247a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 systemd[1]: Started Virtual Machine qemu-66-instance-0000003b.
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34c4444f-c7e8-491e-af3b-99e0b95bc768]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.454 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7c72b6-3a9d-4881-abd4-3cca3fd4fe89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 NetworkManager[49805]: <info>  [1772273442.4625] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.463 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd759ce-8879-49d3-be5f-79482f461feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.496 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c51d37a5-c976-447b-8ec2-8f9a67ff4aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.499 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[241a126d-adc9-4c8c-af08-89d9f47f00ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 NetworkManager[49805]: <info>  [1772273442.5282] device (tap8e92100d-80): carrier: link connected
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.536 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb475c7-ae4a-4194-b168-c8d1a22b8ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.557 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8634bd9-f85f-4b06-86b5-08b3ed8eb56a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494981, 'reachable_time': 27357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294070, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.574 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88f53669-0d1a-4cc0-898e-8eb7b1812aea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494981, 'tstamp': 494981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294071, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3204a05-db1d-49d6-a8b4-afe1eecfdf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494981, 'reachable_time': 27357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294072, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.617 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4593e698-3fe6-4d66-846e-cc7d0e20efd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9eab880f-e357-47bf-a4d6-b7e3bbb1461b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.675 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.675 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 05:10:42 np0005634017 NetworkManager[49805]: <info>  [1772273442.6781] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.688 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:42Z|00485|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.693 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f566d028-8e4c-406e-951b-832b774f8057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.695 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:10:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.695 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.721 243456 DEBUG nova.compute.manager [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG oslo_concurrency.lockutils [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG oslo_concurrency.lockutils [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG oslo_concurrency.lockutils [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:42 np0005634017 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG nova.compute.manager [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Processing event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:10:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:43 np0005634017 podman[294104]: 2026-02-28 10:10:43.10798511 +0000 UTC m=+0.061496089 container create 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:10:43 np0005634017 systemd[1]: Started libpod-conmon-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope.
Feb 28 05:10:43 np0005634017 podman[294104]: 2026-02-28 10:10:43.074999123 +0000 UTC m=+0.028510152 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:10:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:10:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7eda6696ef10c33dbfdcc6c094076f11bfa23cb58683d0b70ea823785b7457/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:10:43 np0005634017 podman[294104]: 2026-02-28 10:10:43.198336587 +0000 UTC m=+0.151847546 container init 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:10:43 np0005634017 podman[294104]: 2026-02-28 10:10:43.204463639 +0000 UTC m=+0.157974608 container start 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:10:43 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : New worker (294166) forked
Feb 28 05:10:43 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : Loading success.
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.275 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.276 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273443.2749145, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.276 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Started (Lifecycle Event)#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.280 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.284 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance spawned successfully.#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.285 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.309 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.315 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.318 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.319 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.319 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.319 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.320 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.320 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.366 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.366 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273443.2791846, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.367 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.393 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.397 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273443.2797794, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.397 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.401 243456 INFO nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 8.33 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.401 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.430 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.433 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.458 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.474 243456 INFO nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 9.67 seconds to build instance.#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.522 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:43 np0005634017 nova_compute[243452]: 2026-02-28 10:10:43.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 279 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 4.2 MiB/s wr, 102 op/s
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.830 243456 DEBUG nova.compute.manager [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG oslo_concurrency.lockutils [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG oslo_concurrency.lockutils [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG oslo_concurrency.lockutils [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG nova.compute.manager [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] No waiting events found dispatching network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.832 243456 WARNING nova.compute.manager [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received unexpected event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.859 243456 DEBUG oslo_concurrency.lockutils [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.860 243456 DEBUG oslo_concurrency.lockutils [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.860 243456 DEBUG nova.compute.manager [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.864 243456 DEBUG nova.compute.manager [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.865 243456 DEBUG nova.objects.instance [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:44 np0005634017 nova_compute[243452]: 2026-02-28 10:10:44.903 243456 DEBUG nova.virt.libvirt.driver [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:10:45 np0005634017 nova_compute[243452]: 2026-02-28 10:10:45.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1450302044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1450302044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:10:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Feb 28 05:10:46 np0005634017 nova_compute[243452]: 2026-02-28 10:10:46.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 164 op/s
Feb 28 05:10:48 np0005634017 nova_compute[243452]: 2026-02-28 10:10:48.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:49 np0005634017 nova_compute[243452]: 2026-02-28 10:10:49.057 243456 DEBUG nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:49 np0005634017 nova_compute[243452]: 2026-02-28 10:10:49.102 243456 INFO nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] instance snapshotting#033[00m
Feb 28 05:10:49 np0005634017 nova_compute[243452]: 2026-02-28 10:10:49.103 243456 DEBUG nova.objects.instance [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:49 np0005634017 nova_compute[243452]: 2026-02-28 10:10:49.364 243456 INFO nova.virt.libvirt.driver [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning live snapshot process#033[00m
Feb 28 05:10:49 np0005634017 nova_compute[243452]: 2026-02-28 10:10:49.485 243456 DEBUG nova.virt.libvirt.imagebackend [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:10:49 np0005634017 nova_compute[243452]: 2026-02-28 10:10:49.730 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(7f8bac8210f6497cb3b3fd7287b74f64) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:10:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 145 op/s
Feb 28 05:10:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb 28 05:10:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb 28 05:10:50 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb 28 05:10:50 np0005634017 nova_compute[243452]: 2026-02-28 10:10:50.183 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@7f8bac8210f6497cb3b3fd7287b74f64 to images/566c962b-ab07-4ea4-8c4a-daa71c23c042 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:10:50 np0005634017 nova_compute[243452]: 2026-02-28 10:10:50.274 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/566c962b-ab07-4ea4-8c4a-daa71c23c042 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:10:50 np0005634017 nova_compute[243452]: 2026-02-28 10:10:50.663 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(7f8bac8210f6497cb3b3fd7287b74f64) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:10:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb 28 05:10:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb 28 05:10:51 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb 28 05:10:51 np0005634017 nova_compute[243452]: 2026-02-28 10:10:51.193 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(566c962b-ab07-4ea4-8c4a-daa71c23c042) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:10:51 np0005634017 nova_compute[243452]: 2026-02-28 10:10:51.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 291 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 143 op/s
Feb 28 05:10:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb 28 05:10:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb 28 05:10:52 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.194892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453194936, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2310, "num_deletes": 263, "total_data_size": 3420643, "memory_usage": 3474160, "flush_reason": "Manual Compaction"}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453219614, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3344023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25895, "largest_seqno": 28204, "table_properties": {"data_size": 3333210, "index_size": 7051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22482, "raw_average_key_size": 21, "raw_value_size": 3311568, "raw_average_value_size": 3106, "num_data_blocks": 307, "num_entries": 1066, "num_filter_entries": 1066, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273283, "oldest_key_time": 1772273283, "file_creation_time": 1772273453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24809 microseconds, and 6947 cpu microseconds.
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.219690) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3344023 bytes OK
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.219726) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.221580) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.221610) EVENT_LOG_v1 {"time_micros": 1772273453221602, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.221647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3410774, prev total WAL file size 3410774, number of live WAL files 2.
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.223864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3265KB)], [59(6902KB)]
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453223929, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10411853, "oldest_snapshot_seqno": -1}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5429 keys, 8757292 bytes, temperature: kUnknown
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453322645, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8757292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8719381, "index_size": 23209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 134990, "raw_average_key_size": 24, "raw_value_size": 8620268, "raw_average_value_size": 1587, "num_data_blocks": 951, "num_entries": 5429, "num_filter_entries": 5429, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.323102) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8757292 bytes
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.330702) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.3 rd, 88.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.7 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5961, records dropped: 532 output_compression: NoCompression
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.330756) EVENT_LOG_v1 {"time_micros": 1772273453330739, "job": 32, "event": "compaction_finished", "compaction_time_micros": 98834, "compaction_time_cpu_micros": 19499, "output_level": 6, "num_output_files": 1, "total_output_size": 8757292, "num_input_records": 5961, "num_output_records": 5429, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453331962, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453333100, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.223676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:10:53 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:10:53 np0005634017 nova_compute[243452]: 2026-02-28 10:10:53.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:53 np0005634017 nova_compute[243452]: 2026-02-28 10:10:53.971 243456 INFO nova.virt.libvirt.driver [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete#033[00m
Feb 28 05:10:53 np0005634017 nova_compute[243452]: 2026-02-28 10:10:53.971 243456 INFO nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 4.85 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:10:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 311 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 54 op/s
Feb 28 05:10:54 np0005634017 nova_compute[243452]: 2026-02-28 10:10:54.291 243456 DEBUG nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 28 05:10:54 np0005634017 nova_compute[243452]: 2026-02-28 10:10:54.950 243456 DEBUG nova.virt.libvirt.driver [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:10:56 np0005634017 nova_compute[243452]: 2026-02-28 10:10:56.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:56.047 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:10:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:56.048 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:10:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 381 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 11 MiB/s wr, 259 op/s
Feb 28 05:10:56 np0005634017 nova_compute[243452]: 2026-02-28 10:10:56.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:57 np0005634017 kernel: tap188d9948-e6 (unregistering): left promiscuous mode
Feb 28 05:10:57 np0005634017 NetworkManager[49805]: <info>  [1772273457.2556] device (tap188d9948-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:10:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:57Z|00486|binding|INFO|Releasing lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 from this chassis (sb_readonly=0)
Feb 28 05:10:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:57Z|00487|binding|INFO|Setting lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 down in Southbound
Feb 28 05:10:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:10:57Z|00488|binding|INFO|Removing iface tap188d9948-e6 ovn-installed in OVS
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.275 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.278 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:78:c1 10.100.0.10'], port_security=['fa:16:3e:36:78:c1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=188d9948-e6ef-4c09-a2ff-5b07d0f93779) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.279 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 188d9948-e6ef-4c09-a2ff-5b07d0f93779 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.281 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[569bccc8-fc15-42db-a107-634a59970946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.282 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore#033[00m
Feb 28 05:10:57 np0005634017 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Feb 28 05:10:57 np0005634017 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Consumed 12.248s CPU time.
Feb 28 05:10:57 np0005634017 systemd-machined[209480]: Machine qemu-66-instance-0000003b terminated.
Feb 28 05:10:57 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : haproxy version is 2.8.14-c23fe91
Feb 28 05:10:57 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : path to executable is /usr/sbin/haproxy
Feb 28 05:10:57 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [WARNING]  (294164) : Exiting Master process...
Feb 28 05:10:57 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [ALERT]    (294164) : Current worker (294166) exited with code 143 (Terminated)
Feb 28 05:10:57 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [WARNING]  (294164) : All workers exited. Exiting... (0)
Feb 28 05:10:57 np0005634017 systemd[1]: libpod-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope: Deactivated successfully.
Feb 28 05:10:57 np0005634017 conmon[294144]: conmon 65cb934f7051612ccce1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope/container/memory.events
Feb 28 05:10:57 np0005634017 podman[294340]: 2026-02-28 10:10:57.415422584 +0000 UTC m=+0.051195309 container died 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:10:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7-userdata-shm.mount: Deactivated successfully.
Feb 28 05:10:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3e7eda6696ef10c33dbfdcc6c094076f11bfa23cb58683d0b70ea823785b7457-merged.mount: Deactivated successfully.
Feb 28 05:10:57 np0005634017 podman[294340]: 2026-02-28 10:10:57.454014248 +0000 UTC m=+0.089786973 container cleanup 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:10:57 np0005634017 systemd[1]: libpod-conmon-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope: Deactivated successfully.
Feb 28 05:10:57 np0005634017 podman[294369]: 2026-02-28 10:10:57.524140767 +0000 UTC m=+0.049200503 container remove 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2319094f-789b-4eaa-95ba-cec7eadcfc5d]: (4, ('Sat Feb 28 10:10:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7)\n65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7\nSat Feb 28 10:10:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7)\n65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.532 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91bd963c-27ef-41b7-81b5-2b5ec546fc0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.533 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.535 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:57 np0005634017 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.549 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76d7615d-ea33-4e77-897e-7d2d836d4c76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.567 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[299a1d6b-5bb9-44a9-ba6a-96c8218222ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.569 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec40e64d-2e93-4f8a-b4d2-c88be4941441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.590 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[892d3980-98f9-4fc2-bbd8-c02b8ccf9594]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494973, 'reachable_time': 36485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294398, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.593 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.593 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[058e3bec-23ea-498a-8da1-b103af2baca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:10:57 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.849 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.850 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.851 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.969 243456 INFO nova.virt.libvirt.driver [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.975 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance destroyed successfully.#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.975 243456 DEBUG nova.objects.instance [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:10:57 np0005634017 nova_compute[243452]: 2026-02-28 10:10:57.990 243456 DEBUG nova.compute.manager [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:10:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:10:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb 28 05:10:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb 28 05:10:58 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.042 243456 DEBUG oslo_concurrency.lockutils [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 388 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.1 MiB/s wr, 206 op/s
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.279 243456 DEBUG nova.compute.manager [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-unplugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG oslo_concurrency.lockutils [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG oslo_concurrency.lockutils [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG oslo_concurrency.lockutils [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG nova.compute.manager [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] No waiting events found dispatching network-vif-unplugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 WARNING nova.compute.manager [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received unexpected event network-vif-unplugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:10:58 np0005634017 nova_compute[243452]: 2026-02-28 10:10:58.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 7.9 MiB/s wr, 192 op/s
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.523 243456 DEBUG nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.846 243456 DEBUG nova.compute.manager [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.847 243456 DEBUG oslo_concurrency.lockutils [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.847 243456 DEBUG oslo_concurrency.lockutils [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.847 243456 DEBUG oslo_concurrency.lockutils [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.848 243456 DEBUG nova.compute.manager [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] No waiting events found dispatching network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.848 243456 WARNING nova.compute.manager [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received unexpected event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 for instance with vm_state stopped and task_state deleting.#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.876 243456 INFO nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] instance snapshotting#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.877 243456 DEBUG nova.objects.instance [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.879 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.879 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.879 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.880 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.880 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.881 243456 INFO nova.compute.manager [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Terminating instance#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.882 243456 DEBUG nova.compute.manager [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.889 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance destroyed successfully.#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.890 243456 DEBUG nova.objects.instance [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.908 243456 DEBUG nova.virt.libvirt.vif [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:10:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-389107082',display_name='tempest-DeleteServersTestJSON-server-389107082',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-389107082',id=59,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-p13pmd0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:10:58Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.909 243456 DEBUG nova.network.os_vif_util [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.910 243456 DEBUG nova.network.os_vif_util [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.910 243456 DEBUG os_vif [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.914 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap188d9948-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.917 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:00 np0005634017 nova_compute[243452]: 2026-02-28 10:11:00.924 243456 INFO os_vif [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6')#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.179 243456 INFO nova.virt.libvirt.driver [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deleting instance files /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_del#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.180 243456 INFO nova.virt.libvirt.driver [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deletion of /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_del complete#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.226 243456 INFO nova.compute.manager [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 0.34 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.227 243456 DEBUG oslo.service.loopingcall [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.227 243456 DEBUG nova.compute.manager [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.227 243456 DEBUG nova.network.neutron [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.295 243456 INFO nova.virt.libvirt.driver [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning live snapshot process#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.457 243456 DEBUG nova.virt.libvirt.imagebackend [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.876 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(0d10d9664e754370883f1e992dc7293a) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.918 243456 DEBUG nova.network.neutron [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.933 243456 INFO nova.compute.manager [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 0.71 seconds to deallocate network for instance.#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.955 243456 DEBUG nova.compute.manager [req-9b016cf7-9658-4a26-a1c8-ae13cd52b86e req-bb34fbc4-d4ef-4f65-85af-197a59412670 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-deleted-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.972 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:01 np0005634017 nova_compute[243452]: 2026-02-28 10:11:01.973 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.046 243456 DEBUG oslo_concurrency.processutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.4 MiB/s wr, 156 op/s
Feb 28 05:11:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb 28 05:11:02 np0005634017 podman[294470]: 2026-02-28 10:11:02.124094104 +0000 UTC m=+0.056397904 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true)
Feb 28 05:11:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb 28 05:11:02 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.173 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@0d10d9664e754370883f1e992dc7293a to images/31c8918a-4cbd-4459-b04d-e55d79f71575 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:11:02 np0005634017 podman[294469]: 2026-02-28 10:11:02.175023865 +0000 UTC m=+0.105630488 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.267 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/31c8918a-4cbd-4459-b04d-e55d79f71575 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.638 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(0d10d9664e754370883f1e992dc7293a) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:11:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268796098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.680 243456 DEBUG oslo_concurrency.processutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.686 243456 DEBUG nova.compute.provider_tree [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.707 243456 DEBUG nova.scheduler.client.report [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.732 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.765 243456 INFO nova.scheduler.client.report [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab#033[00m
Feb 28 05:11:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:02Z|00489|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:02 np0005634017 nova_compute[243452]: 2026-02-28 10:11:02.840 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb 28 05:11:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb 28 05:11:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb 28 05:11:03 np0005634017 nova_compute[243452]: 2026-02-28 10:11:03.179 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(31c8918a-4cbd-4459-b04d-e55d79f71575) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:11:03 np0005634017 nova_compute[243452]: 2026-02-28 10:11:03.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 390 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 67 op/s
Feb 28 05:11:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb 28 05:11:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb 28 05:11:04 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb 28 05:11:05 np0005634017 nova_compute[243452]: 2026-02-28 10:11:05.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:05 np0005634017 nova_compute[243452]: 2026-02-28 10:11:05.945 243456 INFO nova.virt.libvirt.driver [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete#033[00m
Feb 28 05:11:05 np0005634017 nova_compute[243452]: 2026-02-28 10:11:05.946 243456 INFO nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 5.03 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:11:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:06.051 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 391 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 209 op/s
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.337 243456 DEBUG nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.588 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.589 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.614 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.779 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.779 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.787 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:11:06 np0005634017 nova_compute[243452]: 2026-02-28 10:11:06.788 243456 INFO nova.compute.claims [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.066 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960429364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.622 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.629 243456 DEBUG nova.compute.provider_tree [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.644 243456 DEBUG nova.scheduler.client.report [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.665 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.666 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.731 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.732 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.754 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.765 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.774 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.840 243456 INFO nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] instance snapshotting#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.841 243456 DEBUG nova.objects.instance [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.897 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.899 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.899 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Creating image(s)#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.924 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.951 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.976 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:07 np0005634017 nova_compute[243452]: 2026-02-28 10:11:07.981 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.017 243456 DEBUG nova.policy [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.052 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.053 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.054 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.054 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 212 op/s
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.077 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.082 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e4c0569-eb76-4874-89e2-751c8237a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.216 243456 INFO nova.virt.libvirt.driver [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning live snapshot process#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.365 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e4c0569-eb76-4874-89e2-751c8237a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.399 243456 DEBUG nova.virt.libvirt.imagebackend [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.443 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.541 243456 DEBUG nova.objects.instance [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.563 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.564 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Ensure instance console log exists: /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.565 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.565 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.566 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.663 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(58a67ab8e75c40eaa402f557cd0cffe6) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:11:08 np0005634017 nova_compute[243452]: 2026-02-28 10:11:08.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:09 np0005634017 nova_compute[243452]: 2026-02-28 10:11:09.131 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Successfully created port: 92050f95-5357-4df4-bb17-7553685a3edb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:11:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb 28 05:11:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb 28 05:11:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb 28 05:11:09 np0005634017 nova_compute[243452]: 2026-02-28 10:11:09.459 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@58a67ab8e75c40eaa402f557cd0cffe6 to images/a3ff9934-08db-4bb8-903b-7c981dea1b00 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:11:09 np0005634017 nova_compute[243452]: 2026-02-28 10:11:09.607 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/a3ff9934-08db-4bb8-903b-7c981dea1b00 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:11:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 406 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 184 op/s
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.187 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Successfully updated port: 92050f95-5357-4df4-bb17-7553685a3edb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.210 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.211 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.211 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.281 243456 DEBUG nova.compute.manager [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-changed-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.282 243456 DEBUG nova.compute.manager [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Refreshing instance network info cache due to event network-changed-92050f95-5357-4df4-bb17-7553685a3edb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.283 243456 DEBUG oslo_concurrency.lockutils [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.360 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.420 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.728 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(58a67ab8e75c40eaa402f557cd0cffe6) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:11:10 np0005634017 nova_compute[243452]: 2026-02-28 10:11:10.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:11:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb 28 05:11:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb 28 05:11:11 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.550 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(a3ff9934-08db-4bb8-903b-7c981dea1b00) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.874 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updating instance_info_cache with network_info: [{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.895 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.896 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance network_info: |[{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.896 243456 DEBUG oslo_concurrency.lockutils [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.897 243456 DEBUG nova.network.neutron [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Refreshing network info cache for port 92050f95-5357-4df4-bb17-7553685a3edb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.899 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start _get_guest_xml network_info=[{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.905 243456 WARNING nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.910 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.911 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.921 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.922 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.922 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.923 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.923 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.925 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.925 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.925 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.926 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.926 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:11:11 np0005634017 nova_compute[243452]: 2026-02-28 10:11:11.929 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 453 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.7 MiB/s wr, 220 op/s
Feb 28 05:11:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980146945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:12 np0005634017 nova_compute[243452]: 2026-02-28 10:11:12.469 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb 28 05:11:12 np0005634017 nova_compute[243452]: 2026-02-28 10:11:12.505 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:12 np0005634017 nova_compute[243452]: 2026-02-28 10:11:12.509 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:12 np0005634017 nova_compute[243452]: 2026-02-28 10:11:12.575 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273457.49659, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:12 np0005634017 nova_compute[243452]: 2026-02-28 10:11:12.576 243456 INFO nova.compute.manager [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:11:12 np0005634017 nova_compute[243452]: 2026-02-28 10:11:12.607 243456 DEBUG nova.compute.manager [None req-f107b39f-15b2-4c7a-8c00-ae63e5fbd145 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb 28 05:11:12 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb 28 05:11:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb 28 05:11:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb 28 05:11:13 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb 28 05:11:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300921922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.203 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.205 243456 DEBUG nova.virt.libvirt.vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-837461685',display_name='tempest-DeleteServersTestJSON-server-837461685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-837461685',id=60,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-4o8foh91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-5
15886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9e4c0569-eb76-4874-89e2-751c8237a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.206 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.207 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.208 243456 DEBUG nova.objects.instance [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.229 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <uuid>9e4c0569-eb76-4874-89e2-751c8237a762</uuid>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <name>instance-0000003c</name>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:name>tempest-DeleteServersTestJSON-server-837461685</nova:name>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:11:11</nova:creationTime>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <nova:port uuid="92050f95-5357-4df4-bb17-7553685a3edb">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <entry name="serial">9e4c0569-eb76-4874-89e2-751c8237a762</entry>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <entry name="uuid">9e4c0569-eb76-4874-89e2-751c8237a762</entry>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9e4c0569-eb76-4874-89e2-751c8237a762_disk">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9e4c0569-eb76-4874-89e2-751c8237a762_disk.config">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b3:0b:82"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <target dev="tap92050f95-53"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/console.log" append="off"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:11:13 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:11:13 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:11:13 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:11:13 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.231 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Preparing to wait for external event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.232 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.232 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.232 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.233 243456 DEBUG nova.virt.libvirt.vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-837461685',display_name='tempest-DeleteServersTestJSON-server-837461685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-837461685',id=60,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-4o8foh91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServers
TestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9e4c0569-eb76-4874-89e2-751c8237a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.234 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.234 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.235 243456 DEBUG os_vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.241 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92050f95-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.242 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92050f95-53, col_values=(('external_ids', {'iface-id': '92050f95-5357-4df4-bb17-7553685a3edb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:0b:82', 'vm-uuid': '9e4c0569-eb76-4874-89e2-751c8237a762'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.244 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:13 np0005634017 NetworkManager[49805]: <info>  [1772273473.2453] manager: (tap92050f95-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.250 243456 INFO os_vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53')#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.330 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.332 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.332 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:b3:0b:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.333 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Using config drive#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.371 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.454 243456 DEBUG nova.network.neutron [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updated VIF entry in instance network info cache for port 92050f95-5357-4df4-bb17-7553685a3edb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.455 243456 DEBUG nova.network.neutron [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updating instance_info_cache with network_info: [{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.471 243456 DEBUG oslo_concurrency.lockutils [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.727 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Creating config drive at /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.731 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuwxce3sc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.878 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuwxce3sc" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.902 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:13 np0005634017 nova_compute[243452]: 2026-02-28 10:11:13.907 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.059 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.060 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deleting local config drive /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config because it was imported into RBD.#033[00m
Feb 28 05:11:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 494 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 231 op/s
Feb 28 05:11:14 np0005634017 kernel: tap92050f95-53: entered promiscuous mode
Feb 28 05:11:14 np0005634017 NetworkManager[49805]: <info>  [1772273474.0929] manager: (tap92050f95-53): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:14Z|00490|binding|INFO|Claiming lport 92050f95-5357-4df4-bb17-7553685a3edb for this chassis.
Feb 28 05:11:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:14Z|00491|binding|INFO|92050f95-5357-4df4-bb17-7553685a3edb: Claiming fa:16:3e:b3:0b:82 10.100.0.6
Feb 28 05:11:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:14Z|00492|binding|INFO|Setting lport 92050f95-5357-4df4-bb17-7553685a3edb ovn-installed in OVS
Feb 28 05:11:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:14Z|00493|binding|INFO|Setting lport 92050f95-5357-4df4-bb17-7553685a3edb up in Southbound
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.109 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:0b:82 10.100.0.6'], port_security=['fa:16:3e:b3:0b:82 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9e4c0569-eb76-4874-89e2-751c8237a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92050f95-5357-4df4-bb17-7553685a3edb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.111 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92050f95-5357-4df4-bb17-7553685a3edb in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.113 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5#033[00m
Feb 28 05:11:14 np0005634017 systemd-udevd[295089]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.127 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff60738-1d2a-47aa-99df-a8d34345da01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.128 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:11:14 np0005634017 systemd-machined[209480]: New machine qemu-67-instance-0000003c.
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.131 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.131 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe5361d-ddc1-4f93-9781-6b434d1489c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.132 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5664f976-0e25-4432-8e4e-2321e6e80ca6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 NetworkManager[49805]: <info>  [1772273474.1359] device (tap92050f95-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:11:14 np0005634017 NetworkManager[49805]: <info>  [1772273474.1367] device (tap92050f95-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:11:14 np0005634017 systemd[1]: Started Virtual Machine qemu-67-instance-0000003c.
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.147 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c1211fb8-b627-4110-a66a-221a7fc20c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.159 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5735c02b-da90-44ab-8d1a-ab77806ef373]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6965186a-dde4-41b5-9938-4196154e7dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.191 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7dafd514-5674-4903-b901-e6575c18d269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 NetworkManager[49805]: <info>  [1772273474.1923] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.220 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5160dd0b-7301-42bf-8128-724cb5434124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.224 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[75272fa4-e3a2-4947-8452-5a83c0c06c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 NetworkManager[49805]: <info>  [1772273474.2495] device (tap8e92100d-80): carrier: link connected
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.255 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b11a5ed3-1486-430f-80e3-7a1fbab50187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.273 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d954335f-3902-46ff-ac69-103f333522d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498153, 'reachable_time': 42312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295122, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.288 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4841681-3c3b-4fc5-9025-34ad6dc33cf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498153, 'tstamp': 498153}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295123, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.303 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6fce0335-2c30-4e32-af15-17558e785414]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498153, 'reachable_time': 42312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295124, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[39cf85db-e7a9-4cd6-8904-f6e85c1f7e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6001ed0-bad2-4714-81ab-9430211f23ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.393 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.393 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.394 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 05:11:14 np0005634017 NetworkManager[49805]: <info>  [1772273474.3970] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:14Z|00494|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.412 243456 DEBUG nova.compute.manager [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.412 243456 DEBUG oslo_concurrency.lockutils [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.412 243456 DEBUG oslo_concurrency.lockutils [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.413 243456 DEBUG oslo_concurrency.lockutils [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.413 243456 DEBUG nova.compute.manager [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Processing event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.420 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.423 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f07f8a4c-03cd-4777-8661-e0876fdb36f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.426 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:11:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.426 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.596 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.597 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273474.595994, 9e4c0569-eb76-4874-89e2-751c8237a762 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.597 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Started (Lifecycle Event)#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.602 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.608 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance spawned successfully.#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.608 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.617 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.621 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.635 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.636 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.636 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.637 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.637 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.638 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.646 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.647 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273474.6002932, 9e4c0569-eb76-4874-89e2-751c8237a762 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.672 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.677 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273474.6023083, 9e4c0569-eb76-4874-89e2-751c8237a762 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.677 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.700 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.703 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.714 243456 INFO nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 6.82 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.715 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.730 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.746 243456 INFO nova.virt.libvirt.driver [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.747 243456 INFO nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 6.87 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.787 243456 INFO nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 8.12 seconds to build instance.#033[00m
Feb 28 05:11:14 np0005634017 nova_compute[243452]: 2026-02-28 10:11:14.813 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:14 np0005634017 podman[295198]: 2026-02-28 10:11:14.801203209 +0000 UTC m=+0.027172694 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:11:14 np0005634017 podman[295198]: 2026-02-28 10:11:14.902642328 +0000 UTC m=+0.128611783 container create 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:11:14 np0005634017 systemd[1]: Started libpod-conmon-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e.scope.
Feb 28 05:11:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd6fa8301760fba518821551e121f076081ba5a3390aa504e7ff77efc7e192a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:14 np0005634017 podman[295198]: 2026-02-28 10:11:14.993833329 +0000 UTC m=+0.219802814 container init 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:11:14 np0005634017 podman[295198]: 2026-02-28 10:11:14.999283173 +0000 UTC m=+0.225252628 container start 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.013 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.013 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.013 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting image 566c962b-ab07-4ea4-8c4a-daa71c23c042 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : New worker (295218) forked
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : Loading success.
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.397 243456 DEBUG nova.objects.instance [None req-7a1cffa1-95f1-408e-9347-d7147770fd10 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.419 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273475.4189055, 9e4c0569-eb76-4874-89e2-751c8237a762 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.445 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.450 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.479 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:11:15 np0005634017 kernel: tap92050f95-53 (unregistering): left promiscuous mode
Feb 28 05:11:15 np0005634017 NetworkManager[49805]: <info>  [1772273475.5489] device (tap92050f95-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:15Z|00495|binding|INFO|Releasing lport 92050f95-5357-4df4-bb17-7553685a3edb from this chassis (sb_readonly=0)
Feb 28 05:11:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:15Z|00496|binding|INFO|Setting lport 92050f95-5357-4df4-bb17-7553685a3edb down in Southbound
Feb 28 05:11:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:15Z|00497|binding|INFO|Removing iface tap92050f95-53 ovn-installed in OVS
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.567 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:0b:82 10.100.0.6'], port_security=['fa:16:3e:b3:0b:82 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9e4c0569-eb76-4874-89e2-751c8237a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92050f95-5357-4df4-bb17-7553685a3edb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.569 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92050f95-5357-4df4-bb17-7553685a3edb in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.570 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.571 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c01bc85a-465c-4532-81e6-5fdc724e9275]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.572 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore#033[00m
Feb 28 05:11:15 np0005634017 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Feb 28 05:11:15 np0005634017 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Consumed 1.306s CPU time.
Feb 28 05:11:15 np0005634017 systemd-machined[209480]: Machine qemu-67-instance-0000003c terminated.
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : haproxy version is 2.8.14-c23fe91
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : path to executable is /usr/sbin/haproxy
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [WARNING]  (295216) : Exiting Master process...
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [WARNING]  (295216) : Exiting Master process...
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [ALERT]    (295216) : Current worker (295218) exited with code 143 (Terminated)
Feb 28 05:11:15 np0005634017 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [WARNING]  (295216) : All workers exited. Exiting... (0)
Feb 28 05:11:15 np0005634017 systemd[1]: libpod-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e.scope: Deactivated successfully.
Feb 28 05:11:15 np0005634017 podman[295251]: 2026-02-28 10:11:15.690190837 +0000 UTC m=+0.041800675 container died 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:11:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e-userdata-shm.mount: Deactivated successfully.
Feb 28 05:11:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3fd6fa8301760fba518821551e121f076081ba5a3390aa504e7ff77efc7e192a-merged.mount: Deactivated successfully.
Feb 28 05:11:15 np0005634017 podman[295251]: 2026-02-28 10:11:15.728427831 +0000 UTC m=+0.080037659 container cleanup 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.743 243456 DEBUG nova.compute.manager [None req-7a1cffa1-95f1-408e-9347-d7147770fd10 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:15 np0005634017 systemd[1]: libpod-conmon-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e.scope: Deactivated successfully.
Feb 28 05:11:15 np0005634017 podman[295284]: 2026-02-28 10:11:15.783518738 +0000 UTC m=+0.034235143 container remove 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43732717-a143-4ef9-b5c6-cf485ad12a81]: (4, ('Sat Feb 28 10:11:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e)\n26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e\nSat Feb 28 10:11:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e)\n26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.791 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9db69f-251e-481a-a2a1-b1f0a49de817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.792 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:15 np0005634017 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:15 np0005634017 nova_compute[243452]: 2026-02-28 10:11:15.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89c97877-985c-4459-8553-3e99a4944fbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.823 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12660fed-70c0-4e43-b1e0-81011715cdf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.824 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f1e4da-9882-455f-8fd9-a4eafcfa5eda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[765fdc55-082c-4259-92d1-04851d5ed434]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498146, 'reachable_time': 16156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295310, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.840 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:11:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.840 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[83d47b80-a01b-45e2-9d08-d860822412a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb 28 05:11:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb 28 05:11:15 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb 28 05:11:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 517 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 212 op/s
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.340 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.340 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.534 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.535 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.535 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.536 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.536 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] No waiting events found dispatching network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.536 243456 WARNING nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received unexpected event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.537 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-unplugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.537 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.538 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.538 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.538 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] No waiting events found dispatching network-vif-unplugged-92050f95-5357-4df4-bb17-7553685a3edb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.539 243456 WARNING nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received unexpected event network-vif-unplugged-92050f95-5357-4df4-bb17-7553685a3edb for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.539 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.539 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.540 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.540 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.541 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] No waiting events found dispatching network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.541 243456 WARNING nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received unexpected event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:11:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068071505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:16 np0005634017 nova_compute[243452]: 2026-02-28 10:11:16.914 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.002 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.002 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.006 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.006 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.122 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.123 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3773MB free_disk=59.92144317924976GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.123 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.218 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30a5d845-ce28-490a-afe8-3b7552f02c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9e4c0569-eb76-4874-89e2-751c8237a762 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.220 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.325 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.638 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.639 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.640 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.640 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.641 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.642 243456 INFO nova.compute.manager [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Terminating instance#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.644 243456 DEBUG nova.compute.manager [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.652 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance destroyed successfully.#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.652 243456 DEBUG nova.objects.instance [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.667 243456 DEBUG nova.virt.libvirt.vif [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-837461685',display_name='tempest-DeleteServersTestJSON-server-837461685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-837461685',id=60,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-4o8foh91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:15Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9e4c0569-eb76-4874-89e2-751c8237a762,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.668 243456 DEBUG nova.network.os_vif_util [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.669 243456 DEBUG nova.network.os_vif_util [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.669 243456 DEBUG os_vif [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.672 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92050f95-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.678 243456 INFO os_vif [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53')#033[00m
Feb 28 05:11:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446873241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb 28 05:11:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb 28 05:11:17 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.912 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.918 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.944 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.970 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 05:11:17 np0005634017 nova_compute[243452]: 2026-02-28 10:11:17.971 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.004 243456 INFO nova.virt.libvirt.driver [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deleting instance files /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762_del
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.005 243456 INFO nova.virt.libvirt.driver [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deletion of /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762_del complete
Feb 28 05:11:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb 28 05:11:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb 28 05:11:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.066 243456 INFO nova.compute.manager [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 0.42 seconds to destroy the instance on the hypervisor.
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.066 243456 DEBUG oslo.service.loopingcall [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.067 243456 DEBUG nova.compute.manager [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.067 243456 DEBUG nova.network.neutron [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:11:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 497 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.971 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.995 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 28 05:11:18 np0005634017 nova_compute[243452]: 2026-02-28 10:11:18.998 243456 DEBUG nova.network.neutron [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.039 243456 INFO nova.compute.manager [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 0.97 seconds to deallocate network for instance.
Feb 28 05:11:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb 28 05:11:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb 28 05:11:19 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.082 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.082 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.093 243456 DEBUG nova.compute.manager [req-5b0f2c84-6c92-4844-b210-e98012590642 req-a0da3bfa-677c-4e14-a877-ea1d05f2afc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-deleted-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.143 243456 DEBUG oslo_concurrency.processutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.330 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.331 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.332 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.332 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.694 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.721 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:11:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417357746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.746 243456 DEBUG oslo_concurrency.processutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.753 243456 DEBUG nova.compute.provider_tree [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.796 243456 DEBUG nova.scheduler.client.report [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.826 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.830 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.830 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.839 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.839 243456 INFO nova.compute.claims [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.872 243456 INFO nova.scheduler.client.report [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 9e4c0569-eb76-4874-89e2-751c8237a762
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.936 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:19 np0005634017 nova_compute[243452]: 2026-02-28 10:11:19.962 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 411 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 8.4 KiB/s wr, 262 op/s
Feb 28 05:11:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582915146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.542 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.549 243456 DEBUG nova.compute.provider_tree [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.572 243456 DEBUG nova.scheduler.client.report [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.599 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.600 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.665 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.666 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.718 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.736 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.828 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.830 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.831 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Creating image(s)
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.870 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.905 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.931 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.934 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.977 243456 DEBUG nova.policy [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d03368d5ddc403db8a8315dabe88681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '676657ed4ac447c580a9480d26bd7f87', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:11:20 np0005634017 nova_compute[243452]: 2026-02-28 10:11:20.981 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.007 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.019 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.019 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.020 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.020 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.043 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.047 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.097 243456 WARNING nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.098 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.098 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.098 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.099 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.099 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.100 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.137 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.322 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.345 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.376 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.376 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.380 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] resizing rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.475 243456 DEBUG nova.objects.instance [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'migration_context' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.505 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.506 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Ensure instance console log exists: /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.506 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.506 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.507 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:21 np0005634017 nova_compute[243452]: 2026-02-28 10:11:21.928 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Successfully created port: 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:11:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 298 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.5 KiB/s wr, 226 op/s
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.572 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Successfully updated port: 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.590 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.591 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.591 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.666 243456 DEBUG nova.compute.manager [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.666 243456 DEBUG nova.compute.manager [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.667 243456 DEBUG oslo_concurrency.lockutils [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:22 np0005634017 nova_compute[243452]: 2026-02-28 10:11:22.794 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:11:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb 28 05:11:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb 28 05:11:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb 28 05:11:23 np0005634017 nova_compute[243452]: 2026-02-28 10:11:23.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:11:23 np0005634017 nova_compute[243452]: 2026-02-28 10:11:23.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:11:23 np0005634017 nova_compute[243452]: 2026-02-28 10:11:23.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 258 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 1.7 MiB/s wr, 229 op/s
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.883 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.903 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.903 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance network_info: |[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.903 243456 DEBUG oslo_concurrency.lockutils [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.904 243456 DEBUG nova.network.neutron [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.907 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start _get_guest_xml network_info=[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.910 243456 WARNING nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.914 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.915 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.920 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.921 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.922 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.922 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.922 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:11:24 np0005634017 nova_compute[243452]: 2026-02-28 10:11:24.927 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4060572511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:25 np0005634017 nova_compute[243452]: 2026-02-28 10:11:25.467 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:25 np0005634017 nova_compute[243452]: 2026-02-28 10:11:25.498 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:25 np0005634017 nova_compute[243452]: 2026-02-28 10:11:25.503 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025441163' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 279 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 757 KiB/s rd, 2.7 MiB/s wr, 198 op/s
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.077 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.079 243456 DEBUG nova.virt.libvirt.vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1011485011',display_name='tempest-AttachInterfacesUnderV243Test-server-1011485011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1011485011',id=61,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNjnzagFJoSFtRanAJV8Vh8se7ZkhkWvbV0r0AVyZNItt2AKKQ99gHb6oQBNip6kETClAVoOoJsFdbRjy0fqH4CfM5YyB80Yui5ULnbk8Emyxy1XpzzeCEJgH6ejC7SlXQ==',key_name='tempest-keypair-1736116569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='676657ed4ac447c580a9480d26bd7f87',ramdisk_id='',reservation_id='r-041m6mn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-809208084',owner_user_name='tempest-AttachInterfacesUnderV243Test-809208084-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d03368d5ddc403db8a8315dabe88681',uuid=fc527bc2-3cc2-4ce2-b99e-24d252793d06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.080 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converting VIF {"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.081 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.083 243456 DEBUG nova.objects.instance [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.108 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <uuid>fc527bc2-3cc2-4ce2-b99e-24d252793d06</uuid>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <name>instance-0000003d</name>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1011485011</nova:name>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:11:24</nova:creationTime>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:user uuid="4d03368d5ddc403db8a8315dabe88681">tempest-AttachInterfacesUnderV243Test-809208084-project-member</nova:user>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:project uuid="676657ed4ac447c580a9480d26bd7f87">tempest-AttachInterfacesUnderV243Test-809208084</nova:project>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <nova:port uuid="52c9c534-2d55-4b9a-bd70-a8114ac975c6">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <entry name="serial">fc527bc2-3cc2-4ce2-b99e-24d252793d06</entry>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <entry name="uuid">fc527bc2-3cc2-4ce2-b99e-24d252793d06</entry>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:ec:55:cf"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <target dev="tap52c9c534-2d"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/console.log" append="off"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:11:26 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:11:26 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:11:26 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:11:26 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.121 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Preparing to wait for external event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.121 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.122 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.122 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.123 243456 DEBUG nova.virt.libvirt.vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1011485011',display_name='tempest-AttachInterfacesUnderV243Test-server-1011485011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1011485011',id=61,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNjnzagFJoSFtRanAJV8Vh8se7ZkhkWvbV0r0AVyZNItt2AKKQ99gHb6oQBNip6kETClAVoOoJsFdbRjy0fqH4CfM5YyB80Yui5ULnbk8Emyxy1XpzzeCEJgH6ejC7SlXQ==',key_name='tempest-keypair-1736116569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='676657ed4ac447c580a9480d26bd7f87',ramdisk_id='',reservation_id='r-041m6mn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-809208084',owner_user_name='tempest-AttachInterfacesUnderV243Test-809208084-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d03368d5ddc403db8a8315dabe88681',uuid=fc527bc2-3cc2-4ce2-b99e-24d252793d06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.123 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converting VIF {"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.124 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.124 243456 DEBUG os_vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.126 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.126 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.131 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52c9c534-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.132 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52c9c534-2d, col_values=(('external_ids', {'iface-id': '52c9c534-2d55-4b9a-bd70-a8114ac975c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:55:cf', 'vm-uuid': 'fc527bc2-3cc2-4ce2-b99e-24d252793d06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:26 np0005634017 NetworkManager[49805]: <info>  [1772273486.1351] manager: (tap52c9c534-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.141 243456 INFO os_vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d')#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.207 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.208 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.209 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] No VIF found with MAC fa:16:3e:ec:55:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.210 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Using config drive#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.242 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.710 243456 DEBUG nova.network.neutron [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.710 243456 DEBUG nova.network.neutron [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.736 243456 DEBUG oslo_concurrency.lockutils [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.749 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Creating config drive at /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.755 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphzwezgjo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:26Z|00498|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.882 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.910 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphzwezgjo" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.935 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:26 np0005634017 nova_compute[243452]: 2026-02-28 10:11:26.940 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.087 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.088 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deleting local config drive /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config because it was imported into RBD.#033[00m
Feb 28 05:11:27 np0005634017 kernel: tap52c9c534-2d: entered promiscuous mode
Feb 28 05:11:27 np0005634017 NetworkManager[49805]: <info>  [1772273487.1374] manager: (tap52c9c534-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Feb 28 05:11:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:27Z|00499|binding|INFO|Claiming lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 for this chassis.
Feb 28 05:11:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:27Z|00500|binding|INFO|52c9c534-2d55-4b9a-bd70-a8114ac975c6: Claiming fa:16:3e:ec:55:cf 10.100.0.8
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:27Z|00501|binding|INFO|Setting lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 ovn-installed in OVS
Feb 28 05:11:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:27Z|00502|binding|INFO|Setting lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 up in Southbound
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.146 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:55:cf 10.100.0.8'], port_security=['fa:16:3e:ec:55:cf 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc527bc2-3cc2-4ce2-b99e-24d252793d06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '676657ed4ac447c580a9480d26bd7f87', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27cca83f-7b50-4cdc-b0c5-9a7ed57a0cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=527b0cac-8159-448a-a976-d464a8e38db9, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52c9c534-2d55-4b9a-bd70-a8114ac975c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 in datapath 0192f192-ffd8-4bb7-b267-d74d97cc6cf5 bound to our chassis#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.151 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0192f192-ffd8-4bb7-b267-d74d97cc6cf5#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66e10842-993b-4b2d-b973-580b3d4db385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.166 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0192f192-f1 in ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.168 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0192f192-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.168 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc6d1b3-77aa-4b79-8091-713550db1567]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9299d201-55b9-471f-866c-f3b56a29fc09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 systemd-udevd[295721]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:11:27 np0005634017 systemd-machined[209480]: New machine qemu-68-instance-0000003d.
Feb 28 05:11:27 np0005634017 NetworkManager[49805]: <info>  [1772273487.1890] device (tap52c9c534-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:11:27 np0005634017 NetworkManager[49805]: <info>  [1772273487.1896] device (tap52c9c534-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.186 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5307e642-e71b-4d81-9bbc-cbc7ac746865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.198 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d9a670-df1c-447e-898f-53c65eca209c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.226 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd368bce-cddc-4610-a770-cbc245e06eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 NetworkManager[49805]: <info>  [1772273487.2345] manager: (tap0192f192-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Feb 28 05:11:27 np0005634017 systemd-udevd[295725]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.233 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3aeab9a3-e0c8-4a09-88bf-56775b69b12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.252 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.252 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.264 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c298c527-6e27-4c6c-841b-9b0598131ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.268 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2bba92a5-b36a-451a-be0c-6fe9adba3979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.271 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:11:27 np0005634017 NetworkManager[49805]: <info>  [1772273487.2918] device (tap0192f192-f0): carrier: link connected
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dc98b3a1-eaa9-47c8-b553-c99bc4abf112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.316 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9af073-73c4-414f-82f3-00638c1a030a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0192f192-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c0:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499457, 'reachable_time': 17927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295754, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3243740b-58a1-47b6-98e5-535a4e451954]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c094'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499457, 'tstamp': 499457}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295755, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.354 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ba1d65-6c12-42e2-8b25-a9d8c2e9355b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0192f192-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c0:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499457, 'reachable_time': 17927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295756, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.359 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.360 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.369 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.370 243456 INFO nova.compute.claims [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.389 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2eb2d5f-2117-478a-9de1-24c9d52854b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.420 243456 DEBUG nova.compute.manager [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.421 243456 DEBUG oslo_concurrency.lockutils [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.421 243456 DEBUG oslo_concurrency.lockutils [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.422 243456 DEBUG oslo_concurrency.lockutils [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.423 243456 DEBUG nova.compute.manager [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Processing event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6becbbf1-227f-4d4e-bbea-0cb6a5608c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0192f192-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.447 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.447 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0192f192-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:27 np0005634017 NetworkManager[49805]: <info>  [1772273487.4506] manager: (tap0192f192-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Feb 28 05:11:27 np0005634017 kernel: tap0192f192-f0: entered promiscuous mode
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0192f192-f0, col_values=(('external_ids', {'iface-id': '3526e0c3-6eaf-40f3-a85d-8dfbd6551585'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:27Z|00503|binding|INFO|Releasing lport 3526e0c3-6eaf-40f3-a85d-8dfbd6551585 from this chassis (sb_readonly=0)
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.471 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[87dd73c9-5f11-4363-b9e7-39da660f49b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.474 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-0192f192-ffd8-4bb7-b267-d74d97cc6cf5
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.pid.haproxy
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 0192f192-ffd8-4bb7-b267-d74d97cc6cf5
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 05:11:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.475 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'env', 'PROCESS_TAG=haproxy-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.579 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.755 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273487.7545323, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.756 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Started (Lifecycle Event)
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.759 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.769 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.792 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.796 243456 INFO nova.virt.libvirt.driver [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance spawned successfully.
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.797 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.799 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:11:27 np0005634017 podman[295850]: 2026-02-28 10:11:27.817682586 +0000 UTC m=+0.049325566 container create 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.826 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.827 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273487.7549033, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.828 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Paused (Lifecycle Event)
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.834 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.834 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.835 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.835 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.836 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.836 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.846 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.850 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273487.7633803, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.850 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Resumed (Lifecycle Event)
Feb 28 05:11:27 np0005634017 systemd[1]: Started libpod-conmon-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625.scope.
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.879 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:11:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.883 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:11:27 np0005634017 podman[295850]: 2026-02-28 10:11:27.791971074 +0000 UTC m=+0.023614064 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:11:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd06fea17760e87046dfe6bdf5d2603ded7e2ad74c164232a00dda0bdcd2728/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:27 np0005634017 podman[295850]: 2026-02-28 10:11:27.901791938 +0000 UTC m=+0.133434958 container init 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:11:27 np0005634017 podman[295850]: 2026-02-28 10:11:27.906697506 +0000 UTC m=+0.138340486 container start 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.909 243456 INFO nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 7.08 seconds to spawn the instance on the hypervisor.
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.909 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.911 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:11:27 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : New worker (295871) forked
Feb 28 05:11:27 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : Loading success.
Feb 28 05:11:27 np0005634017 nova_compute[243452]: 2026-02-28 10:11:27.977 243456 INFO nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 8.17 seconds to build instance.
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.001 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.001 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.002 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.002 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb 28 05:11:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 279 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 2.4 MiB/s wr, 178 op/s
Feb 28 05:11:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb 28 05:11:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb 28 05:11:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849421748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.164 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.170 243456 DEBUG nova.compute.provider_tree [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.207 243456 DEBUG nova.scheduler.client.report [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.228 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.228 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.283 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.283 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.305 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.322 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.374 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.437 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.439 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.439 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Creating image(s)
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.463 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.492 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.520 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.525 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.574 243456 DEBUG nova.policy [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48e14a77ec8842f98a0d2efc6d5e167f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cffbbb9857954b188c363e9565817bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.609 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.610 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.611 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.611 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.640 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.645 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.688 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:28 np0005634017 nova_compute[243452]: 2026-02-28 10:11:28.949 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.029 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] resizing rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:11:29
Feb 28 05:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta', 'images']
Feb 28 05:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.124 243456 DEBUG nova.objects.instance [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.157 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.158 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Ensure instance console log exists: /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.158 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.159 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.159 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.546 243456 DEBUG nova.compute.manager [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.547 243456 DEBUG oslo_concurrency.lockutils [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.547 243456 DEBUG oslo_concurrency.lockutils [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.547 243456 DEBUG oslo_concurrency.lockutils [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.548 243456 DEBUG nova.compute.manager [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] No waiting events found dispatching network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.548 243456 WARNING nova.compute.manager [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received unexpected event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:11:29 np0005634017 nova_compute[243452]: 2026-02-28 10:11:29.770 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Successfully created port: 2c3f8e94-025d-4aea-97be-c325f6366e0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 298 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:11:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:11:30 np0005634017 nova_compute[243452]: 2026-02-28 10:11:30.745 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273475.742595, 9e4c0569-eb76-4874-89e2-751c8237a762 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:30 np0005634017 nova_compute[243452]: 2026-02-28 10:11:30.746 243456 INFO nova.compute.manager [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:11:30 np0005634017 nova_compute[243452]: 2026-02-28 10:11:30.766 243456 DEBUG nova.compute.manager [None req-b3ba19fe-00b7-4fca-bd5e-c3a2a09639d3 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.145 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Successfully updated port: 2c3f8e94-025d-4aea-97be-c325f6366e0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.167 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.167 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.168 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:11:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.457 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.654 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.654 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.655 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.655 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:31 np0005634017 nova_compute[243452]: 2026-02-28 10:11:31.655 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.817552622 +0000 UTC m=+0.043224945 container create 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 05:11:31 np0005634017 systemd[1]: Started libpod-conmon-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope.
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.799503765 +0000 UTC m=+0.025176108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:11:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.919842295 +0000 UTC m=+0.145514628 container init 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.927546631 +0000 UTC m=+0.153218944 container start 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.931229645 +0000 UTC m=+0.156901978 container attach 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:11:31 np0005634017 systemd[1]: libpod-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope: Deactivated successfully.
Feb 28 05:11:31 np0005634017 nostalgic_davinci[296208]: 167 167
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.938409527 +0000 UTC m=+0.164081840 container died 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:11:31 np0005634017 conmon[296208]: conmon 282d302fb1928ea268bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope/container/memory.events
Feb 28 05:11:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3d395c1a78465fab8e5ea3fc6b165fd403227ae009991a043febbffb083d7936-merged.mount: Deactivated successfully.
Feb 28 05:11:31 np0005634017 podman[296191]: 2026-02-28 10:11:31.974162521 +0000 UTC m=+0.199834834 container remove 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:11:31 np0005634017 systemd[1]: libpod-conmon-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope: Deactivated successfully.
Feb 28 05:11:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 310 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Feb 28 05:11:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:11:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:11:32 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:11:32 np0005634017 podman[296231]: 2026-02-28 10:11:32.133108325 +0000 UTC m=+0.040500319 container create fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:11:32 np0005634017 systemd[1]: Started libpod-conmon-fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f.scope.
Feb 28 05:11:32 np0005634017 podman[296231]: 2026-02-28 10:11:32.114393839 +0000 UTC m=+0.021785883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:11:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:32 np0005634017 podman[296231]: 2026-02-28 10:11:32.244202785 +0000 UTC m=+0.151594779 container init fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:11:32 np0005634017 podman[296245]: 2026-02-28 10:11:32.252173099 +0000 UTC m=+0.084322630 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:11:32 np0005634017 podman[296231]: 2026-02-28 10:11:32.253493406 +0000 UTC m=+0.160885400 container start fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:11:32 np0005634017 podman[296231]: 2026-02-28 10:11:32.256639514 +0000 UTC m=+0.164031508 container attach fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:11:32 np0005634017 podman[296263]: 2026-02-28 10:11:32.312985647 +0000 UTC m=+0.092536600 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.377 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updating instance_info_cache with network_info: [{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.399 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.399 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance network_info: |[{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.402 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start _get_guest_xml network_info=[{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.408 243456 WARNING nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.413 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.414 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.417 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.418 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.418 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.419 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.419 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.420 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.420 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.420 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.422 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.422 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:11:32 np0005634017 nova_compute[243452]: 2026-02-28 10:11:32.428 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:32 np0005634017 jolly_lovelace[296261]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:11:32 np0005634017 jolly_lovelace[296261]: --> All data devices are unavailable
Feb 28 05:11:32 np0005634017 systemd[1]: libpod-fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f.scope: Deactivated successfully.
Feb 28 05:11:32 np0005634017 podman[296331]: 2026-02-28 10:11:32.787292248 +0000 UTC m=+0.023340457 container died fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:11:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80-merged.mount: Deactivated successfully.
Feb 28 05:11:32 np0005634017 podman[296331]: 2026-02-28 10:11:32.822609699 +0000 UTC m=+0.058657898 container remove fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:11:32 np0005634017 systemd[1]: libpod-conmon-fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f.scope: Deactivated successfully.
Feb 28 05:11:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012521364' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.008 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.036 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.042 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.087 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.088 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.107 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.107 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-changed-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.107 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Refreshing instance network info cache due to event network-changed-2c3f8e94-025d-4aea-97be-c325f6366e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.108 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.108 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.108 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Refreshing network info cache for port 2c3f8e94-025d-4aea-97be-c325f6366e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.269379517 +0000 UTC m=+0.043819012 container create 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:11:33 np0005634017 systemd[1]: Started libpod-conmon-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope.
Feb 28 05:11:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.248585573 +0000 UTC m=+0.023025038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.360667291 +0000 UTC m=+0.135106676 container init 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.367633517 +0000 UTC m=+0.142072882 container start 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.370458856 +0000 UTC m=+0.144898241 container attach 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:11:33 np0005634017 wonderful_bose[296467]: 167 167
Feb 28 05:11:33 np0005634017 systemd[1]: libpod-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope: Deactivated successfully.
Feb 28 05:11:33 np0005634017 conmon[296467]: conmon 99266dd6f136a4fff448 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope/container/memory.events
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.375308372 +0000 UTC m=+0.149747747 container died 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:11:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6b5c2dd933ff798dba6b965134bef652df487cc213b23a65b245849a9af173ae-merged.mount: Deactivated successfully.
Feb 28 05:11:33 np0005634017 podman[296450]: 2026-02-28 10:11:33.412292131 +0000 UTC m=+0.186731516 container remove 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:11:33 np0005634017 systemd[1]: libpod-conmon-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope: Deactivated successfully.
Feb 28 05:11:33 np0005634017 podman[296490]: 2026-02-28 10:11:33.559263969 +0000 UTC m=+0.040872379 container create 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:11:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975269225' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:33 np0005634017 systemd[1]: Started libpod-conmon-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope.
Feb 28 05:11:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.611 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.614 243456 DEBUG nova.virt.libvirt.vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1290428002',display_name='tempest-ServerActionsTestOtherB-server-1290428002',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1290428002',id=62,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-2snxvf9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:28Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.614 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.615 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.617 243456 DEBUG nova.objects.instance [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:33 np0005634017 podman[296490]: 2026-02-28 10:11:33.625688724 +0000 UTC m=+0.107297144 container init 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:11:33 np0005634017 podman[296490]: 2026-02-28 10:11:33.632140825 +0000 UTC m=+0.113749235 container start 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.635 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <uuid>51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3</uuid>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <name>instance-0000003e</name>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestOtherB-server-1290428002</nova:name>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:11:32</nova:creationTime>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <nova:port uuid="2c3f8e94-025d-4aea-97be-c325f6366e0d">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <entry name="serial">51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3</entry>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <entry name="uuid">51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3</entry>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:81:e3:c3"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <target dev="tap2c3f8e94-02"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/console.log" append="off"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:11:33 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:11:33 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:11:33 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:11:33 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:11:33 np0005634017 podman[296490]: 2026-02-28 10:11:33.541099738 +0000 UTC m=+0.022708168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.637 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Preparing to wait for external event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.637 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.638 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:33 np0005634017 podman[296490]: 2026-02-28 10:11:33.638440812 +0000 UTC m=+0.120049302 container attach 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.638 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.639 243456 DEBUG nova.virt.libvirt.vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1290428002',display_name='tempest-ServerActionsTestOtherB-server-1290428002',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1290428002',id=62,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-2snxvf9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:28Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.639 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.640 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.640 243456 DEBUG os_vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.641 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.641 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.642 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.647 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c3f8e94-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.648 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c3f8e94-02, col_values=(('external_ids', {'iface-id': '2c3f8e94-025d-4aea-97be-c325f6366e0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:e3:c3', 'vm-uuid': '51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:33 np0005634017 NetworkManager[49805]: <info>  [1772273493.6505] manager: (tap2c3f8e94-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.659 243456 INFO os_vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02')#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.713 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.715 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.715 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:81:e3:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.716 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Using config drive#033[00m
Feb 28 05:11:33 np0005634017 nova_compute[243452]: 2026-02-28 10:11:33.739 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]: {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:    "0": [
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:        {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "devices": [
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "/dev/loop3"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            ],
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_name": "ceph_lv0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_size": "21470642176",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "name": "ceph_lv0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "tags": {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cluster_name": "ceph",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.crush_device_class": "",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.encrypted": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.objectstore": "bluestore",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osd_id": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.type": "block",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.vdo": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.with_tpm": "0"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            },
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "type": "block",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "vg_name": "ceph_vg0"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:        }
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:    ],
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:    "1": [
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:        {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "devices": [
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "/dev/loop4"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            ],
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_name": "ceph_lv1",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_size": "21470642176",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "name": "ceph_lv1",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "tags": {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cluster_name": "ceph",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.crush_device_class": "",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.encrypted": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.objectstore": "bluestore",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osd_id": "1",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.type": "block",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.vdo": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.with_tpm": "0"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            },
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "type": "block",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "vg_name": "ceph_vg1"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:        }
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:    ],
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:    "2": [
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:        {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "devices": [
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "/dev/loop5"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            ],
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_name": "ceph_lv2",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_size": "21470642176",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "name": "ceph_lv2",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "tags": {
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.cluster_name": "ceph",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.crush_device_class": "",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.encrypted": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.objectstore": "bluestore",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osd_id": "2",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.type": "block",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.vdo": "0",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:                "ceph.with_tpm": "0"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            },
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "type": "block",
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:            "vg_name": "ceph_vg2"
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:        }
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]:    ]
Feb 28 05:11:33 np0005634017 xenodochial_taussig[296508]: }
Feb 28 05:11:33 np0005634017 systemd[1]: libpod-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope: Deactivated successfully.
Feb 28 05:11:33 np0005634017 conmon[296508]: conmon 45d68a8cbe44c23868d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope/container/memory.events
Feb 28 05:11:33 np0005634017 podman[296490]: 2026-02-28 10:11:33.956727481 +0000 UTC m=+0.438335921 container died 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:11:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c-merged.mount: Deactivated successfully.
Feb 28 05:11:34 np0005634017 podman[296490]: 2026-02-28 10:11:34.007821996 +0000 UTC m=+0.489430436 container remove 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:11:34 np0005634017 systemd[1]: libpod-conmon-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope: Deactivated successfully.
Feb 28 05:11:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.253 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Creating config drive at /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.261 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsweco1mg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.407 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsweco1mg" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.435 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.442 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.462360761 +0000 UTC m=+0.050671864 container create a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:11:34 np0005634017 systemd[1]: Started libpod-conmon-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope.
Feb 28 05:11:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.443280215 +0000 UTC m=+0.031591338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.555664782 +0000 UTC m=+0.143975905 container init a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.563851052 +0000 UTC m=+0.152162145 container start a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.567681119 +0000 UTC m=+0.155992202 container attach a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 05:11:34 np0005634017 awesome_khorana[296652]: 167 167
Feb 28 05:11:34 np0005634017 systemd[1]: libpod-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope: Deactivated successfully.
Feb 28 05:11:34 np0005634017 conmon[296652]: conmon a6072a97313189c1ea2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope/container/memory.events
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.572301929 +0000 UTC m=+0.160613022 container died a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:11:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0b26095004ee561d7f1bd8c94279f6a6aecfbfbb2bfdcfc1a0d78b473de936fc-merged.mount: Deactivated successfully.
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.601 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.602 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deleting local config drive /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config because it was imported into RBD.#033[00m
Feb 28 05:11:34 np0005634017 podman[296616]: 2026-02-28 10:11:34.608432074 +0000 UTC m=+0.196743167 container remove a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:11:34 np0005634017 systemd[1]: libpod-conmon-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope: Deactivated successfully.
Feb 28 05:11:34 np0005634017 kernel: tap2c3f8e94-02: entered promiscuous mode
Feb 28 05:11:34 np0005634017 NetworkManager[49805]: <info>  [1772273494.6573] manager: (tap2c3f8e94-02): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Feb 28 05:11:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:34Z|00504|binding|INFO|Claiming lport 2c3f8e94-025d-4aea-97be-c325f6366e0d for this chassis.
Feb 28 05:11:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:34Z|00505|binding|INFO|2c3f8e94-025d-4aea-97be-c325f6366e0d: Claiming fa:16:3e:81:e3:c3 10.100.0.7
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.669 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updated VIF entry in instance network info cache for port 2c3f8e94-025d-4aea-97be-c325f6366e0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.672 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:e3:c3 10.100.0.7'], port_security=['fa:16:3e:81:e3:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2c3f8e94-025d-4aea-97be-c325f6366e0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.673 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2c3f8e94-025d-4aea-97be-c325f6366e0d in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.675 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.676 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updating instance_info_cache with network_info: [{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:34Z|00506|binding|INFO|Setting lport 2c3f8e94-025d-4aea-97be-c325f6366e0d ovn-installed in OVS
Feb 28 05:11:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:34Z|00507|binding|INFO|Setting lport 2c3f8e94-025d-4aea-97be-c325f6366e0d up in Southbound
Feb 28 05:11:34 np0005634017 systemd-machined[209480]: New machine qemu-69-instance-0000003e.
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.697 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8338103-e402-40d0-933d-0c0dd86f1ed0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.700 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:34 np0005634017 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Feb 28 05:11:34 np0005634017 systemd-udevd[296700]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:11:34 np0005634017 NetworkManager[49805]: <info>  [1772273494.7268] device (tap2c3f8e94-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:11:34 np0005634017 NetworkManager[49805]: <info>  [1772273494.7276] device (tap2c3f8e94-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.730 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3e65121c-250f-40fe-81e8-4dced3cca489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.735 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[02021e42-9f26-464f-b0bb-1b4265097a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.777 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec99d611-a998-4f59-9a2a-496413c4233c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7de45a39-0d75-4419-ae18-522c147b5689]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296728, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:34 np0005634017 podman[296714]: 2026-02-28 10:11:34.804575342 +0000 UTC m=+0.046784755 container create 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4e87d6-09c9-4b1c-b615-f6e999ff84ba]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296729, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296729, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:34 np0005634017 nova_compute[243452]: 2026-02-28 10:11:34.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.822 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:34 np0005634017 systemd[1]: Started libpod-conmon-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope.
Feb 28 05:11:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:34 np0005634017 podman[296714]: 2026-02-28 10:11:34.788325376 +0000 UTC m=+0.030534819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:11:34 np0005634017 podman[296714]: 2026-02-28 10:11:34.88565753 +0000 UTC m=+0.127867033 container init 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:11:34 np0005634017 podman[296714]: 2026-02-28 10:11:34.896304949 +0000 UTC m=+0.138514362 container start 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:11:34 np0005634017 podman[296714]: 2026-02-28 10:11:34.89920536 +0000 UTC m=+0.141414793 container attach 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.071 243456 DEBUG nova.compute.manager [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG oslo_concurrency.lockutils [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG oslo_concurrency.lockutils [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG oslo_concurrency.lockutils [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG nova.compute.manager [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Processing event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.489 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273495.4892802, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.491 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Started (Lifecycle Event)#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.493 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.497 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.501 243456 INFO nova.virt.libvirt.driver [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance spawned successfully.#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.501 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.566 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.570 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:35 np0005634017 lvm[296854]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:11:35 np0005634017 lvm[296854]: VG ceph_vg0 finished
Feb 28 05:11:35 np0005634017 lvm[296856]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:11:35 np0005634017 lvm[296856]: VG ceph_vg1 finished
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.597 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.598 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.599 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.599 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.600 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.600 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.610 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.611 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273495.490647, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.611 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:11:35 np0005634017 lvm[296858]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:11:35 np0005634017 lvm[296858]: VG ceph_vg2 finished
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.648 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.655 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273495.4971533, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.656 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:11:35 np0005634017 admiring_brown[296737]: {}
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.687 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.691 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.698 243456 INFO nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 7.26 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.698 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.709 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:35 np0005634017 systemd[1]: libpod-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope: Deactivated successfully.
Feb 28 05:11:35 np0005634017 systemd[1]: libpod-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope: Consumed 1.254s CPU time.
Feb 28 05:11:35 np0005634017 podman[296714]: 2026-02-28 10:11:35.731589038 +0000 UTC m=+0.973798451 container died 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:11:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408-merged.mount: Deactivated successfully.
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.770 243456 INFO nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 8.45 seconds to build instance.#033[00m
Feb 28 05:11:35 np0005634017 podman[296714]: 2026-02-28 10:11:35.77724747 +0000 UTC m=+1.019456893 container remove 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:11:35 np0005634017 nova_compute[243452]: 2026-02-28 10:11:35.784 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:35 np0005634017 systemd[1]: libpod-conmon-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope: Deactivated successfully.
Feb 28 05:11:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:11:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:11:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:11:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:11:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Feb 28 05:11:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:11:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.191 243456 DEBUG nova.compute.manager [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG oslo_concurrency.lockutils [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG oslo_concurrency.lockutils [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG oslo_concurrency.lockutils [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG nova.compute.manager [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] No waiting events found dispatching network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 WARNING nova.compute.manager [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received unexpected event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d for instance with vm_state active and task_state None.#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.564 243456 INFO nova.compute.manager [None req-18536f81-6840-417e-906b-35015774754e 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Get console output#033[00m
Feb 28 05:11:37 np0005634017 nova_compute[243452]: 2026-02-28 10:11:37.571 243456 INFO oslo.privsep.daemon [None req-18536f81-6840-417e-906b-35015774754e 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpi731famn/privsep.sock']#033[00m
Feb 28 05:11:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.687 243456 INFO oslo.privsep.daemon [None req-18536f81-6840-417e-906b-35015774754e 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Spawned new privsep daemon via rootwrap
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.526 296900 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.530 296900 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.533 296900 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.533 296900 INFO oslo.privsep.daemon [-] privsep daemon running as pid 296900
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:38 np0005634017 nova_compute[243452]: 2026-02-28 10:11:38.920 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 329 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 160 op/s
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014983719041231532 of space, bias 1.0, pg target 0.44951157123694596 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024935186177078785 of space, bias 1.0, pg target 0.7480555853123635 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.028637363845935e-07 of space, bias 4.0, pg target 0.0009634364836615122 quantized to 16 (current 16)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:11:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:11:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:40Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:55:cf 10.100.0.8
Feb 28 05:11:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:40Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:55:cf 10.100.0.8
Feb 28 05:11:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 334 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.0 MiB/s wr, 171 op/s
Feb 28 05:11:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.410 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.411 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.439 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.529 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.529 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.537 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.537 243456 INFO nova.compute.claims [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.668 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.679 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:43 np0005634017 nova_compute[243452]: 2026-02-28 10:11:43.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:11:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 347 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 154 op/s
Feb 28 05:11:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401319329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.295 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.304 243456 DEBUG nova.compute.provider_tree [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.324 243456 DEBUG nova.scheduler.client.report [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.351 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.352 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.398 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.398 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.404 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.405 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.413 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.421 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.447 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.504 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.504 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.513 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.513 243456 INFO nova.compute.claims [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.544 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.546 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.546 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Creating image(s)
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.573 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.605 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.634 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.638 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.665 243456 DEBUG nova.policy [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05f7daf505a349dcb8574e9ef6f061fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af2c91609b444c458a32203261ac88d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.700 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.700 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.701 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.701 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.722 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.726 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:44 np0005634017 nova_compute[243452]: 2026-02-28 10:11:44.879 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.280 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.356 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] resizing rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.389 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Successfully created port: 52a3be20-6165-41ea-9677-2b0575c65db6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:11:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162066550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:11:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874446015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:11:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:11:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874446015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.487 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.492 243456 DEBUG nova.compute.provider_tree [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.542 243456 DEBUG nova.scheduler.client.report [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.553 243456 DEBUG nova.objects.instance [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 59d0cb01-5644-425e-82b1-b79cf4265dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.566 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.567 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Ensure instance console log exists: /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.568 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.568 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.568 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.569 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.570 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.621 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.621 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.652 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.673 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.761 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.763 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.764 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Creating image(s)#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.789 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.819 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.846 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.851 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.898 243456 DEBUG nova.policy [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48e14a77ec8842f98a0d2efc6d5e167f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cffbbb9857954b188c363e9565817bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.933 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.934 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.935 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.935 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.970 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:45 np0005634017 nova_compute[243452]: 2026-02-28 10:11:45.976 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 374 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.202 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Successfully updated port: 52a3be20-6165-41ea-9677-2b0575c65db6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.223 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.224 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquired lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.224 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.303 243456 DEBUG nova.compute.manager [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-changed-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.304 243456 DEBUG nova.compute.manager [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Refreshing instance network info cache due to event network-changed-52a3be20-6165-41ea-9677-2b0575c65db6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.304 243456 DEBUG oslo_concurrency.lockutils [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.365 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:11:46 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.470 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Successfully created port: 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.685 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:46 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.764 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] resizing rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.872 243456 DEBUG nova.objects.instance [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.892 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.893 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Ensure instance console log exists: /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.893 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.894 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:46 np0005634017 nova_compute[243452]: 2026-02-28 10:11:46.894 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.168 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updating instance_info_cache with network_info: [{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.197 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Releasing lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.198 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance network_info: |[{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.198 243456 DEBUG oslo_concurrency.lockutils [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.198 243456 DEBUG nova.network.neutron [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Refreshing network info cache for port 52a3be20-6165-41ea-9677-2b0575c65db6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.203 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start _get_guest_xml network_info=[{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.208 243456 WARNING nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.214 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.215 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.218 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.218 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.219 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.219 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.219 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.224 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.558 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Successfully updated port: 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.586 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.586 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.586 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.708 243456 DEBUG nova.compute.manager [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.709 243456 DEBUG nova.compute.manager [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing instance network info cache due to event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.710 243456 DEBUG oslo_concurrency.lockutils [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969045207' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.821 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.848 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.853 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:47Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:e3:c3 10.100.0.7
Feb 28 05:11:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:47Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:e3:c3 10.100.0.7
Feb 28 05:11:47 np0005634017 nova_compute[243452]: 2026-02-28 10:11:47.900 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:11:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 420 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 167 op/s
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.245 243456 DEBUG nova.objects.instance [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'flavor' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.273 243456 DEBUG oslo_concurrency.lockutils [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.273 243456 DEBUG oslo_concurrency.lockutils [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516329336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.438 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.440 243456 DEBUG nova.virt.libvirt.vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-536333370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-536333370',id=63,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2c91609b444c458a32203261ac88d3',ramdisk_id='',reservation_id='r-mp5idvof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-644416342',owner_user_name='tempest-InstanceActionsV221TestJSON-644416342-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:44Z,user_data=None,user_id='05f7daf505a349dcb8574e9ef6f061fb',uuid=59d0cb01-5644-425e-82b1-b79cf4265dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.440 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converting VIF {"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.441 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.442 243456 DEBUG nova.objects.instance [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59d0cb01-5644-425e-82b1-b79cf4265dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.460 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <uuid>59d0cb01-5644-425e-82b1-b79cf4265dfb</uuid>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <name>instance-0000003f</name>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-536333370</nova:name>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:11:47</nova:creationTime>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:user uuid="05f7daf505a349dcb8574e9ef6f061fb">tempest-InstanceActionsV221TestJSON-644416342-project-member</nova:user>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:project uuid="af2c91609b444c458a32203261ac88d3">tempest-InstanceActionsV221TestJSON-644416342</nova:project>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <nova:port uuid="52a3be20-6165-41ea-9677-2b0575c65db6">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <entry name="serial">59d0cb01-5644-425e-82b1-b79cf4265dfb</entry>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <entry name="uuid">59d0cb01-5644-425e-82b1-b79cf4265dfb</entry>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/59d0cb01-5644-425e-82b1-b79cf4265dfb_disk">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7d:57:92"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <target dev="tap52a3be20-61"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/console.log" append="off"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:11:48 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:11:48 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:11:48 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:11:48 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Preparing to wait for external event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.462 243456 DEBUG nova.virt.libvirt.vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-536333370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-536333370',id=63,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2c91609b444c458a32203261ac88d3',ramdisk_id='',reservation_id='r-mp5idvof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-644416342',owner_user_name='tempest-InstanceActionsV221TestJSON-644416342-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:44Z,user_data=None,user_id='05f7daf505a349dcb8574e9ef6f061fb',uuid=59d0cb01-5644-425e-82b1-b79cf4265dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.462 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converting VIF {"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.463 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.463 243456 DEBUG os_vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.467 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.467 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a3be20-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.468 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52a3be20-61, col_values=(('external_ids', {'iface-id': '52a3be20-6165-41ea-9677-2b0575c65db6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:57:92', 'vm-uuid': '59d0cb01-5644-425e-82b1-b79cf4265dfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:48 np0005634017 NetworkManager[49805]: <info>  [1772273508.4713] manager: (tap52a3be20-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.477 243456 INFO os_vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61')#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.540 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.541 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.541 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] No VIF found with MAC fa:16:3e:7d:57:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.541 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Using config drive#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.564 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:48 np0005634017 nova_compute[243452]: 2026-02-28 10:11:48.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.355 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Creating config drive at /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.359 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpljztfuaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.396 243456 DEBUG nova.network.neutron [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updated VIF entry in instance network info cache for port 52a3be20-6165-41ea-9677-2b0575c65db6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.397 243456 DEBUG nova.network.neutron [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updating instance_info_cache with network_info: [{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.414 243456 DEBUG oslo_concurrency.lockutils [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.500 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpljztfuaj" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.527 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.531 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.680 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.681 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deleting local config drive /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config because it was imported into RBD.#033[00m
Feb 28 05:11:49 np0005634017 kernel: tap52a3be20-61: entered promiscuous mode
Feb 28 05:11:49 np0005634017 NetworkManager[49805]: <info>  [1772273509.7357] manager: (tap52a3be20-61): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:49Z|00508|binding|INFO|Claiming lport 52a3be20-6165-41ea-9677-2b0575c65db6 for this chassis.
Feb 28 05:11:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:49Z|00509|binding|INFO|52a3be20-6165-41ea-9677-2b0575c65db6: Claiming fa:16:3e:7d:57:92 10.100.0.4
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.746 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:57:92 10.100.0.4'], port_security=['fa:16:3e:7d:57:92 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '59d0cb01-5644-425e-82b1-b79cf4265dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81c61437-7ddd-45da-a105-90d1e0bc7134', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af2c91609b444c458a32203261ac88d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a493f80-3664-4942-99b2-7d547baf2c48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8acc948-8b91-464a-8bb5-c73edb1af1bf, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52a3be20-6165-41ea-9677-2b0575c65db6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:49Z|00510|binding|INFO|Setting lport 52a3be20-6165-41ea-9677-2b0575c65db6 ovn-installed in OVS
Feb 28 05:11:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:49Z|00511|binding|INFO|Setting lport 52a3be20-6165-41ea-9677-2b0575c65db6 up in Southbound
Feb 28 05:11:49 np0005634017 nova_compute[243452]: 2026-02-28 10:11:49.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.749 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52a3be20-6165-41ea-9677-2b0575c65db6 in datapath 81c61437-7ddd-45da-a105-90d1e0bc7134 bound to our chassis#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.752 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81c61437-7ddd-45da-a105-90d1e0bc7134#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.765 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e139b133-d8b6-4a8a-9a61-12ee3807422c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.767 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81c61437-71 in ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.771 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81c61437-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aeabf7-0a75-4dbf-8b68-36c025724b35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 systemd-udevd[297415]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.772 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bace6d2b-773f-4ef9-be96-db365402b590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 systemd-machined[209480]: New machine qemu-70-instance-0000003f.
Feb 28 05:11:49 np0005634017 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.783 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[429eff71-d6ec-44ed-a2b9-dec7444de1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 NetworkManager[49805]: <info>  [1772273509.7882] device (tap52a3be20-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:11:49 np0005634017 NetworkManager[49805]: <info>  [1772273509.7889] device (tap52a3be20-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.798 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cea6a5b6-4479-47d0-a1ba-cd53592a5e47]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.826 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6d131e4e-b0e8-495c-93e0-f144e292276f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 NetworkManager[49805]: <info>  [1772273509.8346] manager: (tap81c61437-70): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.834 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53bb25e2-3696-4285-8b4b-9f2370e6e637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.862 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[51f65961-f0ca-499b-bd73-2a12cd8e5759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.866 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[97bd8f73-406e-40ee-bc5f-f999f6abef8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 NetworkManager[49805]: <info>  [1772273509.8847] device (tap81c61437-70): carrier: link connected
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.889 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[03a856d3-8864-468a-ba52-189f70e1c728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.908 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eca5d38d-095a-4385-b651-433c6df8b51b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81c61437-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:40:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501717, 'reachable_time': 35470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297447, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.925 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[129233ab-05d0-48d8-8884-bb4503070b85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:4037'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501717, 'tstamp': 501717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297448, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.947 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e45ae643-bc9f-430e-a056-798a1fdf612c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81c61437-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:40:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501717, 'reachable_time': 35470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297449, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.983 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b823912b-f528-4583-9603-99e3a382c27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e914c7d1-e559-402c-bb98-f85adc4f0efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c61437-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c61437-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:50 np0005634017 kernel: tap81c61437-70: entered promiscuous mode
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:50 np0005634017 NetworkManager[49805]: <info>  [1772273510.0531] manager: (tap81c61437-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.057 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81c61437-70, col_values=(('external_ids', {'iface-id': 'db035a80-54f4-4899-9603-57d2e9388511'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.058 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:50Z|00512|binding|INFO|Releasing lport db035a80-54f4-4899-9603-57d2e9388511 from this chassis (sb_readonly=0)
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.060 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81c61437-7ddd-45da-a105-90d1e0bc7134.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81c61437-7ddd-45da-a105-90d1e0bc7134.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c12f472-b6d3-445b-b3e4-e67a52d414ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.062 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-81c61437-7ddd-45da-a105-90d1e0bc7134
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/81c61437-7ddd-45da-a105-90d1e0bc7134.pid.haproxy
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 81c61437-7ddd-45da-a105-90d1e0bc7134
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:11:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.063 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'env', 'PROCESS_TAG=haproxy-81c61437-7ddd-45da-a105-90d1e0bc7134', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81c61437-7ddd-45da-a105-90d1e0bc7134.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 459 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.9 MiB/s wr, 173 op/s
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.111 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.139 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.139 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance network_info: |[{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.139 243456 DEBUG oslo_concurrency.lockutils [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.140 243456 DEBUG nova.network.neutron [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.143 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start _get_guest_xml network_info=[{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.144 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273510.1433833, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.144 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Started (Lifecycle Event)#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.149 243456 WARNING nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.154 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.154 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.157 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.157 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.157 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.160 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.163 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.210 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.216 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273510.1458104, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.217 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.243 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.246 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.271 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.320 243456 DEBUG nova.compute.manager [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.321 243456 DEBUG oslo_concurrency.lockutils [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.321 243456 DEBUG oslo_concurrency.lockutils [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.321 243456 DEBUG oslo_concurrency.lockutils [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.322 243456 DEBUG nova.compute.manager [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Processing event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.322 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.328 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273510.328399, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.329 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.336 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.348 243456 INFO nova.virt.libvirt.driver [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance spawned successfully.#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.349 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.375 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.389 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.389 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.392 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.393 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.393 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.393 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.402 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:50 np0005634017 podman[297543]: 2026-02-28 10:11:50.451590187 +0000 UTC m=+0.064657987 container create ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.455 243456 INFO nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 5.91 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.455 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.472 243456 DEBUG nova.network.neutron [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:50 np0005634017 systemd[1]: Started libpod-conmon-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1.scope.
Feb 28 05:11:50 np0005634017 podman[297543]: 2026-02-28 10:11:50.418841947 +0000 UTC m=+0.031909757 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:11:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:11:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d333aed14cb07f78bcef36484a3c058b6aad07f16329dc547e47f3c304a89cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.548 243456 INFO nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 7.05 seconds to build instance.#033[00m
Feb 28 05:11:50 np0005634017 podman[297543]: 2026-02-28 10:11:50.550794253 +0000 UTC m=+0.163862123 container init ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 05:11:50 np0005634017 podman[297543]: 2026-02-28 10:11:50.55601249 +0000 UTC m=+0.169080320 container start ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.570 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:50 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : New worker (297565) forked
Feb 28 05:11:50 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : Loading success.
Feb 28 05:11:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484140731' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.770 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.799 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:50 np0005634017 nova_compute[243452]: 2026-02-28 10:11:50.805 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.146 243456 DEBUG nova.compute.manager [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.149 243456 DEBUG nova.compute.manager [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.150 243456 DEBUG oslo_concurrency.lockutils [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912729658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.432 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.436 243456 DEBUG nova.virt.libvirt.vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-440852658',display_name='tempest-ServerActionsTestOtherB-server-440852658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-440852658',id=64,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-sfdugq8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:45Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=56badf5b-d05a-4123-b43c-087a91e0e3b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.437 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.439 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.441 243456 DEBUG nova.objects.instance [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.460 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <uuid>56badf5b-d05a-4123-b43c-087a91e0e3b6</uuid>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <name>instance-00000040</name>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestOtherB-server-440852658</nova:name>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:11:50</nova:creationTime>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <nova:port uuid="2a10bc2f-52a7-47e7-a308-eaa218f335b6">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <entry name="serial">56badf5b-d05a-4123-b43c-087a91e0e3b6</entry>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <entry name="uuid">56badf5b-d05a-4123-b43c-087a91e0e3b6</entry>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/56badf5b-d05a-4123-b43c-087a91e0e3b6_disk">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:25:29:e4"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <target dev="tap2a10bc2f-52"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/console.log" append="off"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:11:51 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:11:51 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:11:51 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:11:51 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.470 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Preparing to wait for external event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.471 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.472 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.472 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.474 243456 DEBUG nova.virt.libvirt.vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-440852658',display_name='tempest-ServerActionsTestOtherB-server-440852658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-440852658',id=64,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-sfdugq8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:45Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=56badf5b-d05a-4123-b43c-087a91e0e3b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.475 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.476 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.477 243456 DEBUG os_vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.478 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.479 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.480 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.484 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a10bc2f-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.485 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a10bc2f-52, col_values=(('external_ids', {'iface-id': '2a10bc2f-52a7-47e7-a308-eaa218f335b6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:29:e4', 'vm-uuid': '56badf5b-d05a-4123-b43c-087a91e0e3b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:51 np0005634017 NetworkManager[49805]: <info>  [1772273511.4884] manager: (tap2a10bc2f-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.496 243456 INFO os_vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52')#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.551 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.552 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.553 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:25:29:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.553 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Using config drive#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.581 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.926 243456 DEBUG nova.network.neutron [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updated VIF entry in instance network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.927 243456 DEBUG nova.network.neutron [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:51 np0005634017 nova_compute[243452]: 2026-02-28 10:11:51.946 243456 DEBUG oslo_concurrency.lockutils [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.025 243456 DEBUG nova.network.neutron [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 481 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.6 MiB/s wr, 193 op/s
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.205 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Creating config drive at /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.209 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppgkcws94 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.358 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppgkcws94" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.478 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.494 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.556 243456 DEBUG oslo_concurrency.lockutils [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.557 243456 DEBUG nova.compute.manager [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.557 243456 DEBUG nova.compute.manager [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] network_info to inject: |[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.561 243456 DEBUG oslo_concurrency.lockutils [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.561 243456 DEBUG nova.network.neutron [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.669 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.670 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Deleting local config drive /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config because it was imported into RBD.#033[00m
Feb 28 05:11:52 np0005634017 NetworkManager[49805]: <info>  [1772273512.7248] manager: (tap2a10bc2f-52): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Feb 28 05:11:52 np0005634017 kernel: tap2a10bc2f-52: entered promiscuous mode
Feb 28 05:11:52 np0005634017 systemd-udevd[297436]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:52Z|00513|binding|INFO|Claiming lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 for this chassis.
Feb 28 05:11:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:52Z|00514|binding|INFO|2a10bc2f-52a7-47e7-a308-eaa218f335b6: Claiming fa:16:3e:25:29:e4 10.100.0.6
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.738 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:29:e4 10.100.0.6'], port_security=['fa:16:3e:25:29:e4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56badf5b-d05a-4123-b43c-087a91e0e3b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2a10bc2f-52a7-47e7-a308-eaa218f335b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.739 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:11:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:52Z|00515|binding|INFO|Setting lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 ovn-installed in OVS
Feb 28 05:11:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:52Z|00516|binding|INFO|Setting lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 up in Southbound
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:52 np0005634017 NetworkManager[49805]: <info>  [1772273512.7462] device (tap2a10bc2f-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:11:52 np0005634017 NetworkManager[49805]: <info>  [1772273512.7467] device (tap2a10bc2f-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.763 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7254812-8ecb-4e62-8127-d090df8fb515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:52 np0005634017 systemd-machined[209480]: New machine qemu-71-instance-00000040.
Feb 28 05:11:52 np0005634017 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.795 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[126afbeb-e426-4bfe-8811-530e2ddea327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.800 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7ced0625-3003-4cee-84ea-d35b53f9e664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4c83af92-f82b-404b-bca6-32ad5c072185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee488bf8-4a00-4b5d-ae5e-040fae3674d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297704, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f14cd5-cafc-4c9e-bfeb-f9866c58a2ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297705, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297705, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.883 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:52 np0005634017 nova_compute[243452]: 2026-02-28 10:11:52.887 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.888 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.888 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:11:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.329 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273513.3291252, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.330 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Started (Lifecycle Event)#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.350 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.356 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273513.333007, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.357 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.379 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.409 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.429 243456 DEBUG nova.compute.manager [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.429 243456 DEBUG oslo_concurrency.lockutils [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 DEBUG oslo_concurrency.lockutils [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 DEBUG oslo_concurrency.lockutils [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 DEBUG nova.compute.manager [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] No waiting events found dispatching network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 WARNING nova.compute.manager [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received unexpected event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.577 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.578 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.578 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.579 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.579 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.580 243456 INFO nova.compute.manager [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Terminating instance#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.581 243456 DEBUG nova.compute.manager [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:11:53 np0005634017 kernel: tap52a3be20-61 (unregistering): left promiscuous mode
Feb 28 05:11:53 np0005634017 NetworkManager[49805]: <info>  [1772273513.6524] device (tap52a3be20-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:53Z|00517|binding|INFO|Releasing lport 52a3be20-6165-41ea-9677-2b0575c65db6 from this chassis (sb_readonly=0)
Feb 28 05:11:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:53Z|00518|binding|INFO|Setting lport 52a3be20-6165-41ea-9677-2b0575c65db6 down in Southbound
Feb 28 05:11:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:11:53Z|00519|binding|INFO|Removing iface tap52a3be20-61 ovn-installed in OVS
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.668 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:57:92 10.100.0.4'], port_security=['fa:16:3e:7d:57:92 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '59d0cb01-5644-425e-82b1-b79cf4265dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81c61437-7ddd-45da-a105-90d1e0bc7134', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af2c91609b444c458a32203261ac88d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a493f80-3664-4942-99b2-7d547baf2c48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8acc948-8b91-464a-8bb5-c73edb1af1bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52a3be20-6165-41ea-9677-2b0575c65db6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.669 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52a3be20-6165-41ea-9677-2b0575c65db6 in datapath 81c61437-7ddd-45da-a105-90d1e0bc7134 unbound from our chassis#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.671 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81c61437-7ddd-45da-a105-90d1e0bc7134, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58856626-db38-4752-aa28-0ef7fa0d63e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.673 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 namespace which is not needed anymore#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Feb 28 05:11:53 np0005634017 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 3.655s CPU time.
Feb 28 05:11:53 np0005634017 systemd-machined[209480]: Machine qemu-70-instance-0000003f terminated.
Feb 28 05:11:53 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : haproxy version is 2.8.14-c23fe91
Feb 28 05:11:53 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : path to executable is /usr/sbin/haproxy
Feb 28 05:11:53 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [WARNING]  (297563) : Exiting Master process...
Feb 28 05:11:53 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [ALERT]    (297563) : Current worker (297565) exited with code 143 (Terminated)
Feb 28 05:11:53 np0005634017 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [WARNING]  (297563) : All workers exited. Exiting... (0)
Feb 28 05:11:53 np0005634017 systemd[1]: libpod-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1.scope: Deactivated successfully.
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.827 243456 INFO nova.virt.libvirt.driver [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance destroyed successfully.#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.828 243456 DEBUG nova.objects.instance [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lazy-loading 'resources' on Instance uuid 59d0cb01-5644-425e-82b1-b79cf4265dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:53 np0005634017 podman[297768]: 2026-02-28 10:11:53.830622916 +0000 UTC m=+0.049907792 container died ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.857 243456 DEBUG nova.virt.libvirt.vif [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-536333370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-536333370',id=63,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af2c91609b444c458a32203261ac88d3',ramdisk_id='',reservation_id='r-mp5idvof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owne
r_project_name='tempest-InstanceActionsV221TestJSON-644416342',owner_user_name='tempest-InstanceActionsV221TestJSON-644416342-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:50Z,user_data=None,user_id='05f7daf505a349dcb8574e9ef6f061fb',uuid=59d0cb01-5644-425e-82b1-b79cf4265dfb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.858 243456 DEBUG nova.network.os_vif_util [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converting VIF {"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.859 243456 DEBUG nova.network.os_vif_util [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.859 243456 DEBUG os_vif [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:11:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1-userdata-shm.mount: Deactivated successfully.
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.861 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a3be20-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7d333aed14cb07f78bcef36484a3c058b6aad07f16329dc547e47f3c304a89cb-merged.mount: Deactivated successfully.
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.868 243456 INFO os_vif [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61')#033[00m
Feb 28 05:11:53 np0005634017 podman[297768]: 2026-02-28 10:11:53.88097216 +0000 UTC m=+0.100257036 container cleanup ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:11:53 np0005634017 systemd[1]: libpod-conmon-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1.scope: Deactivated successfully.
Feb 28 05:11:53 np0005634017 podman[297820]: 2026-02-28 10:11:53.974814156 +0000 UTC m=+0.070345677 container remove ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.979 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e27b60f-06c7-485d-b55c-f476b6bde3a1]: (4, ('Sat Feb 28 10:11:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 (ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1)\ned7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1\nSat Feb 28 10:11:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 (ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1)\ned7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37ddaace-d690-4a6f-b7f2-e785bc4dcf58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.982 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c61437-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:11:53 np0005634017 kernel: tap81c61437-70: left promiscuous mode
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 nova_compute[243452]: 2026-02-28 10:11:53.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.995 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be2bbf08-347f-4d66-9fe6-f673bd1cf1b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e12f7c89-49f2-40cc-a282-794103089b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.012 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db608de0-0f3d-451d-8903-0bd51b08c845]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.029 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59d009fe-5d45-4a21-b424-fa44ac844058]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501710, 'reachable_time': 33165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297838, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.032 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:11:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.032 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d574115d-0ebf-40bc-bfd8-363e68eb063f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:11:54 np0005634017 systemd[1]: run-netns-ovnmeta\x2d81c61437\x2d7ddd\x2d45da\x2da105\x2d90d1e0bc7134.mount: Deactivated successfully.
Feb 28 05:11:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 484 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.1 MiB/s wr, 198 op/s
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.094 243456 DEBUG nova.objects.instance [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'flavor' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.116 243456 DEBUG oslo_concurrency.lockutils [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.256 243456 INFO nova.virt.libvirt.driver [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deleting instance files /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb_del#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.257 243456 INFO nova.virt.libvirt.driver [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deletion of /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb_del complete#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.312 243456 INFO nova.compute.manager [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.313 243456 DEBUG oslo.service.loopingcall [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.314 243456 DEBUG nova.compute.manager [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.314 243456 DEBUG nova.network.neutron [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.332 243456 DEBUG nova.network.neutron [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.333 243456 DEBUG nova.network.neutron [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.347 243456 DEBUG oslo_concurrency.lockutils [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:54 np0005634017 nova_compute[243452]: 2026-02-28 10:11:54.348 243456 DEBUG oslo_concurrency.lockutils [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.077 243456 DEBUG nova.network.neutron [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.095 243456 INFO nova.compute.manager [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 0.78 seconds to deallocate network for instance.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.155 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.156 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.191 243456 DEBUG nova.network.neutron [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.253 243456 DEBUG nova.compute.manager [req-c816ddcf-488c-4c68-900c-4dd2ee8a28b0 req-b7eeaf59-7508-4d11-8337-1c8435bc05d6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-deleted-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.292 243456 DEBUG oslo_concurrency.processutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.532 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.535 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.535 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Processing event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] No waiting events found dispatching network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 WARNING nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received unexpected event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-unplugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] No waiting events found dispatching network-vif-unplugged-52a3be20-6165-41ea-9677-2b0575c65db6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 WARNING nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received unexpected event network-vif-unplugged-52a3be20-6165-41ea-9677-2b0575c65db6 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] No waiting events found dispatching network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.540 243456 WARNING nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received unexpected event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.540 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.561 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273515.5443044, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.561 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.563 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.588 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance spawned successfully.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.588 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.607 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.814 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.817 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.817 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.817 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.818 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.818 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.818 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.859 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:11:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:11:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572110918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.894 243456 INFO nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Took 10.13 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.894 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.907 243456 DEBUG oslo_concurrency.processutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.912 243456 DEBUG nova.compute.provider_tree [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.935 243456 DEBUG nova.scheduler.client.report [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.965 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.968 243456 INFO nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Took 11.49 seconds to build instance.#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.986 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:55 np0005634017 nova_compute[243452]: 2026-02-28 10:11:55.999 243456 INFO nova.scheduler.client.report [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Deleted allocations for instance 59d0cb01-5644-425e-82b1-b79cf4265dfb#033[00m
Feb 28 05:11:56 np0005634017 nova_compute[243452]: 2026-02-28 10:11:56.062 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 463 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 231 op/s
Feb 28 05:11:56 np0005634017 nova_compute[243452]: 2026-02-28 10:11:56.567 243456 DEBUG nova.network.neutron [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:11:56 np0005634017 nova_compute[243452]: 2026-02-28 10:11:56.584 243456 DEBUG oslo_concurrency.lockutils [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:11:56 np0005634017 nova_compute[243452]: 2026-02-28 10:11:56.585 243456 DEBUG nova.compute.manager [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Feb 28 05:11:56 np0005634017 nova_compute[243452]: 2026-02-28 10:11:56.585 243456 DEBUG nova.compute.manager [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] network_info to inject: |[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Feb 28 05:11:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:57.851 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:11:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:57.852 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:11:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:11:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:11:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:11:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 227 op/s
Feb 28 05:11:58 np0005634017 nova_compute[243452]: 2026-02-28 10:11:58.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:11:58 np0005634017 nova_compute[243452]: 2026-02-28 10:11:58.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.034 243456 DEBUG nova.compute.manager [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.034 243456 DEBUG nova.compute.manager [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.035 243456 DEBUG oslo_concurrency.lockutils [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.035 243456 DEBUG oslo_concurrency.lockutils [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.035 243456 DEBUG nova.network.neutron [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 249 op/s
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.536 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.537 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.537 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.538 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.538 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.539 243456 INFO nova.compute.manager [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Terminating instance#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.540 243456 DEBUG nova.compute.manager [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:12:00 np0005634017 kernel: tap52c9c534-2d (unregistering): left promiscuous mode
Feb 28 05:12:00 np0005634017 NetworkManager[49805]: <info>  [1772273520.5983] device (tap52c9c534-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:12:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:00Z|00520|binding|INFO|Releasing lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 from this chassis (sb_readonly=0)
Feb 28 05:12:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:00Z|00521|binding|INFO|Setting lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 down in Southbound
Feb 28 05:12:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:00Z|00522|binding|INFO|Removing iface tap52c9c534-2d ovn-installed in OVS
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Feb 28 05:12:00 np0005634017 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Consumed 13.043s CPU time.
Feb 28 05:12:00 np0005634017 systemd-machined[209480]: Machine qemu-68-instance-0000003d terminated.
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.671 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:55:cf 10.100.0.8'], port_security=['fa:16:3e:ec:55:cf 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc527bc2-3cc2-4ce2-b99e-24d252793d06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '676657ed4ac447c580a9480d26bd7f87', 'neutron:revision_number': '6', 'neutron:security_group_ids': '27cca83f-7b50-4cdc-b0c5-9a7ed57a0cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=527b0cac-8159-448a-a976-d464a8e38db9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52c9c534-2d55-4b9a-bd70-a8114ac975c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.672 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 in datapath 0192f192-ffd8-4bb7-b267-d74d97cc6cf5 unbound from our chassis#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.674 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0192f192-ffd8-4bb7-b267-d74d97cc6cf5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.675 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6124d6b7-3700-4aba-b83c-a184ec5a349a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.676 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 namespace which is not needed anymore#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.779 243456 INFO nova.virt.libvirt.driver [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance destroyed successfully.#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.780 243456 DEBUG nova.objects.instance [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'resources' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:00 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : haproxy version is 2.8.14-c23fe91
Feb 28 05:12:00 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : path to executable is /usr/sbin/haproxy
Feb 28 05:12:00 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [WARNING]  (295869) : Exiting Master process...
Feb 28 05:12:00 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [WARNING]  (295869) : Exiting Master process...
Feb 28 05:12:00 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [ALERT]    (295869) : Current worker (295871) exited with code 143 (Terminated)
Feb 28 05:12:00 np0005634017 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [WARNING]  (295869) : All workers exited. Exiting... (0)
Feb 28 05:12:00 np0005634017 systemd[1]: libpod-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625.scope: Deactivated successfully.
Feb 28 05:12:00 np0005634017 podman[297885]: 2026-02-28 10:12:00.810811774 +0000 UTC m=+0.052349121 container died 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.832 243456 DEBUG nova.virt.libvirt.vif [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1011485011',display_name='tempest-AttachInterfacesUnderV243Test-server-1011485011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1011485011',id=61,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNjnzagFJoSFtRanAJV8Vh8se7ZkhkWvbV0r0AVyZNItt2AKKQ99gHb6oQBNip6kETClAVoOoJsFdbRjy0fqH4CfM5YyB80Yui5ULnbk8Emyxy1XpzzeCEJgH6ejC7SlXQ==',key_name='tempest-keypair-1736116569',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='676657ed4ac447c580a9480d26bd7f87',ramdisk_id='',reservation_id='r-041m6mn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-809208084',owner_user_name='tempest-AttachInterfacesUnderV243Test-809208084-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d03368d5ddc403db8a8315dabe88681',uuid=fc527bc2-3cc2-4ce2-b99e-24d252793d06,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.834 243456 DEBUG nova.network.os_vif_util [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converting VIF {"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.835 243456 DEBUG nova.network.os_vif_util [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.835 243456 DEBUG os_vif [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.839 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52c9c534-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.845 243456 INFO os_vif [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d')#033[00m
Feb 28 05:12:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625-userdata-shm.mount: Deactivated successfully.
Feb 28 05:12:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-abd06fea17760e87046dfe6bdf5d2603ded7e2ad74c164232a00dda0bdcd2728-merged.mount: Deactivated successfully.
Feb 28 05:12:00 np0005634017 podman[297885]: 2026-02-28 10:12:00.887172839 +0000 UTC m=+0.128710186 container cleanup 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:12:00 np0005634017 systemd[1]: libpod-conmon-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625.scope: Deactivated successfully.
Feb 28 05:12:00 np0005634017 podman[297942]: 2026-02-28 10:12:00.970239622 +0000 UTC m=+0.057503926 container remove 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.979 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[612769d4-e521-42e7-8cae-0f00c5da9f2d]: (4, ('Sat Feb 28 10:12:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 (00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625)\n00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625\nSat Feb 28 10:12:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 (00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625)\n00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.982 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51090b13-bdd3-462c-ba99-d14e9c28159c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.983 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0192f192-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 kernel: tap0192f192-f0: left promiscuous mode
Feb 28 05:12:00 np0005634017 nova_compute[243452]: 2026-02-28 10:12:00.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[845108cc-298a-4132-a2b5-ea6429eced39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.022 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d80d25f3-c24e-4938-aeb4-43013290bee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1131701-6ffa-4a6b-b0e6-44b795946e4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.040 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a73dc29-9839-485f-a8ad-90a0a284d1bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499450, 'reachable_time': 35700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297957, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.043 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:12:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.043 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[727d1d55-78fe-413c-94fb-e8cbf4d48aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:01 np0005634017 systemd[1]: run-netns-ovnmeta\x2d0192f192\x2dffd8\x2d4bb7\x2db267\x2dd74d97cc6cf5.mount: Deactivated successfully.
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.144 243456 INFO nova.compute.manager [None req-201d74ce-cf60-42a0-a1ac-43804dffa5cc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Pausing#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.145 243456 DEBUG nova.objects.instance [None req-201d74ce-cf60-42a0-a1ac-43804dffa5cc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.181 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273521.1814907, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.182 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.184 243456 DEBUG nova.compute.manager [None req-201d74ce-cf60-42a0-a1ac-43804dffa5cc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.199 243456 INFO nova.virt.libvirt.driver [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deleting instance files /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06_del#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.200 243456 INFO nova.virt.libvirt.driver [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deletion of /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06_del complete#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.226 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.229 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.288 243456 INFO nova.compute.manager [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.289 243456 DEBUG oslo.service.loopingcall [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.289 243456 DEBUG nova.compute.manager [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.289 243456 DEBUG nova.network.neutron [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.731 243456 DEBUG nova.compute.manager [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-unplugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG oslo_concurrency.lockutils [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG oslo_concurrency.lockutils [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG oslo_concurrency.lockutils [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG nova.compute.manager [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] No waiting events found dispatching network-vif-unplugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:01 np0005634017 nova_compute[243452]: 2026-02-28 10:12:01.734 243456 DEBUG nova.compute.manager [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-unplugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:12:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1016 KiB/s wr, 219 op/s
Feb 28 05:12:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:02.471 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:02.476 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.477 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.496 243456 DEBUG nova.network.neutron [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.523 243456 INFO nova.compute.manager [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 1.23 seconds to deallocate network for instance.#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.589 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.590 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.594 243456 DEBUG nova.compute.manager [req-8e20f2da-1c7e-41b4-ab55-fb15fe272c46 req-5f564e98-f687-4a29-abd6-f0c9c6274c27 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-deleted-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.730 243456 DEBUG oslo_concurrency.processutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.982 243456 DEBUG nova.network.neutron [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:12:02 np0005634017 nova_compute[243452]: 2026-02-28 10:12:02.983 243456 DEBUG nova.network.neutron [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.002 243456 DEBUG oslo_concurrency.lockutils [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:03 np0005634017 podman[297980]: 2026-02-28 10:12:03.146210044 +0000 UTC m=+0.076918181 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 28 05:12:03 np0005634017 podman[297979]: 2026-02-28 10:12:03.180672532 +0000 UTC m=+0.110546586 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 05:12:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209316137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.303 243456 DEBUG oslo_concurrency.processutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.309 243456 DEBUG nova.compute.provider_tree [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.333 243456 DEBUG nova.scheduler.client.report [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.354 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.404 243456 INFO nova.scheduler.client.report [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Deleted allocations for instance fc527bc2-3cc2-4ce2-b99e-24d252793d06#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.485 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.974 243456 DEBUG nova.compute.manager [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.974 243456 DEBUG oslo_concurrency.lockutils [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 DEBUG oslo_concurrency.lockutils [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 DEBUG oslo_concurrency.lockutils [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 DEBUG nova.compute.manager [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] No waiting events found dispatching network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:03 np0005634017 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 WARNING nova.compute.manager [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received unexpected event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:12:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 400 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 71 KiB/s wr, 190 op/s
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.113 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.113 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.114 243456 INFO nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Shelving#033[00m
Feb 28 05:12:04 np0005634017 kernel: tap2a10bc2f-52 (unregistering): left promiscuous mode
Feb 28 05:12:04 np0005634017 NetworkManager[49805]: <info>  [1772273524.1778] device (tap2a10bc2f-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:12:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:04Z|00523|binding|INFO|Releasing lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 from this chassis (sb_readonly=0)
Feb 28 05:12:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:04Z|00524|binding|INFO|Setting lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 down in Southbound
Feb 28 05:12:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:04Z|00525|binding|INFO|Removing iface tap2a10bc2f-52 ovn-installed in OVS
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.198 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:29:e4 10.100.0.6'], port_security=['fa:16:3e:25:29:e4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56badf5b-d05a-4123-b43c-087a91e0e3b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2a10bc2f-52a7-47e7-a308-eaa218f335b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.201 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.204 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:12:04 np0005634017 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Deactivated successfully.
Feb 28 05:12:04 np0005634017 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Consumed 6.188s CPU time.
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2dcd51-a6cc-4956-ae62-090bc40183ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:04 np0005634017 systemd-machined[209480]: Machine qemu-71-instance-00000040 terminated.
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a198e53a-3c15-4da5-a209-b2415e855808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.269 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee958317-282a-4561-97f1-80cb4c563833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.306 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34f239fe-cd15-4d38-8cc7-a5f7703cf1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.330 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[438b5ef2-ef18-40ca-8150-777d9c1332ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298036, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.355 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51a18559-faf0-4a90-b034-a86d226fa978]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298038, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298038, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.358 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.366 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance destroyed successfully.#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.366 243456 DEBUG nova.objects.instance [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.371 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.372 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.693 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Beginning cold snapshot process#033[00m
Feb 28 05:12:04 np0005634017 nova_compute[243452]: 2026-02-28 10:12:04.889 243456 DEBUG nova.virt.libvirt.imagebackend [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:12:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:05Z|00526|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:12:05 np0005634017 nova_compute[243452]: 2026-02-28 10:12:05.230 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(15e4b23e728846d999ec44c208d8074e) on rbd image(56badf5b-d05a-4123-b43c-087a91e0e3b6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:12:05 np0005634017 nova_compute[243452]: 2026-02-28 10:12:05.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:05 np0005634017 nova_compute[243452]: 2026-02-28 10:12:05.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.082 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-unplugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.082 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.082 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] No waiting events found dispatching network-vif-unplugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 WARNING nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received unexpected event network-vif-unplugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.084 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.084 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] No waiting events found dispatching network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.084 243456 WARNING nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received unexpected event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Feb 28 05:12:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 358 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 42 KiB/s wr, 173 op/s
Feb 28 05:12:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb 28 05:12:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb 28 05:12:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.299 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/56badf5b-d05a-4123-b43c-087a91e0e3b6_disk@15e4b23e728846d999ec44c208d8074e to images/a76de62a-d69e-4c03-92ec-aaff7623c365 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.397 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/a76de62a-d69e-4c03-92ec-aaff7623c365 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:12:06 np0005634017 nova_compute[243452]: 2026-02-28 10:12:06.599 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(15e4b23e728846d999ec44c208d8074e) on rbd image(56badf5b-d05a-4123-b43c-087a91e0e3b6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:12:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb 28 05:12:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb 28 05:12:07 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb 28 05:12:07 np0005634017 nova_compute[243452]: 2026-02-28 10:12:07.300 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(a76de62a-d69e-4c03-92ec-aaff7623c365) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:12:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 370 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 510 KiB/s wr, 73 op/s
Feb 28 05:12:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb 28 05:12:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb 28 05:12:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb 28 05:12:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:08.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:08 np0005634017 nova_compute[243452]: 2026-02-28 10:12:08.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:08 np0005634017 nova_compute[243452]: 2026-02-28 10:12:08.824 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273513.821662, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:08 np0005634017 nova_compute[243452]: 2026-02-28 10:12:08.826 243456 INFO nova.compute.manager [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:12:08 np0005634017 nova_compute[243452]: 2026-02-28 10:12:08.860 243456 DEBUG nova.compute.manager [None req-2957d2d9-acf5-4219-89af-2e3ebc2afebc - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:10Z|00527|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 397 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 54 op/s
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.262 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Snapshot image upload complete#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.264 243456 DEBUG nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.331 243456 INFO nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Shelve offloading#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.339 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance destroyed successfully.#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.340 243456 DEBUG nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.343 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.344 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.345 243456 DEBUG nova.network.neutron [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:12:10 np0005634017 nova_compute[243452]: 2026-02-28 10:12:10.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Feb 28 05:12:12 np0005634017 nova_compute[243452]: 2026-02-28 10:12:12.107 243456 DEBUG nova.network.neutron [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:12 np0005634017 nova_compute[243452]: 2026-02-28 10:12:12.130 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:12 np0005634017 nova_compute[243452]: 2026-02-28 10:12:12.355 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:13 np0005634017 nova_compute[243452]: 2026-02-28 10:12:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:13 np0005634017 nova_compute[243452]: 2026-02-28 10:12:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:13 np0005634017 nova_compute[243452]: 2026-02-28 10:12:13.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Feb 28 05:12:14 np0005634017 nova_compute[243452]: 2026-02-28 10:12:14.973 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance destroyed successfully.#033[00m
Feb 28 05:12:14 np0005634017 nova_compute[243452]: 2026-02-28 10:12:14.974 243456 DEBUG nova.objects.instance [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:14 np0005634017 nova_compute[243452]: 2026-02-28 10:12:14.998 243456 DEBUG nova.virt.libvirt.vif [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-440852658',display_name='tempest-ServerActionsTestOtherB-server-440852658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-440852658',id=64,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-sfdugq8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:10.264021',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a76de62a-d69e-4c03-92ec-aaff7623c365'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:04Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=56badf5b-d05a-4123-b43c-087a91e0e3b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:12:14 np0005634017 nova_compute[243452]: 2026-02-28 10:12:14.999 243456 DEBUG nova.network.os_vif_util [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.000 243456 DEBUG nova.network.os_vif_util [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.000 243456 DEBUG os_vif [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.002 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a10bc2f-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.006 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.009 243456 INFO os_vif [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52')#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.165 243456 DEBUG nova.compute.manager [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.166 243456 DEBUG nova.compute.manager [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing instance network info cache due to event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.166 243456 DEBUG oslo_concurrency.lockutils [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.167 243456 DEBUG oslo_concurrency.lockutils [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.167 243456 DEBUG nova.network.neutron [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.317 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Deleting instance files /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6_del#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.318 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Deletion of /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6_del complete#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.425 243456 INFO nova.scheduler.client.report [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 56badf5b-d05a-4123-b43c-087a91e0e3b6#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.477 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.478 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.549 243456 DEBUG oslo_concurrency.processutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.778 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273520.7764163, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.779 243456 INFO nova.compute.manager [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:12:15 np0005634017 nova_compute[243452]: 2026-02-28 10:12:15.809 243456 DEBUG nova.compute.manager [None req-04ed2f7a-82f6-4346-8620-80120f7e14d8 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940536717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:16 np0005634017 nova_compute[243452]: 2026-02-28 10:12:16.090 243456 DEBUG oslo_concurrency.processutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:16 np0005634017 nova_compute[243452]: 2026-02-28 10:12:16.097 243456 DEBUG nova.compute.provider_tree [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 390 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 97 op/s
Feb 28 05:12:16 np0005634017 nova_compute[243452]: 2026-02-28 10:12:16.120 243456 DEBUG nova.scheduler.client.report [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:16 np0005634017 nova_compute[243452]: 2026-02-28 10:12:16.153 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:16 np0005634017 nova_compute[243452]: 2026-02-28 10:12:16.228 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:16 np0005634017 nova_compute[243452]: 2026-02-28 10:12:16.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.337 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.337 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.338 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.338 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737643756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:17 np0005634017 nova_compute[243452]: 2026-02-28 10:12:17.884 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.004 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.005 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.009 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.009 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:12:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb 28 05:12:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb 28 05:12:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb 28 05:12:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 378 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.155 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.157 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.880274675786495GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.157 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.157 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.162 243456 DEBUG nova.network.neutron [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updated VIF entry in instance network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.162 243456 DEBUG nova.network.neutron [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.189 243456 DEBUG oslo_concurrency.lockutils [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30a5d845-ce28-490a-afe8-3b7552f02c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.255 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.327 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947280260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.861 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.867 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.888 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.912 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:12:18 np0005634017 nova_compute[243452]: 2026-02-28 10:12:18.913 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:19 np0005634017 nova_compute[243452]: 2026-02-28 10:12:19.364 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273524.3632941, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:19 np0005634017 nova_compute[243452]: 2026-02-28 10:12:19.365 243456 INFO nova.compute.manager [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:12:19 np0005634017 nova_compute[243452]: 2026-02-28 10:12:19.397 243456 DEBUG nova.compute.manager [None req-bf8c5cf7-6c0c-4ec7-ba9e-b27ea4dc06eb - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 430 KiB/s wr, 95 op/s
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.915 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.916 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.917 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.935 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.935 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.935 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:12:20 np0005634017 nova_compute[243452]: 2026-02-28 10:12:20.936 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:21 np0005634017 nova_compute[243452]: 2026-02-28 10:12:21.006 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:21 np0005634017 nova_compute[243452]: 2026-02-28 10:12:21.007 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:21 np0005634017 nova_compute[243452]: 2026-02-28 10:12:21.007 243456 INFO nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Shelving#033[00m
Feb 28 05:12:21 np0005634017 nova_compute[243452]: 2026-02-28 10:12:21.030 243456 DEBUG nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:12:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 358 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 32 op/s
Feb 28 05:12:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:23 np0005634017 kernel: tap037eb744-30 (unregistering): left promiscuous mode
Feb 28 05:12:23 np0005634017 NetworkManager[49805]: <info>  [1772273543.3450] device (tap037eb744-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:12:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:23Z|00528|binding|INFO|Releasing lport 037eb744-3024-4a3d-b52c-894abe1cbac8 from this chassis (sb_readonly=0)
Feb 28 05:12:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:23Z|00529|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 down in Southbound
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.352 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:23Z|00530|binding|INFO|Removing iface tap037eb744-30 ovn-installed in OVS
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.362 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.364 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.366 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce8e2cd-d2ea-447a-8e70-01a6169182db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:23 np0005634017 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Feb 28 05:12:23 np0005634017 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003a.scope: Consumed 16.952s CPU time.
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.408 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5eec3869-6b3f-46c4-98f6-65f27d136c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:23 np0005634017 systemd-machined[209480]: Machine qemu-65-instance-0000003a terminated.
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.411 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ab92aa33-4902-4e55-9ee7-eba9276554bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.438 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f22cae30-6488-4063-a26e-6ded47d87b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26f3bc19-b163-4635-a2f5-c8a3ccae4528]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298289, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[149706f5-d50f-4e6a-895b-afa47b5da0bc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298290, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298290, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.467 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.473 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.474 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.474 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.475 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.475 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.570 243456 DEBUG nova.compute.manager [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG oslo_concurrency.lockutils [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG oslo_concurrency.lockutils [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG oslo_concurrency.lockutils [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG nova.compute.manager [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.572 243456 WARNING nova.compute.manager [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state shelving.#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:23 np0005634017 nova_compute[243452]: 2026-02-28 10:12:23.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.051 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance shutdown successfully after 3 seconds.#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.055 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.056 243456 DEBUG nova.objects.instance [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 358 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 KiB/s wr, 32 op/s
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.414 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning cold snapshot process#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.458 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.482 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.483 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.484 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.484 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.484 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.581 243456 DEBUG nova.virt.libvirt.imagebackend [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:12:24 np0005634017 nova_compute[243452]: 2026-02-28 10:12:24.917 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(35a42d0e4ba9432381c70d253b64fb1b) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:12:25 np0005634017 nova_compute[243452]: 2026-02-28 10:12:25.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb 28 05:12:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb 28 05:12:25 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb 28 05:12:25 np0005634017 nova_compute[243452]: 2026-02-28 10:12:25.290 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@35a42d0e4ba9432381c70d253b64fb1b to images/337de210-963f-41af-92a4-16b5716eae17 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:12:25 np0005634017 nova_compute[243452]: 2026-02-28 10:12:25.403 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/337de210-963f-41af-92a4-16b5716eae17 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:12:25 np0005634017 nova_compute[243452]: 2026-02-28 10:12:25.687 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(35a42d0e4ba9432381c70d253b64fb1b) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:12:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 364 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 628 KiB/s wr, 39 op/s
Feb 28 05:12:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb 28 05:12:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb 28 05:12:26 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.275 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(337de210-963f-41af-92a4-16b5716eae17) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.362 243456 DEBUG nova.compute.manager [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.362 243456 DEBUG oslo_concurrency.lockutils [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 DEBUG oslo_concurrency.lockutils [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 DEBUG oslo_concurrency.lockutils [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 DEBUG nova.compute.manager [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:26 np0005634017 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 WARNING nova.compute.manager [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Feb 28 05:12:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Feb 28 05:12:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Feb 28 05:12:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Feb 28 05:12:27 np0005634017 nova_compute[243452]: 2026-02-28 10:12:27.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 377 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 54 op/s
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.240 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.242 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.267 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.368 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.370 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.379 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.379 243456 INFO nova.compute.claims [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.594 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.941 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.942 243456 DEBUG nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:28 np0005634017 nova_compute[243452]: 2026-02-28 10:12:28.994 243456 INFO nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Shelve offloading#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.002 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.002 243456 DEBUG nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.005 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.005 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.005 243456 DEBUG nova.network.neutron [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:12:29
Feb 28 05:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Feb 28 05:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:12:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027665590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.203 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.209 243456 DEBUG nova.compute.provider_tree [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.230 243456 DEBUG nova.scheduler.client.report [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.261 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.261 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.336 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.336 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.357 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.374 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.470 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.473 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.473 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Creating image(s)#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.503 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.537 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.572 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.578 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.654 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.655 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.656 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.656 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.684 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.689 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 920aae47-311f-4921-818d-92025cc1abee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.742 243456 DEBUG nova.policy [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4285303dac0b4ee497a908cdca0aecf4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8174cdce90534957854824466483d42b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:12:29 np0005634017 nova_compute[243452]: 2026-02-28 10:12:29.945 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 920aae47-311f-4921-818d-92025cc1abee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.019 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] resizing rbd image 920aae47-311f-4921-818d-92025cc1abee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.095 243456 DEBUG nova.objects.instance [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lazy-loading 'migration_context' on Instance uuid 920aae47-311f-4921-818d-92025cc1abee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 396 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.5 MiB/s wr, 113 op/s
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.227 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.228 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Ensure instance console log exists: /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.228 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.228 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:30 np0005634017 nova_compute[243452]: 2026-02-28 10:12:30.229 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:12:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:12:31 np0005634017 nova_compute[243452]: 2026-02-28 10:12:31.289 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Successfully created port: 0df27d88-0475-4a4d-8a7c-883b977bc7ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:12:31 np0005634017 nova_compute[243452]: 2026-02-28 10:12:31.436 243456 DEBUG nova.network.neutron [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:31 np0005634017 nova_compute[243452]: 2026-02-28 10:12:31.482 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 453 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.6 MiB/s wr, 154 op/s
Feb 28 05:12:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.603 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Successfully updated port: 0df27d88-0475-4a4d-8a7c-883b977bc7ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.626 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.627 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquired lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.627 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.760 243456 DEBUG nova.compute.manager [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-changed-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.760 243456 DEBUG nova.compute.manager [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Refreshing instance network info cache due to event network-changed-0df27d88-0475-4a4d-8a7c-883b977bc7ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:12:33 np0005634017 nova_compute[243452]: 2026-02-28 10:12:33.761 243456 DEBUG oslo_concurrency.lockutils [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 477 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.5 MiB/s wr, 133 op/s
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.155 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:12:34 np0005634017 podman[298632]: 2026-02-28 10:12:34.1677674 +0000 UTC m=+0.094955801 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 28 05:12:34 np0005634017 podman[298631]: 2026-02-28 10:12:34.190986503 +0000 UTC m=+0.123061481 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.203 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.204 243456 DEBUG nova.objects.instance [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.219 243456 DEBUG nova.virt.libvirt.vif [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:28.942409',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='337de210-963f-41af-92a4-16b5716eae17'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.220 243456 DEBUG nova.network.os_vif_util [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.221 243456 DEBUG nova.network.os_vif_util [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.221 243456 DEBUG os_vif [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.223 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap037eb744-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.230 243456 INFO os_vif [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.454 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting instance files /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.455 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deletion of /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del complete#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.551 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.552 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.573 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.604 243456 INFO nova.scheduler.client.report [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 30a5d845-ce28-490a-afe8-3b7552f02c63#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.667 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.668 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.684 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:34 np0005634017 nova_compute[243452]: 2026-02-28 10:12:34.782 243456 DEBUG oslo_concurrency.processutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510402414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.350 243456 DEBUG oslo_concurrency.processutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.358 243456 DEBUG nova.compute.provider_tree [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.375 243456 DEBUG nova.scheduler.client.report [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.392 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.395 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.401 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.402 243456 INFO nova.compute.claims [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.462 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.465 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updating instance_info_cache with network_info: [{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.494 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Releasing lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.495 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance network_info: |[{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.495 243456 DEBUG oslo_concurrency.lockutils [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.496 243456 DEBUG nova.network.neutron [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Refreshing network info cache for port 0df27d88-0475-4a4d-8a7c-883b977bc7ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.499 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start _get_guest_xml network_info=[{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.506 243456 WARNING nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.512 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.512 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.522 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.523 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.523 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.524 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.524 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.524 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.530 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:35 np0005634017 nova_compute[243452]: 2026-02-28 10:12:35.632 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585548673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.087 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 424 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.4 MiB/s wr, 143 op/s
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.115 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.120 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943179781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.211 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.217 243456 DEBUG nova.compute.provider_tree [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.270 243456 DEBUG nova.scheduler.client.report [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.305 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.306 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.364 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.365 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.387 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.407 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.525 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.526 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.526 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Creating image(s)#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.564 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.584 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.618 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.624 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.691 243456 DEBUG nova.policy [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2465d2d41534ef098e24bdd413eefab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b83705c4693849a58c70b1271f24f320', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.702 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.703 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.703 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.703 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.726 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:12:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010978883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.730 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6cac1749-1126-44c9-b31c-1041025c52cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.781 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.783 243456 DEBUG nova.virt.libvirt.vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1001227597',display_name='tempest-InstanceActionsNegativeTestJSON-server-1001227597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1001227597',id=65,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8174cdce90534957854824466483d42b',ramdisk_id='',reservation_id='r-xoam7m7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1693183666',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1693183666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:29Z,user_data=None,user_id='4285303dac0b4ee497a908cdca0aecf4',uuid=920aae47-311f-4921-818d-92025cc1abee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.783 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converting VIF {"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.784 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.785 243456 DEBUG nova.objects.instance [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lazy-loading 'pci_devices' on Instance uuid 920aae47-311f-4921-818d-92025cc1abee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.800 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <uuid>920aae47-311f-4921-818d-92025cc1abee</uuid>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <name>instance-00000041</name>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1001227597</nova:name>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:12:35</nova:creationTime>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:user uuid="4285303dac0b4ee497a908cdca0aecf4">tempest-InstanceActionsNegativeTestJSON-1693183666-project-member</nova:user>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:project uuid="8174cdce90534957854824466483d42b">tempest-InstanceActionsNegativeTestJSON-1693183666</nova:project>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <nova:port uuid="0df27d88-0475-4a4d-8a7c-883b977bc7ad">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <entry name="serial">920aae47-311f-4921-818d-92025cc1abee</entry>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <entry name="uuid">920aae47-311f-4921-818d-92025cc1abee</entry>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/920aae47-311f-4921-818d-92025cc1abee_disk">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/920aae47-311f-4921-818d-92025cc1abee_disk.config">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e1:db:88"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <target dev="tap0df27d88-04"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/console.log" append="off"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:12:36 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:12:36 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:12:36 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:12:36 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.800 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Preparing to wait for external event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.800 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.801 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.801 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.801 243456 DEBUG nova.virt.libvirt.vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1001227597',display_name='tempest-InstanceActionsNegativeTestJSON-server-1001227597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1001227597',id=65,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8174cdce90534957854824466483d42b',ramdisk_id='',reservation_id='r-xoam7m7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1693183666',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1693183666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:29Z,user_data=None,user_id='4285303dac0b4ee497a908cdca0aecf4',uuid=920aae47-311f-4921-818d-92025cc1abee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.802 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converting VIF {"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.802 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.802 243456 DEBUG os_vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.803 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.804 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.806 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0df27d88-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.807 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0df27d88-04, col_values=(('external_ids', {'iface-id': '0df27d88-0475-4a4d-8a7c-883b977bc7ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:db:88', 'vm-uuid': '920aae47-311f-4921-818d-92025cc1abee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.809 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:36 np0005634017 NetworkManager[49805]: <info>  [1772273556.8095] manager: (tap0df27d88-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.816 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.818 243456 INFO os_vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04')#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.881 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.882 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.882 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] No VIF found with MAC fa:16:3e:e1:db:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.883 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Using config drive#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.925 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:36 np0005634017 nova_compute[243452]: 2026-02-28 10:12:36.991 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6cac1749-1126-44c9-b31c-1041025c52cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.022656414 +0000 UTC m=+0.044930904 container create 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:12:37 np0005634017 systemd[1]: Started libpod-conmon-2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003.scope.
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.076 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] resizing rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:12:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.001021086 +0000 UTC m=+0.023295576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.109377342 +0000 UTC m=+0.131651872 container init 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.116535914 +0000 UTC m=+0.138810424 container start 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:12:37 np0005634017 optimistic_antonelli[299112]: 167 167
Feb 28 05:12:37 np0005634017 systemd[1]: libpod-2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003.scope: Deactivated successfully.
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.122742338 +0000 UTC m=+0.145016848 container attach 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.12314749 +0000 UTC m=+0.145421980 container died 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:12:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a554d08977115c17003803cff50f639922991977f9daaf1d618246a1fe3c72ed-merged.mount: Deactivated successfully.
Feb 28 05:12:37 np0005634017 podman[299060]: 2026-02-28 10:12:37.161112507 +0000 UTC m=+0.183386997 container remove 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.174 243456 DEBUG nova.objects.instance [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lazy-loading 'migration_context' on Instance uuid 6cac1749-1126-44c9-b31c-1041025c52cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:37 np0005634017 systemd[1]: libpod-conmon-2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003.scope: Deactivated successfully.
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.188 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.189 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Ensure instance console log exists: /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.189 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.189 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.190 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:12:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:12:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.318124722 +0000 UTC m=+0.049207255 container create 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.333 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Creating config drive at /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.338 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkc73nzct execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:37 np0005634017 systemd[1]: Started libpod-conmon-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope.
Feb 28 05:12:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.302482102 +0000 UTC m=+0.033564685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.398558374 +0000 UTC m=+0.129640927 container init 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.406080595 +0000 UTC m=+0.137163128 container start 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.409125111 +0000 UTC m=+0.140207644 container attach 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.489 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkc73nzct" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.519 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.523 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config 920aae47-311f-4921-818d-92025cc1abee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.663 243456 DEBUG nova.network.neutron [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updated VIF entry in instance network info cache for port 0df27d88-0475-4a4d-8a7c-883b977bc7ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.664 243456 DEBUG nova.network.neutron [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updating instance_info_cache with network_info: [{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.669 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config 920aae47-311f-4921-818d-92025cc1abee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.669 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deleting local config drive /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config because it was imported into RBD.#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.680 243456 DEBUG oslo_concurrency.lockutils [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:37 np0005634017 kernel: tap0df27d88-04: entered promiscuous mode
Feb 28 05:12:37 np0005634017 NetworkManager[49805]: <info>  [1772273557.7186] manager: (tap0df27d88-04): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Feb 28 05:12:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:37Z|00531|binding|INFO|Claiming lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad for this chassis.
Feb 28 05:12:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:37Z|00532|binding|INFO|0df27d88-0475-4a4d-8a7c-883b977bc7ad: Claiming fa:16:3e:e1:db:88 10.100.0.5
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.728 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:db:88 10.100.0.5'], port_security=['fa:16:3e:e1:db:88 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '920aae47-311f-4921-818d-92025cc1abee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed8f95d8-16ff-486a-9986-045e9e754d77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8174cdce90534957854824466483d42b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd62d1bc-27fc-4a40-8f5a-4d9d80596045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad705234-e36d-496f-9ca6-546e227c6770, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0df27d88-0475-4a4d-8a7c-883b977bc7ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.729 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0df27d88-0475-4a4d-8a7c-883b977bc7ad in datapath ed8f95d8-16ff-486a-9986-045e9e754d77 bound to our chassis#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.731 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed8f95d8-16ff-486a-9986-045e9e754d77#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:37Z|00533|binding|INFO|Setting lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad ovn-installed in OVS
Feb 28 05:12:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:37Z|00534|binding|INFO|Setting lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad up in Southbound
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:37 np0005634017 nova_compute[243452]: 2026-02-28 10:12:37.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b86bdbb4-7a64-4841-8db6-886597443e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.743 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped8f95d8-11 in ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.746 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped8f95d8-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.746 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45fc2b0d-6cd3-446a-aedb-841ac4480018]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11bc89ba-0f2e-416b-8082-1109a290a8ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 systemd-udevd[299254]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:12:37 np0005634017 systemd-machined[209480]: New machine qemu-72-instance-00000041.
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.764 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7e0f9d-e147-4c9d-b7cd-a146ddad833e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 NetworkManager[49805]: <info>  [1772273557.7701] device (tap0df27d88-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:12:37 np0005634017 NetworkManager[49805]: <info>  [1772273557.7707] device (tap0df27d88-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:12:37 np0005634017 systemd[1]: Started Virtual Machine qemu-72-instance-00000041.
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b678ce01-b9f3-4b73-9ba5-b4b28e0fc698]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.812 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8da299-5793-49c4-9e15-2307b8cd03b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 NetworkManager[49805]: <info>  [1772273557.8200] manager: (taped8f95d8-10): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.819 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c804e7-e884-4a37-a0db-a3389b9da8f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.847 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46d2d585-bb28-4622-ae5d-30d08e1d49b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.852 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[be138a0c-fdad-4ea9-8398-ffc3e6541a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 modest_margulis[299187]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:12:37 np0005634017 modest_margulis[299187]: --> All data devices are unavailable
Feb 28 05:12:37 np0005634017 NetworkManager[49805]: <info>  [1772273557.8728] device (taped8f95d8-10): carrier: link connected
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.880 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[035d239c-42bc-47ca-982d-43ddffd2e452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.896 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23f710bd-767e-4921-9e0d-28b5984b36ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped8f95d8-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:e7:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506515, 'reachable_time': 39739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299292, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 systemd[1]: libpod-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope: Deactivated successfully.
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.911 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5afe2c56-6d3b-4a5e-b142-366971299df8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:e7fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506515, 'tstamp': 506515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299293, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 conmon[299187]: conmon 21088ba95b54af188e85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope/container/memory.events
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.914688016 +0000 UTC m=+0.645770549 container died 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.933 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0c0c8a-7f7a-4d02-a17b-77da424d5e39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped8f95d8-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:e7:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506515, 'reachable_time': 39739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299294, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c-merged.mount: Deactivated successfully.
Feb 28 05:12:37 np0005634017 podman[299171]: 2026-02-28 10:12:37.960430273 +0000 UTC m=+0.691512806 container remove 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 28 05:12:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0224a192-5d2b-483d-a36d-c3fa53f2798e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:37 np0005634017 systemd[1]: libpod-conmon-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope: Deactivated successfully.
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51dd4a7d-4c35-43c9-8fd1-529909d9f534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped8f95d8-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.026 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.028 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped8f95d8-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:38 np0005634017 kernel: taped8f95d8-10: entered promiscuous mode
Feb 28 05:12:38 np0005634017 NetworkManager[49805]: <info>  [1772273558.0328] manager: (taped8f95d8-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.038 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped8f95d8-10, col_values=(('external_ids', {'iface-id': '886def4f-e7cf-45ce-8b39-a6416f0f0a59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:38Z|00535|binding|INFO|Releasing lport 886def4f-e7cf-45ce-8b39-a6416f0f0a59 from this chassis (sb_readonly=0)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.040 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.043 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed8f95d8-16ff-486a-9986-045e9e754d77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed8f95d8-16ff-486a-9986-045e9e754d77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.046 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[476ecb6a-0872-4e3a-9fbf-749d6ff3db7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.047 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-ed8f95d8-16ff-486a-9986-045e9e754d77
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/ed8f95d8-16ff-486a-9986-045e9e754d77.pid.haproxy
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID ed8f95d8-16ff-486a-9986-045e9e754d77
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:12:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.048 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'env', 'PROCESS_TAG=haproxy-ed8f95d8-16ff-486a-9986-045e9e754d77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed8f95d8-16ff-486a-9986-045e9e754d77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Feb 28 05:12:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Feb 28 05:12:38 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Feb 28 05:12:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 425 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.4 MiB/s wr, 167 op/s
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.120 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Successfully created port: ebb57b0b-4fa0-4ee0-8791-87271512797e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.252 243456 DEBUG nova.compute.manager [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.252 243456 DEBUG oslo_concurrency.lockutils [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.253 243456 DEBUG oslo_concurrency.lockutils [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.253 243456 DEBUG oslo_concurrency.lockutils [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.254 243456 DEBUG nova.compute.manager [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Processing event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.349 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.350 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273558.3481872, 920aae47-311f-4921-818d-92025cc1abee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.352 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Started (Lifecycle Event)#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.355 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.359 243456 INFO nova.virt.libvirt.driver [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance spawned successfully.#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.360 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.377 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.388 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.389 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.389 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.390 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.390 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.390 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.409231512 +0000 UTC m=+0.037992639 container create a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:12:38 np0005634017 podman[299441]: 2026-02-28 10:12:38.422648429 +0000 UTC m=+0.051492949 container create a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.427 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.428 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273558.349836, 920aae47-311f-4921-818d-92025cc1abee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.428 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:12:38 np0005634017 systemd[1]: Started libpod-conmon-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope.
Feb 28 05:12:38 np0005634017 systemd[1]: Started libpod-conmon-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92.scope.
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.453 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.457 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273558.3543012, 920aae47-311f-4921-818d-92025cc1abee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.457 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:12:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.465 243456 INFO nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 8.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.466 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e66799afe237481138dae22134609236b539bbb4a9cb5a77f81d9bf57753d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.477 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.477851922 +0000 UTC m=+0.106613069 container init a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.481 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:38 np0005634017 podman[299441]: 2026-02-28 10:12:38.486770112 +0000 UTC m=+0.115614642 container init a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.487672928 +0000 UTC m=+0.116434045 container start a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.491190917 +0000 UTC m=+0.119952074 container attach a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:12:38 np0005634017 podman[299441]: 2026-02-28 10:12:38.491787003 +0000 UTC m=+0.120631523 container start a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.395967839 +0000 UTC m=+0.024728976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:12:38 np0005634017 podman[299441]: 2026-02-28 10:12:38.396738821 +0000 UTC m=+0.025583361 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:12:38 np0005634017 dreamy_clarke[299471]: 167 167
Feb 28 05:12:38 np0005634017 systemd[1]: libpod-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope: Deactivated successfully.
Feb 28 05:12:38 np0005634017 conmon[299471]: conmon a1002d5ab77c125c8839 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope/container/memory.events
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.496379743 +0000 UTC m=+0.125140880 container died a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.510 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:12:38 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : New worker (299489) forked
Feb 28 05:12:38 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : Loading success.
Feb 28 05:12:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6f34c024675d54f3eaafd79e915bd3f486ded296d0417af11a6da35712ae7b12-merged.mount: Deactivated successfully.
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.533 243456 INFO nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 10.20 seconds to build instance.#033[00m
Feb 28 05:12:38 np0005634017 podman[299442]: 2026-02-28 10:12:38.53577403 +0000 UTC m=+0.164535147 container remove a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:12:38 np0005634017 systemd[1]: libpod-conmon-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope: Deactivated successfully.
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.551 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.617 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273543.6167212, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.618 243456 INFO nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Stopped (Lifecycle Event)
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.643 243456 DEBUG nova.compute.manager [None req-81a2826a-a947-454c-8104-f67517d333b9 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:12:38 np0005634017 podman[299509]: 2026-02-28 10:12:38.676824015 +0000 UTC m=+0.036666152 container create 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:12:38 np0005634017 systemd[1]: Started libpod-conmon-5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44.scope.
Feb 28 05:12:38 np0005634017 nova_compute[243452]: 2026-02-28 10:12:38.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:12:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:38 np0005634017 podman[299509]: 2026-02-28 10:12:38.661194856 +0000 UTC m=+0.021037003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:12:38 np0005634017 podman[299509]: 2026-02-28 10:12:38.767400432 +0000 UTC m=+0.127242589 container init 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:12:38 np0005634017 podman[299509]: 2026-02-28 10:12:38.77515168 +0000 UTC m=+0.134993817 container start 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:12:38 np0005634017 podman[299509]: 2026-02-28 10:12:38.778801153 +0000 UTC m=+0.138643300 container attach 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]: {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:    "0": [
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:        {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "devices": [
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "/dev/loop3"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            ],
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_name": "ceph_lv0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_size": "21470642176",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "name": "ceph_lv0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "tags": {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cluster_name": "ceph",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.crush_device_class": "",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.encrypted": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.objectstore": "bluestore",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osd_id": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.type": "block",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.vdo": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.with_tpm": "0"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            },
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "type": "block",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "vg_name": "ceph_vg0"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:        }
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:    ],
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:    "1": [
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:        {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "devices": [
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "/dev/loop4"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            ],
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_name": "ceph_lv1",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_size": "21470642176",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "name": "ceph_lv1",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "tags": {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cluster_name": "ceph",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.crush_device_class": "",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.encrypted": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.objectstore": "bluestore",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osd_id": "1",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.type": "block",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.vdo": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.with_tpm": "0"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            },
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "type": "block",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "vg_name": "ceph_vg1"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:        }
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:    ],
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:    "2": [
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:        {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "devices": [
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "/dev/loop5"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            ],
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_name": "ceph_lv2",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_size": "21470642176",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "name": "ceph_lv2",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "tags": {
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.cluster_name": "ceph",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.crush_device_class": "",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.encrypted": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.objectstore": "bluestore",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osd_id": "2",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.type": "block",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.vdo": "0",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:                "ceph.with_tpm": "0"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            },
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "type": "block",
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:            "vg_name": "ceph_vg2"
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:        }
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]:    ]
Feb 28 05:12:39 np0005634017 wizardly_banzai[299526]: }
Feb 28 05:12:39 np0005634017 systemd[1]: libpod-5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44.scope: Deactivated successfully.
Feb 28 05:12:39 np0005634017 podman[299509]: 2026-02-28 10:12:39.124207235 +0000 UTC m=+0.484049372 container died 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.144 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Successfully updated port: ebb57b0b-4fa0-4ee0-8791-87271512797e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:12:39 np0005634017 podman[299509]: 2026-02-28 10:12:39.165037703 +0000 UTC m=+0.524879840 container remove 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.166 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.166 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquired lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.166 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:12:39 np0005634017 systemd[1]: libpod-conmon-5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44.scope: Deactivated successfully.
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.272 243456 DEBUG nova.compute.manager [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.272 243456 DEBUG nova.compute.manager [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing instance network info cache due to event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.272 243456 DEBUG oslo_concurrency.lockutils [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:12:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d-merged.mount: Deactivated successfully.
Feb 28 05:12:39 np0005634017 nova_compute[243452]: 2026-02-28 10:12:39.533 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.648548739 +0000 UTC m=+0.051177110 container create 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:12:39 np0005634017 systemd[1]: Started libpod-conmon-6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9.scope.
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.624443281 +0000 UTC m=+0.027071672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:12:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.740809223 +0000 UTC m=+0.143437574 container init 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.747089529 +0000 UTC m=+0.149717890 container start 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.750048423 +0000 UTC m=+0.152676804 container attach 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:12:39 np0005634017 happy_clarke[299625]: 167 167
Feb 28 05:12:39 np0005634017 systemd[1]: libpod-6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9.scope: Deactivated successfully.
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.751628947 +0000 UTC m=+0.154257288 container died 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:12:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5de62a73b0fbfe1ae9c34f2b427a509e0dbcc063d6ab332c07b54b5462ba700f-merged.mount: Deactivated successfully.
Feb 28 05:12:39 np0005634017 podman[299609]: 2026-02-28 10:12:39.791391185 +0000 UTC m=+0.194019546 container remove 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:12:39 np0005634017 systemd[1]: libpod-conmon-6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9.scope: Deactivated successfully.
Feb 28 05:12:39 np0005634017 podman[299649]: 2026-02-28 10:12:39.995035661 +0000 UTC m=+0.064864805 container create dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:12:40 np0005634017 systemd[1]: Started libpod-conmon-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope.
Feb 28 05:12:40 np0005634017 podman[299649]: 2026-02-28 10:12:39.970268055 +0000 UTC m=+0.040097239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:12:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:40 np0005634017 podman[299649]: 2026-02-28 10:12:40.108639506 +0000 UTC m=+0.178468670 container init dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 444 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 6.6 MiB/s wr, 150 op/s
Feb 28 05:12:40 np0005634017 podman[299649]: 2026-02-28 10:12:40.115829078 +0000 UTC m=+0.185658202 container start dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:12:40 np0005634017 podman[299649]: 2026-02-28 10:12:40.120105978 +0000 UTC m=+0.189935112 container attach dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.425 243456 DEBUG nova.compute.manager [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG oslo_concurrency.lockutils [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG oslo_concurrency.lockutils [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG oslo_concurrency.lockutils [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG nova.compute.manager [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] No waiting events found dispatching network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.427 243456 WARNING nova.compute.manager [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received unexpected event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad for instance with vm_state active and task_state None.#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.525 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Releasing lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance network_info: |[{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG oslo_concurrency.lockutils [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG nova.network.neutron [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.566 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start _get_guest_xml network_info=[{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.571 243456 WARNING nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.580 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.581 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.585 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.586 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.586 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.586 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.590 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.695 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.695 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.695 243456 INFO nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Unshelving#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:40 np0005634017 lvm[299765]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:12:40 np0005634017 lvm[299765]: VG ceph_vg1 finished
Feb 28 05:12:40 np0005634017 lvm[299764]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:12:40 np0005634017 lvm[299764]: VG ceph_vg0 finished
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:12:40 np0005634017 lvm[299767]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:12:40 np0005634017 lvm[299767]: VG ceph_vg2 finished
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014177182537818348 of space, bias 1.0, pg target 0.42531547613455045 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003597432983149043 of space, bias 1.0, pg target 1.0792298949447128 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.021961581530791e-07 of space, bias 4.0, pg target 0.0009594266051510827 quantized to 16 (current 16)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:12:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.817 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.817 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:40 np0005634017 lvm[299768]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:12:40 np0005634017 lvm[299768]: VG ceph_vg0 finished
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.827 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_requests' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.846 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.859 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.859 243456 INFO nova.compute.claims [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:12:40 np0005634017 stoic_kare[299666]: {}
Feb 28 05:12:40 np0005634017 systemd[1]: libpod-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope: Deactivated successfully.
Feb 28 05:12:40 np0005634017 podman[299649]: 2026-02-28 10:12:40.911906372 +0000 UTC m=+0.981735496 container died dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:12:40 np0005634017 systemd[1]: libpod-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope: Consumed 1.115s CPU time.
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.921 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.921 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.922 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.922 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.922 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.923 243456 INFO nova.compute.manager [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Terminating instance#033[00m
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.924 243456 DEBUG nova.compute.manager [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:12:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139-merged.mount: Deactivated successfully.
Feb 28 05:12:40 np0005634017 podman[299649]: 2026-02-28 10:12:40.949762567 +0000 UTC m=+1.019591681 container remove dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:12:40 np0005634017 systemd[1]: libpod-conmon-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope: Deactivated successfully.
Feb 28 05:12:40 np0005634017 kernel: tap0df27d88-04 (unregistering): left promiscuous mode
Feb 28 05:12:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:40Z|00536|binding|INFO|Releasing lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad from this chassis (sb_readonly=0)
Feb 28 05:12:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:40Z|00537|binding|INFO|Setting lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad down in Southbound
Feb 28 05:12:40 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:40Z|00538|binding|INFO|Removing iface tap0df27d88-04 ovn-installed in OVS
Feb 28 05:12:40 np0005634017 NetworkManager[49805]: <info>  [1772273560.9960] device (tap0df27d88-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.002 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:db:88 10.100.0.5'], port_security=['fa:16:3e:e1:db:88 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '920aae47-311f-4921-818d-92025cc1abee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed8f95d8-16ff-486a-9986-045e9e754d77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8174cdce90534957854824466483d42b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd62d1bc-27fc-4a40-8f5a-4d9d80596045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad705234-e36d-496f-9ca6-546e227c6770, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0df27d88-0475-4a4d-8a7c-883b977bc7ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.003 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0df27d88-0475-4a4d-8a7c-883b977bc7ad in datapath ed8f95d8-16ff-486a-9986-045e9e754d77 unbound from our chassis#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.005 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed8f95d8-16ff-486a-9986-045e9e754d77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:40.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.008 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b830f980-1553-4633-83dc-0d2a88aaa142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.009 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 namespace which is not needed anymore#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.010 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:12:41 np0005634017 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Deactivated successfully.
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.048 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:41 np0005634017 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Consumed 3.196s CPU time.
Feb 28 05:12:41 np0005634017 systemd-machined[209480]: Machine qemu-72-instance-00000041 terminated.
Feb 28 05:12:41 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : haproxy version is 2.8.14-c23fe91
Feb 28 05:12:41 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : path to executable is /usr/sbin/haproxy
Feb 28 05:12:41 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [WARNING]  (299480) : Exiting Master process...
Feb 28 05:12:41 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [ALERT]    (299480) : Current worker (299489) exited with code 143 (Terminated)
Feb 28 05:12:41 np0005634017 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [WARNING]  (299480) : All workers exited. Exiting... (0)
Feb 28 05:12:41 np0005634017 systemd[1]: libpod-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92.scope: Deactivated successfully.
Feb 28 05:12:41 np0005634017 podman[299826]: 2026-02-28 10:12:41.147552118 +0000 UTC m=+0.046986322 container died a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.166 243456 INFO nova.virt.libvirt.driver [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance destroyed successfully.#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.167 243456 DEBUG nova.objects.instance [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lazy-loading 'resources' on Instance uuid 920aae47-311f-4921-818d-92025cc1abee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867140259' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:12:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e3e66799afe237481138dae22134609236b539bbb4a9cb5a77f81d9bf57753d4-merged.mount: Deactivated successfully.
Feb 28 05:12:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92-userdata-shm.mount: Deactivated successfully.
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.189 243456 DEBUG nova.virt.libvirt.vif [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1001227597',display_name='tempest-InstanceActionsNegativeTestJSON-server-1001227597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1001227597',id=65,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:12:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8174cdce90534957854824466483d42b',ramdisk_id='',reservation_id='r-xoam7m7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1693183666',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1693183666-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:38Z,user_data=None,user_id='4285303dac0b4ee497a908cdca0aecf4',uuid=920aae47-311f-4921-818d-92025cc1abee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.190 243456 DEBUG nova.network.os_vif_util [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converting VIF {"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:41 np0005634017 podman[299826]: 2026-02-28 10:12:41.19136678 +0000 UTC m=+0.090800974 container cleanup a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.191 243456 DEBUG nova.network.os_vif_util [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.191 243456 DEBUG os_vif [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.194 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0df27d88-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 systemd[1]: libpod-conmon-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92.scope: Deactivated successfully.
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.204 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.205 243456 INFO os_vif [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04')#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.248 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.253 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:41 np0005634017 podman[299875]: 2026-02-28 10:12:41.256964935 +0000 UTC m=+0.049058401 container remove a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.261 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88deb2d9-5bc6-4ff6-af0c-c045daeeeca7]: (4, ('Sat Feb 28 10:12:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 (a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92)\na29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92\nSat Feb 28 10:12:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 (a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92)\na29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[448105af-bc85-46ae-95f0-bf8cccf300ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped8f95d8-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:41 np0005634017 kernel: taped8f95d8-10: left promiscuous mode
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.288 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbd2e8d-ab21-457e-b05f-a1fcc837c5ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f88941de-baf1-4c7f-8d90-0dc3334e3792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.313 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1163310d-13b6-4114-94be-a3eda0bd74d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.326 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81eacc21-512f-4e97-83dc-2f398bcc2d21]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506509, 'reachable_time': 16679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299939, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.328 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:12:41 np0005634017 systemd[1]: run-netns-ovnmeta\x2ded8f95d8\x2d16ff\x2d486a\x2d9986\x2d045e9e754d77.mount: Deactivated successfully.
Feb 28 05:12:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.328 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9679f7ed-3a34-47f0-8cb7-134bf61aed2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.534 243456 INFO nova.virt.libvirt.driver [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deleting instance files /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee_del#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.536 243456 INFO nova.virt.libvirt.driver [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deletion of /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee_del complete#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.590 243456 INFO nova.compute.manager [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.591 243456 DEBUG oslo.service.loopingcall [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.592 243456 DEBUG nova.compute.manager [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.592 243456 DEBUG nova.network.neutron [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040599172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.653 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.659 243456 DEBUG nova.compute.provider_tree [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.677 243456 DEBUG nova.scheduler.client.report [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.718 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:12:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028071564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.815 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.817 243456 DEBUG nova.virt.libvirt.vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1212768098',display_name='tempest-ServersTestManualDisk-server-1212768098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1212768098',id=66,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0ik5kXdNalWXEzR/gysmRW7reVUwqtO6ef6zS+WfZMKLSAXSFSlMXiq8+ds7SdKqMeBVaNe0jzsweQ7HRyVzZGYG856pZmMCyHy7oKBxG1WfL6nFJqI05F/mBkTXIwrw==',key_name='tempest-keypair-1576574983',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b83705c4693849a58c70b1271f24f320',ramdisk_id='',reservation_id='r-e45nckj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1613202530',owner_user_name='tempest-ServersTestManualDisk-1613202530-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2465d2d41534ef098e24bdd413eefab',uuid=6cac1749-1126-44c9-b31c-1041025c52cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.818 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converting VIF {"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.819 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.820 243456 DEBUG nova.objects.instance [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6cac1749-1126-44c9-b31c-1041025c52cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.839 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <uuid>6cac1749-1126-44c9-b31c-1041025c52cf</uuid>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <name>instance-00000042</name>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestManualDisk-server-1212768098</nova:name>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:12:40</nova:creationTime>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:user uuid="c2465d2d41534ef098e24bdd413eefab">tempest-ServersTestManualDisk-1613202530-project-member</nova:user>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:project uuid="b83705c4693849a58c70b1271f24f320">tempest-ServersTestManualDisk-1613202530</nova:project>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <nova:port uuid="ebb57b0b-4fa0-4ee0-8791-87271512797e">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <entry name="serial">6cac1749-1126-44c9-b31c-1041025c52cf</entry>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <entry name="uuid">6cac1749-1126-44c9-b31c-1041025c52cf</entry>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6cac1749-1126-44c9-b31c-1041025c52cf_disk">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6cac1749-1126-44c9-b31c-1041025c52cf_disk.config">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e6:c6:f2"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <target dev="tapebb57b0b-4f"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/console.log" append="off"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:12:41 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:12:41 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:12:41 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:12:41 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.840 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Preparing to wait for external event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.840 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.841 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.841 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.842 243456 DEBUG nova.virt.libvirt.vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1212768098',display_name='tempest-ServersTestManualDisk-server-1212768098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1212768098',id=66,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0ik5kXdNalWXEzR/gysmRW7reVUwqtO6ef6zS+WfZMKLSAXSFSlMXiq8+ds7SdKqMeBVaNe0jzsweQ7HRyVzZGYG856pZmMCyHy7oKBxG1WfL6nFJqI05F/mBkTXIwrw==',key_name='tempest-keypair-1576574983',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b83705c4693849a58c70b1271f24f320',ramdisk_id='',reservation_id='r-e45nckj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1613202530',owner_user_name='tempest-ServersTestManualDisk-1613202530-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2465d2d41534ef098e24bdd413eefab',uuid=6cac1749-1126-44c9-b31c-1041025c52cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.842 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converting VIF {"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.843 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.844 243456 DEBUG os_vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.845 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.846 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.849 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebb57b0b-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.850 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebb57b0b-4f, col_values=(('external_ids', {'iface-id': 'ebb57b0b-4fa0-4ee0-8791-87271512797e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:c6:f2', 'vm-uuid': '6cac1749-1126-44c9-b31c-1041025c52cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 NetworkManager[49805]: <info>  [1772273561.8532] manager: (tapebb57b0b-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.859 243456 INFO os_vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f')#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.928 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.929 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.929 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] No VIF found with MAC fa:16:3e:e6:c6:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.930 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Using config drive#033[00m
Feb 28 05:12:41 np0005634017 nova_compute[243452]: 2026-02-28 10:12:41.959 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.088 243456 INFO nova.network.neutron [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating port 037eb744-3024-4a3d-b52c-894abe1cbac8 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Feb 28 05:12:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 451 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 3.7 MiB/s wr, 136 op/s
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.140 243456 DEBUG nova.network.neutron [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updated VIF entry in instance network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.141 243456 DEBUG nova.network.neutron [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.167 243456 DEBUG oslo_concurrency.lockutils [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.514 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-unplugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.515 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.515 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.516 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.516 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] No waiting events found dispatching network-vif-unplugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.517 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-unplugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.517 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.517 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.518 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.518 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.519 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] No waiting events found dispatching network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.519 243456 WARNING nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received unexpected event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.695 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Creating config drive at /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.703 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb7l5ajgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.757 243456 DEBUG nova.network.neutron [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.779 243456 INFO nova.compute.manager [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 1.19 seconds to deallocate network for instance.#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.845 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.845 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.846 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb7l5ajgb" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.875 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.881 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:42 np0005634017 nova_compute[243452]: 2026-02-28 10:12:42.926 243456 DEBUG nova.compute.manager [req-d9e5444e-f861-4fdd-b7f8-d7bffc791926 req-6b575949-9093-45af-a628-22eb194ecc25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-deleted-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.053 243456 DEBUG oslo_concurrency.processutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.099 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.101 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deleting local config drive /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config because it was imported into RBD.#033[00m
Feb 28 05:12:43 np0005634017 kernel: tapebb57b0b-4f: entered promiscuous mode
Feb 28 05:12:43 np0005634017 systemd-udevd[299762]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:12:43 np0005634017 NetworkManager[49805]: <info>  [1772273563.1557] manager: (tapebb57b0b-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Feb 28 05:12:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:43Z|00539|binding|INFO|Claiming lport ebb57b0b-4fa0-4ee0-8791-87271512797e for this chassis.
Feb 28 05:12:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:43Z|00540|binding|INFO|ebb57b0b-4fa0-4ee0-8791-87271512797e: Claiming fa:16:3e:e6:c6:f2 10.100.0.13
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.169 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:c6:f2 10.100.0.13'], port_security=['fa:16:3e:e6:c6:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6cac1749-1126-44c9-b31c-1041025c52cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b83705c4693849a58c70b1271f24f320', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49856350-974b-427f-aa68-4afea31e430e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b184abca-cc34-4f4e-8734-d06f0ef64e8b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ebb57b0b-4fa0-4ee0-8791-87271512797e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:43 np0005634017 NetworkManager[49805]: <info>  [1772273563.1726] device (tapebb57b0b-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:12:43 np0005634017 NetworkManager[49805]: <info>  [1772273563.1733] device (tapebb57b0b-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.172 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ebb57b0b-4fa0-4ee0-8791-87271512797e in datapath 8f735fcd-0d4b-4c56-85c4-3ead65135429 bound to our chassis#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.173 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f735fcd-0d4b-4c56-85c4-3ead65135429#033[00m
Feb 28 05:12:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:43Z|00541|binding|INFO|Setting lport ebb57b0b-4fa0-4ee0-8791-87271512797e ovn-installed in OVS
Feb 28 05:12:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:43Z|00542|binding|INFO|Setting lport ebb57b0b-4fa0-4ee0-8791-87271512797e up in Southbound
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[efdb7e3e-7969-4ff0-ab45-26151a11be2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.192 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f735fcd-01 in ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.194 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f735fcd-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba3bfa6-f74a-4ed7-840d-3b0a2953afb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.199 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cad675dd-66c1-4812-92ac-6653aa044db0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.212 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a3fd6e-11a5-422b-b02f-ec5264630c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 systemd-machined[209480]: New machine qemu-73-instance-00000042.
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.225 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa42337-e8c1-4355-9511-99e740417ea5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 systemd[1]: Started Virtual Machine qemu-73-instance-00000042.
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.252 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[345ac387-fae0-48fb-803c-de819dfc8fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 NetworkManager[49805]: <info>  [1772273563.2593] manager: (tap8f735fcd-00): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5856971-a30f-4bc5-9926-ea29c8739032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.286 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f7853a5a-135c-4121-80db-814f9011b0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.290 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[12b0a5fa-ad77-4472-9290-0debba5e0301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 NetworkManager[49805]: <info>  [1772273563.3126] device (tap8f735fcd-00): carrier: link connected
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.319 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcb4817-4ed2-45ae-b35f-eb949a42ea85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f462e09-b7c7-4da9-a93d-0d4464cfcb09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f735fcd-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:88:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507059, 'reachable_time': 43810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300087, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a89398-7a18-44b3-ad32-b696103b8fd9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:88bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507059, 'tstamp': 507059}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300088, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de13b997-fa27-4c3a-bf51-070fc5dc0a03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f735fcd-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:88:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507059, 'reachable_time': 43810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300089, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.407 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c96be48-ca8f-46a7-8074-c4bf301c9d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.427 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.427 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.428 243456 DEBUG nova.network.neutron [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[927e4250-f563-499e-8364-6a5ff9da65b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.473 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f735fcd-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.473 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.474 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f735fcd-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:43 np0005634017 NetworkManager[49805]: <info>  [1772273563.4764] manager: (tap8f735fcd-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Feb 28 05:12:43 np0005634017 kernel: tap8f735fcd-00: entered promiscuous mode
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.477 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f735fcd-00, col_values=(('external_ids', {'iface-id': '4d3929e7-28fe-4c5f-a36e-3a516397a383'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:43Z|00543|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.484 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f735fcd-0d4b-4c56-85c4-3ead65135429.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f735fcd-0d4b-4c56-85c4-3ead65135429.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.484 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[600cad15-8b47-4592-940f-a7c371d8a33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.485 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8f735fcd-0d4b-4c56-85c4-3ead65135429
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8f735fcd-0d4b-4c56-85c4-3ead65135429.pid.haproxy
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8f735fcd-0d4b-4c56-85c4-3ead65135429
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:12:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.486 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'env', 'PROCESS_TAG=haproxy-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f735fcd-0d4b-4c56-85c4-3ead65135429.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:12:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:12:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928744060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.636 243456 DEBUG oslo_concurrency.processutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.641 243456 DEBUG nova.compute.provider_tree [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.660 243456 DEBUG nova.scheduler.client.report [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.686 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.718 243456 INFO nova.scheduler.client.report [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Deleted allocations for instance 920aae47-311f-4921-818d-92025cc1abee#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.767 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273563.7669137, 6cac1749-1126-44c9-b31c-1041025c52cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.768 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Started (Lifecycle Event)#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.801 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.810 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.817 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273563.767477, 6cac1749-1126-44c9-b31c-1041025c52cf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.818 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.851 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.856 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:43 np0005634017 nova_compute[243452]: 2026-02-28 10:12:43.875 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:12:43 np0005634017 podman[300162]: 2026-02-28 10:12:43.88927297 +0000 UTC m=+0.090214488 container create ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:12:43 np0005634017 podman[300162]: 2026-02-28 10:12:43.821297598 +0000 UTC m=+0.022239136 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:12:43 np0005634017 systemd[1]: Started libpod-conmon-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope.
Feb 28 05:12:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:12:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9745197d659a7b64f0784193f16e075d7a26cccfd957b8957e2a616a406e62/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:12:43 np0005634017 podman[300162]: 2026-02-28 10:12:43.991147264 +0000 UTC m=+0.192088802 container init ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:12:43 np0005634017 podman[300162]: 2026-02-28 10:12:43.996495605 +0000 UTC m=+0.197437123 container start ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:12:44 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : New worker (300184) forked
Feb 28 05:12:44 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : Loading success.
Feb 28 05:12:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 427 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 173 op/s
Feb 28 05:12:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:12:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572620644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:12:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:12:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572620644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:12:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 404 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Feb 28 05:12:46 np0005634017 nova_compute[243452]: 2026-02-28 10:12:46.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.240 243456 DEBUG nova.compute.manager [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.240 243456 DEBUG nova.compute.manager [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing instance network info cache due to event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.240 243456 DEBUG oslo_concurrency.lockutils [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.312 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Processing event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] No waiting events found dispatching network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 WARNING nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received unexpected event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.315 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.320 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273567.3204653, 6cac1749-1126-44c9-b31c-1041025c52cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.320 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.323 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.326 243456 INFO nova.virt.libvirt.driver [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance spawned successfully.#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.327 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.357 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.371 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.377 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.378 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.379 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.380 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.380 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.381 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.397 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.455 243456 INFO nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 10.93 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.456 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.531 243456 INFO nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 12.87 seconds to build instance.#033[00m
Feb 28 05:12:47 np0005634017 nova_compute[243452]: 2026-02-28 10:12:47.606 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 136 op/s
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.183 243456 DEBUG nova.network.neutron [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.226 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.227 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.228 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating image(s)#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.253 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.257 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.259 243456 DEBUG oslo_concurrency.lockutils [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.259 243456 DEBUG nova.network.neutron [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.303 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.329 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.333 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "de414522523c5d89af98d076848191bbc8097e6f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.334 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "de414522523c5d89af98d076848191bbc8097e6f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.749 243456 DEBUG nova.virt.libvirt.imagebackend [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/337de210-963f-41af-92a4-16b5716eae17/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/337de210-963f-41af-92a4-16b5716eae17/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.828 243456 DEBUG nova.virt.libvirt.imagebackend [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/337de210-963f-41af-92a4-16b5716eae17/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.829 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning images/337de210-963f-41af-92a4-16b5716eae17@snap to None/30a5d845-ce28-490a-afe8-3b7552f02c63_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:12:48 np0005634017 nova_compute[243452]: 2026-02-28 10:12:48.934 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "de414522523c5d89af98d076848191bbc8097e6f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.099 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.179 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.574 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Image rbd:vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.575 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.575 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Ensure instance console log exists: /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.576 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.576 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.577 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.579 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start _get_guest_xml network_info=[{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:12:20Z,direct_url=<?>,disk_format='raw',id=337de210-963f-41af-92a4-16b5716eae17,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-655402139-shelved',owner='cffbbb9857954b188c363e9565817bf2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:12:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.583 243456 WARNING nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.589 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.589 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.592 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.592 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.593 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.593 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:12:20Z,direct_url=<?>,disk_format='raw',id=337de210-963f-41af-92a4-16b5716eae17,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-655402139-shelved',owner='cffbbb9857954b188c363e9565817bf2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:12:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.593 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.596 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.617 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.927 243456 DEBUG nova.network.neutron [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated VIF entry in instance network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.928 243456 DEBUG nova.network.neutron [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:49 np0005634017 nova_compute[243452]: 2026-02-28 10:12:49.945 243456 DEBUG oslo_concurrency.lockutils [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.085 243456 DEBUG nova.compute.manager [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.085 243456 DEBUG nova.compute.manager [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing instance network info cache due to event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.086 243456 DEBUG oslo_concurrency.lockutils [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.086 243456 DEBUG oslo_concurrency.lockutils [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.086 243456 DEBUG nova.network.neutron [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:12:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 436 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Feb 28 05:12:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:12:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063679996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.176 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.667 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:50 np0005634017 nova_compute[243452]: 2026-02-28 10:12:50.673 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:12:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294504967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.266 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.268 243456 DEBUG nova.virt.libvirt.vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='337de210-963f-41af-92a4-16b5716eae17',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',im
age_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:28.942409',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='337de210-963f-41af-92a4-16b5716eae17'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.268 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.269 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.272 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.296 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <uuid>30a5d845-ce28-490a-afe8-3b7552f02c63</uuid>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <name>instance-0000003a</name>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestOtherB-server-655402139</nova:name>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:12:49</nova:creationTime>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="337de210-963f-41af-92a4-16b5716eae17"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <nova:port uuid="037eb744-3024-4a3d-b52c-894abe1cbac8">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <entry name="serial">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <entry name="uuid">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:32:d3:6f"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <target dev="tap037eb744-30"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log" append="off"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <input type="keyboard" bus="usb"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:12:51 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:12:51 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:12:51 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:12:51 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.299 243456 DEBUG nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Preparing to wait for external event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.299 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.300 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.301 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.304 243456 DEBUG nova.virt.libvirt.vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='337de210-963f-41af-92a4-16b5716eae17',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='
virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:28.942409',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='337de210-963f-41af-92a4-16b5716eae17'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.305 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.307 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.309 243456 DEBUG os_vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.311 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.313 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.319 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap037eb744-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.321 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap037eb744-30, col_values=(('external_ids', {'iface-id': '037eb744-3024-4a3d-b52c-894abe1cbac8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:d3:6f', 'vm-uuid': '30a5d845-ce28-490a-afe8-3b7552f02c63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:51 np0005634017 NetworkManager[49805]: <info>  [1772273571.3239] manager: (tap037eb744-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.338 243456 INFO os_vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.391 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.392 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.392 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:32:d3:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.393 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Using config drive#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.413 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.431 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.470 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'keypairs' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.852 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating config drive at /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config#033[00m
Feb 28 05:12:51 np0005634017 nova_compute[243452]: 2026-02-28 10:12:51.860 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq3ozy6nv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.015 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq3ozy6nv" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.053 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.059 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:12:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 452 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.228 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.230 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting local config drive /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config because it was imported into RBD.#033[00m
Feb 28 05:12:52 np0005634017 kernel: tap037eb744-30: entered promiscuous mode
Feb 28 05:12:52 np0005634017 NetworkManager[49805]: <info>  [1772273572.2897] manager: (tap037eb744-30): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Feb 28 05:12:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:52Z|00544|binding|INFO|Claiming lport 037eb744-3024-4a3d-b52c-894abe1cbac8 for this chassis.
Feb 28 05:12:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:52Z|00545|binding|INFO|037eb744-3024-4a3d-b52c-894abe1cbac8: Claiming fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.299 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.301 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.303 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:52Z|00546|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 ovn-installed in OVS
Feb 28 05:12:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:52Z|00547|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 up in Southbound
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:52Z|00548|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:12:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:52Z|00549|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.322 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49d66a51-f41d-469d-8538-6591a2f56474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:52 np0005634017 systemd-udevd[300542]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:12:52 np0005634017 systemd-machined[209480]: New machine qemu-74-instance-0000003a.
Feb 28 05:12:52 np0005634017 NetworkManager[49805]: <info>  [1772273572.3438] device (tap037eb744-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:12:52 np0005634017 NetworkManager[49805]: <info>  [1772273572.3444] device (tap037eb744-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:12:52 np0005634017 systemd[1]: Started Virtual Machine qemu-74-instance-0000003a.
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6396b56d-663f-4124-b38e-25f892a09998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.363 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db32b8-b89b-48a9-a8d3-975476ac0524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.388 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1079034-b029-4410-b2ff-16b9349bbff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.406 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[485f875a-1948-40b0-8df1-3df431038c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300552, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.432 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a48db22-7c22-463b-9977-b5385061d659]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300555, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300555, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.435 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.442 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.442 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.443 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.443 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.501 243456 DEBUG nova.network.neutron [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updated VIF entry in instance network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.502 243456 DEBUG nova.network.neutron [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.521 243456 DEBUG oslo_concurrency.lockutils [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.586 243456 DEBUG nova.compute.manager [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.587 243456 DEBUG oslo_concurrency.lockutils [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.587 243456 DEBUG oslo_concurrency.lockutils [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.587 243456 DEBUG oslo_concurrency.lockutils [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.588 243456 DEBUG nova.compute.manager [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Processing event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.865 243456 DEBUG nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.866 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273572.8650043, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.866 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Started (Lifecycle Event)#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.871 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.875 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance spawned successfully.#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.889 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.912 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.912 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273572.8662136, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.913 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.929 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.933 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273572.8698864, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.933 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.956 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.959 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:12:52 np0005634017 nova_compute[243452]: 2026-02-28 10:12:52.979 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:12:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:53Z|00550|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:12:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:53Z|00551|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 05:12:53 np0005634017 nova_compute[243452]: 2026-02-28 10:12:53.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Feb 28 05:12:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Feb 28 05:12:53 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Feb 28 05:12:53 np0005634017 nova_compute[243452]: 2026-02-28 10:12:53.725 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:53 np0005634017 nova_compute[243452]: 2026-02-28 10:12:53.995 243456 DEBUG nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.081 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 484 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.677 243456 DEBUG nova.compute.manager [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.679 243456 DEBUG oslo_concurrency.lockutils [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.680 243456 DEBUG oslo_concurrency.lockutils [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.680 243456 DEBUG oslo_concurrency.lockutils [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.681 243456 DEBUG nova.compute.manager [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:12:54 np0005634017 nova_compute[243452]: 2026-02-28 10:12:54.681 243456 WARNING nova.compute.manager [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:12:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 418 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 252 op/s
Feb 28 05:12:56 np0005634017 nova_compute[243452]: 2026-02-28 10:12:56.162 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273561.1609914, 920aae47-311f-4921-818d-92025cc1abee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:12:56 np0005634017 nova_compute[243452]: 2026-02-28 10:12:56.163 243456 INFO nova.compute.manager [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:12:56 np0005634017 nova_compute[243452]: 2026-02-28 10:12:56.191 243456 DEBUG nova.compute.manager [None req-f0e48fa4-a7ab-451e-9e74-745f4468f067 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:12:56 np0005634017 nova_compute[243452]: 2026-02-28 10:12:56.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Feb 28 05:12:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Feb 28 05:12:57 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Feb 28 05:12:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:57.851 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:57.852 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:58Z|00552|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 05:12:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:58Z|00553|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 05:12:58 np0005634017 nova_compute[243452]: 2026-02-28 10:12:58.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:12:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 394 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 2.9 MiB/s wr, 299 op/s
Feb 28 05:12:58 np0005634017 nova_compute[243452]: 2026-02-28 10:12:58.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.118 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.118 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.119 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.119 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.119 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.120 243456 INFO nova.compute.manager [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Terminating instance#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.121 243456 DEBUG nova.compute.manager [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:12:59 np0005634017 kernel: tap2c3f8e94-02 (unregistering): left promiscuous mode
Feb 28 05:12:59 np0005634017 NetworkManager[49805]: <info>  [1772273579.1787] device (tap2c3f8e94-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:59Z|00554|binding|INFO|Releasing lport 2c3f8e94-025d-4aea-97be-c325f6366e0d from this chassis (sb_readonly=0)
Feb 28 05:12:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:59Z|00555|binding|INFO|Setting lport 2c3f8e94-025d-4aea-97be-c325f6366e0d down in Southbound
Feb 28 05:12:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:59Z|00556|binding|INFO|Removing iface tap2c3f8e94-02 ovn-installed in OVS
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.188 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.193 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:e3:c3 10.100.0.7'], port_security=['fa:16:3e:81:e3:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2c3f8e94-025d-4aea-97be-c325f6366e0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.194 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2c3f8e94-025d-4aea-97be-c325f6366e0d in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.195 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.213 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4df737d2-5659-4ef7-a4b0-8ac3ece77b7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:59 np0005634017 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Feb 28 05:12:59 np0005634017 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 14.942s CPU time.
Feb 28 05:12:59 np0005634017 systemd-machined[209480]: Machine qemu-69-instance-0000003e terminated.
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.251 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcf5c05-273e-404b-8f4c-0f32c1b983da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.255 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a64ec726-478a-406b-88c5-b829623bfbda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.285 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d52dd52f-9d29-49fb-8347-e065c01649a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.302 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00251a57-62ce-41f7-8f03-a2d72e00c13b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300614, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.319 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15c53f9b-d288-4050-ba6b-7c7db0a68858]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300615, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300615, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.320 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.326 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.326 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.327 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.327 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.327 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.359 243456 INFO nova.virt.libvirt.driver [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance destroyed successfully.#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.360 243456 DEBUG nova.objects.instance [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.374 243456 DEBUG nova.virt.libvirt.vif [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1290428002',display_name='tempest-ServerActionsTestOtherB-server-1290428002',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1290428002',id=62,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-2snxvf9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:35Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.375 243456 DEBUG nova.network.os_vif_util [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.376 243456 DEBUG nova.network.os_vif_util [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.376 243456 DEBUG os_vif [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.379 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c3f8e94-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.386 243456 INFO os_vif [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02')#033[00m
Feb 28 05:12:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:59Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:c6:f2 10.100.0.13
Feb 28 05:12:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:12:59Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:c6:f2 10.100.0.13
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.679 243456 INFO nova.virt.libvirt.driver [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deleting instance files /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_del#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.681 243456 INFO nova.virt.libvirt.driver [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deletion of /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_del complete#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.760 243456 INFO nova.compute.manager [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.761 243456 DEBUG oslo.service.loopingcall [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.761 243456 DEBUG nova.compute.manager [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:12:59 np0005634017 nova_compute[243452]: 2026-02-28 10:12:59.761 243456 DEBUG nova.network.neutron [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 341 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 265 op/s
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.434 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-unplugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.434 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.435 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.435 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.435 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] No waiting events found dispatching network-vif-unplugged-2c3f8e94-025d-4aea-97be-c325f6366e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.436 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-unplugged-2c3f8e94-025d-4aea-97be-c325f6366e0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.436 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.436 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.437 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.437 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.437 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] No waiting events found dispatching network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.438 243456 WARNING nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received unexpected event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.461 243456 DEBUG nova.network.neutron [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.479 243456 INFO nova.compute.manager [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 0.72 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.530 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.530 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.561 243456 DEBUG nova.compute.manager [req-a7d8093b-ed40-4858-9d20-940f3f454639 req-3763c14b-ada4-42f3-b9de-0c5bb4768ee4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-deleted-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:00 np0005634017 nova_compute[243452]: 2026-02-28 10:13:00.627 243456 DEBUG oslo_concurrency.processutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993608191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:01 np0005634017 nova_compute[243452]: 2026-02-28 10:13:01.260 243456 DEBUG oslo_concurrency.processutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:01 np0005634017 nova_compute[243452]: 2026-02-28 10:13:01.268 243456 DEBUG nova.compute.provider_tree [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:01 np0005634017 nova_compute[243452]: 2026-02-28 10:13:01.287 243456 DEBUG nova.scheduler.client.report [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:01 np0005634017 nova_compute[243452]: 2026-02-28 10:13:01.312 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:01 np0005634017 nova_compute[243452]: 2026-02-28 10:13:01.359 243456 INFO nova.scheduler.client.report [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3#033[00m
Feb 28 05:13:01 np0005634017 nova_compute[243452]: 2026-02-28 10:13:01.450 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 326 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 230 op/s
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.813 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.816 243456 INFO nova.compute.manager [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Terminating instance#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.817 243456 DEBUG nova.compute.manager [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:02 np0005634017 kernel: tap037eb744-30 (unregistering): left promiscuous mode
Feb 28 05:13:02 np0005634017 NetworkManager[49805]: <info>  [1772273582.8631] device (tap037eb744-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:02Z|00557|binding|INFO|Releasing lport 037eb744-3024-4a3d-b52c-894abe1cbac8 from this chassis (sb_readonly=0)
Feb 28 05:13:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:02Z|00558|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 down in Southbound
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.868 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:02Z|00559|binding|INFO|Removing iface tap037eb744-30 ovn-installed in OVS
Feb 28 05:13:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.875 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '9', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.876 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis#033[00m
Feb 28 05:13:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.877 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:02 np0005634017 nova_compute[243452]: 2026-02-28 10:13:02.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.879 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ff6a15-d4a7-46c0-9086-af45ad4cde69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.880 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa namespace which is not needed anymore#033[00m
Feb 28 05:13:02 np0005634017 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Feb 28 05:13:02 np0005634017 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003a.scope: Consumed 10.703s CPU time.
Feb 28 05:13:02 np0005634017 systemd-machined[209480]: Machine qemu-74-instance-0000003a terminated.
Feb 28 05:13:03 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : haproxy version is 2.8.14-c23fe91
Feb 28 05:13:03 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : path to executable is /usr/sbin/haproxy
Feb 28 05:13:03 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [WARNING]  (293187) : Exiting Master process...
Feb 28 05:13:03 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [WARNING]  (293187) : Exiting Master process...
Feb 28 05:13:03 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [ALERT]    (293187) : Current worker (293190) exited with code 143 (Terminated)
Feb 28 05:13:03 np0005634017 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [WARNING]  (293187) : All workers exited. Exiting... (0)
Feb 28 05:13:03 np0005634017 systemd[1]: libpod-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6.scope: Deactivated successfully.
Feb 28 05:13:03 np0005634017 podman[300693]: 2026-02-28 10:13:03.023248499 +0000 UTC m=+0.048681620 container died 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6-userdata-shm.mount: Deactivated successfully.
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.058 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.058 243456 DEBUG nova.objects.instance [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bc0f80b5e09dc8495c28b54de22f23404b9f324ad0e098a2a59a53d201acefe9-merged.mount: Deactivated successfully.
Feb 28 05:13:03 np0005634017 podman[300693]: 2026-02-28 10:13:03.080150589 +0000 UTC m=+0.105583740 container cleanup 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:13:03 np0005634017 systemd[1]: libpod-conmon-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6.scope: Deactivated successfully.
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.093 243456 DEBUG nova.virt.libvirt.vif [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:12:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.094 243456 DEBUG nova.network.os_vif_util [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.095 243456 DEBUG nova.network.os_vif_util [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.096 243456 DEBUG os_vif [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.098 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap037eb744-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.104 243456 INFO os_vif [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')#033[00m
Feb 28 05:13:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.152 243456 DEBUG nova.compute.manager [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG oslo_concurrency.lockutils [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG oslo_concurrency.lockutils [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG oslo_concurrency.lockutils [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG nova.compute.manager [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.154 243456 DEBUG nova.compute.manager [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:13:03 np0005634017 podman[300731]: 2026-02-28 10:13:03.155307752 +0000 UTC m=+0.057437716 container remove 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.160 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41b3b9ae-f9aa-4c6c-b3bc-771d16512ba5]: (4, ('Sat Feb 28 10:13:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa (4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6)\n4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6\nSat Feb 28 10:13:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa (4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6)\n4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e95839f-9684-4d56-88b2-03cbaf49f5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.164 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:03 np0005634017 kernel: tap41b22e92-d0: left promiscuous mode
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.165 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.177 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3896a4c2-1f0d-493e-8d57-7afdba42f939]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.189 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10c1502f-64da-44d4-9807-570847a98afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea4f2e3-fddb-4907-aafd-8bc45f826082]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a07704cb-f85a-4568-868d-7e0aee8d8197]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493287, 'reachable_time': 34066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300764, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.210 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:13:03 np0005634017 systemd[1]: run-netns-ovnmeta\x2d41b22e92\x2dd251\x2d48dd\x2d9bf8\x2d8f38cbd749fa.mount: Deactivated successfully.
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.211 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d236c172-9d7a-471d-95f5-4e699b54c786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.411 243456 INFO nova.virt.libvirt.driver [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting instance files /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.412 243456 INFO nova.virt.libvirt.driver [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deletion of /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del complete#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.487 243456 INFO nova.compute.manager [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.489 243456 DEBUG oslo.service.loopingcall [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.489 243456 DEBUG nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.490 243456 DEBUG nova.network.neutron [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.501 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.502 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.547 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:03 np0005634017 nova_compute[243452]: 2026-02-28 10:13:03.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 312 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 206 op/s
Feb 28 05:13:04 np0005634017 nova_compute[243452]: 2026-02-28 10:13:04.822 243456 DEBUG nova.network.neutron [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:04 np0005634017 nova_compute[243452]: 2026-02-28 10:13:04.841 243456 INFO nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 1.35 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:04 np0005634017 nova_compute[243452]: 2026-02-28 10:13:04.881 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:04 np0005634017 nova_compute[243452]: 2026-02-28 10:13:04.881 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:04 np0005634017 nova_compute[243452]: 2026-02-28 10:13:04.936 243456 DEBUG oslo_concurrency.processutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.048 243456 DEBUG nova.compute.manager [req-6f25a1de-e1e5-482e-a75f-ffcf7321c849 req-35232f4c-baba-49a1-813d-28508d0e3a2b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-deleted-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:05 np0005634017 podman[300768]: 2026-02-28 10:13:05.136856909 +0000 UTC m=+0.062043266 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 05:13:05 np0005634017 podman[300767]: 2026-02-28 10:13:05.185093685 +0000 UTC m=+0.114562722 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.319 243456 DEBUG nova.compute.manager [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.319 243456 DEBUG oslo_concurrency.lockutils [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.320 243456 DEBUG oslo_concurrency.lockutils [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.320 243456 DEBUG oslo_concurrency.lockutils [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.320 243456 DEBUG nova.compute.manager [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.321 243456 WARNING nova.compute.manager [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/919389033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.519 243456 DEBUG oslo_concurrency.processutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.525 243456 DEBUG nova.compute.provider_tree [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.552 243456 DEBUG nova.scheduler.client.report [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.582 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.606 243456 INFO nova.scheduler.client.report [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 30a5d845-ce28-490a-afe8-3b7552f02c63#033[00m
Feb 28 05:13:05 np0005634017 nova_compute[243452]: 2026-02-28 10:13:05.687 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 265 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.9 MiB/s wr, 203 op/s
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.849 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.850 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.871 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.908 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.910 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.938 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.977 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.978 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.987 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:13:06 np0005634017 nova_compute[243452]: 2026-02-28 10:13:06.988 243456 INFO nova.compute.claims [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.016 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.145 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:07.504 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3077429748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.697 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.703 243456 DEBUG nova.compute.provider_tree [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.730 243456 DEBUG nova.scheduler.client.report [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.773 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.775 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.778 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.786 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.786 243456 INFO nova.compute.claims [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.871 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.872 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.892 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.911 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:13:07 np0005634017 nova_compute[243452]: 2026-02-28 10:13:07.962 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 233 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 431 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.145 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.147 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.147 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Creating image(s)#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.172 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.198 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.225 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.229 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.262 243456 DEBUG nova.policy [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.295 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.295 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.296 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.297 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.331 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.336 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.371 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.371 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.372 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.372 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.372 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.374 243456 INFO nova.compute.manager [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Terminating instance#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.376 243456 DEBUG nova.compute.manager [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643035264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.514 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.521 243456 DEBUG nova.compute.provider_tree [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.538 243456 DEBUG nova.scheduler.client.report [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.570 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.571 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.616 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.617 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.648 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.684 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:13:08 np0005634017 kernel: tapebb57b0b-4f (unregistering): left promiscuous mode
Feb 28 05:13:08 np0005634017 NetworkManager[49805]: <info>  [1772273588.7078] device (tapebb57b0b-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.716 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:08Z|00560|binding|INFO|Releasing lport ebb57b0b-4fa0-4ee0-8791-87271512797e from this chassis (sb_readonly=0)
Feb 28 05:13:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:08Z|00561|binding|INFO|Setting lport ebb57b0b-4fa0-4ee0-8791-87271512797e down in Southbound
Feb 28 05:13:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:08Z|00562|binding|INFO|Removing iface tapebb57b0b-4f ovn-installed in OVS
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.729 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:c6:f2 10.100.0.13'], port_security=['fa:16:3e:e6:c6:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6cac1749-1126-44c9-b31c-1041025c52cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b83705c4693849a58c70b1271f24f320', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49856350-974b-427f-aa68-4afea31e430e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b184abca-cc34-4f4e-8734-d06f0ef64e8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ebb57b0b-4fa0-4ee0-8791-87271512797e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.731 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ebb57b0b-4fa0-4ee0-8791-87271512797e in datapath 8f735fcd-0d4b-4c56-85c4-3ead65135429 unbound from our chassis#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.733 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f735fcd-0d4b-4c56-85c4-3ead65135429, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[05f2c8bf-094c-48aa-8f92-a4e5dc3c9771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.736 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 namespace which is not needed anymore#033[00m
Feb 28 05:13:08 np0005634017 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000042.scope: Deactivated successfully.
Feb 28 05:13:08 np0005634017 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000042.scope: Consumed 12.600s CPU time.
Feb 28 05:13:08 np0005634017 systemd-machined[209480]: Machine qemu-73-instance-00000042 terminated.
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.805 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.806 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.807 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Creating image(s)#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.835 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.858 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.882 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.888 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:08 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : haproxy version is 2.8.14-c23fe91
Feb 28 05:13:08 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : path to executable is /usr/sbin/haproxy
Feb 28 05:13:08 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [WARNING]  (300182) : Exiting Master process...
Feb 28 05:13:08 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [ALERT]    (300182) : Current worker (300184) exited with code 143 (Terminated)
Feb 28 05:13:08 np0005634017 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [WARNING]  (300182) : All workers exited. Exiting... (0)
Feb 28 05:13:08 np0005634017 systemd[1]: libpod-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope: Deactivated successfully.
Feb 28 05:13:08 np0005634017 conmon[300178]: conmon ed7bf78f9237022bffc0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope/container/memory.events
Feb 28 05:13:08 np0005634017 podman[301006]: 2026-02-28 10:13:08.905002761 +0000 UTC m=+0.074784054 container died ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.933 243456 DEBUG nova.policy [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.941 243456 INFO nova.virt.libvirt.driver [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance destroyed successfully.#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.942 243456 DEBUG nova.objects.instance [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lazy-loading 'resources' on Instance uuid 6cac1749-1126-44c9-b31c-1041025c52cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.957 243456 DEBUG nova.virt.libvirt.vif [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1212768098',display_name='tempest-ServersTestManualDisk-server-1212768098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1212768098',id=66,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0ik5kXdNalWXEzR/gysmRW7reVUwqtO6ef6zS+WfZMKLSAXSFSlMXiq8+ds7SdKqMeBVaNe0jzsweQ7HRyVzZGYG856pZmMCyHy7oKBxG1WfL6nFJqI05F/mBkTXIwrw==',key_name='tempest-keypair-1576574983',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:12:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b83705c4693849a58c70b1271f24f320',ramdisk_id='',reservation_id='r-e45nckj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1613202530',owner_user_name='tempest-ServersTestManualDisk-1613202530-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2465d2d41534ef098e24bdd413eefab',uuid=6cac1749-1126-44c9-b31c-1041025c52cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.958 243456 DEBUG nova.network.os_vif_util [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converting VIF {"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.959 243456 DEBUG nova.network.os_vif_util [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.959 243456 DEBUG os_vif [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.962 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebb57b0b-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.970 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:08 np0005634017 nova_compute[243452]: 2026-02-28 10:13:08.972 243456 INFO os_vif [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f')#033[00m
Feb 28 05:13:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e-userdata-shm.mount: Deactivated successfully.
Feb 28 05:13:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ac9745197d659a7b64f0784193f16e075d7a26cccfd957b8957e2a616a406e62-merged.mount: Deactivated successfully.
Feb 28 05:13:09 np0005634017 podman[301006]: 2026-02-28 10:13:09.000715922 +0000 UTC m=+0.170497225 container cleanup ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:13:09 np0005634017 systemd[1]: libpod-conmon-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope: Deactivated successfully.
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.023 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.025 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.026 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.026 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.047 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.052 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bd2e7775-9332-417e-a139-0847263b3343_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:09 np0005634017 podman[301116]: 2026-02-28 10:13:09.099524731 +0000 UTC m=+0.074483826 container remove ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.105 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55decd61-747a-4002-bc4b-80d56a5e48c8]: (4, ('Sat Feb 28 10:13:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 (ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e)\ned7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e\nSat Feb 28 10:13:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 (ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e)\ned7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.107 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[632b61d4-8821-45eb-8c69-3b04a4ebf3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.108 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f735fcd-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:09 np0005634017 kernel: tap8f735fcd-00: left promiscuous mode
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.122 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8f5d0f-02f0-47f7-8c87-732f61f6ff08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef31b91e-9224-4816-8a7c-5672ec79ffc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.142 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d402da-d308-471b-a09a-8c9bbd7996b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee738fb2-1755-4d7d-99bf-bdda6cf7d9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507053, 'reachable_time': 30067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301181, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8f735fcd\x2d0d4b\x2d4c56\x2d85c4\x2d3ead65135429.mount: Deactivated successfully.
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.168 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:13:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.168 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[26fe1f86-58a8-42f1-a3a0-25ee6245c412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.172 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.209 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Successfully created port: 0f76084a-5cb2-4246-adc3-ec58ff470ed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.429 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bd2e7775-9332-417e-a139-0847263b3343_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.464 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.504 243456 DEBUG nova.compute.manager [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-unplugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.505 243456 DEBUG oslo_concurrency.lockutils [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.506 243456 DEBUG oslo_concurrency.lockutils [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.506 243456 DEBUG oslo_concurrency.lockutils [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.507 243456 DEBUG nova.compute.manager [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] No waiting events found dispatching network-vif-unplugged-ebb57b0b-4fa0-4ee0-8791-87271512797e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.507 243456 DEBUG nova.compute.manager [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-unplugged-ebb57b0b-4fa0-4ee0-8791-87271512797e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.508 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.508 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Ensure instance console log exists: /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.509 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.509 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.510 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.517 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image bd2e7775-9332-417e-a139-0847263b3343_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.678 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid bd2e7775-9332-417e-a139-0847263b3343 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.710 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.711 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Ensure instance console log exists: /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.712 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.712 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.713 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.780 243456 INFO nova.virt.libvirt.driver [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deleting instance files /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf_del#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.782 243456 INFO nova.virt.libvirt.driver [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deletion of /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf_del complete#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.852 243456 INFO nova.compute.manager [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 1.48 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.853 243456 DEBUG oslo.service.loopingcall [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.854 243456 DEBUG nova.compute.manager [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.854 243456 DEBUG nova.network.neutron [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:09 np0005634017 nova_compute[243452]: 2026-02-28 10:13:09.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 251 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.249 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Successfully created port: cac4ab79-f021-4f19-8f15-95ea09460f70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.689 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Successfully updated port: 0f76084a-5cb2-4246-adc3-ec58ff470ed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.712 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.713 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.713 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:13:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 6636 writes, 29K keys, 6636 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 6636 writes, 6636 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1802 writes, 8224 keys, 1802 commit groups, 1.0 writes per commit group, ingest: 10.65 MB, 0.02 MB/s
Interval WAL: 1802 writes, 1802 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     56.4      0.59              0.08        16    0.037       0      0       0.0       0.0
  L6      1/0    8.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    135.5    111.0      1.01              0.28        15    0.068     71K   8331       0.0       0.0
 Sum      1/0    8.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     85.8     91.0      1.60              0.36        31    0.052     71K   8331       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     87.5     90.1      0.46              0.11         8    0.057     22K   2559       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.5    111.0      1.01              0.28        15    0.068     71K   8331       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     56.8      0.58              0.08        15    0.039       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.032, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.6 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 15.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000203 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(985,14.95 MB,4.9186%) FilterBlock(32,205.42 KB,0.0659892%) IndexBlock(32,381.70 KB,0.122617%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.839 243456 DEBUG nova.compute.manager [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-changed-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.840 243456 DEBUG nova.compute.manager [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Refreshing instance network info cache due to event network-changed-0f76084a-5cb2-4246-adc3-ec58ff470ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:10 np0005634017 nova_compute[243452]: 2026-02-28 10:13:10.840 243456 DEBUG oslo_concurrency.lockutils [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.029 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.188 243456 DEBUG nova.network.neutron [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.207 243456 INFO nova.compute.manager [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 1.35 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.267 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.268 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.551 243456 DEBUG oslo_concurrency.processutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.622 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Successfully updated port: cac4ab79-f021-4f19-8f15-95ea09460f70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.633 243456 DEBUG nova.compute.manager [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.633 243456 DEBUG oslo_concurrency.lockutils [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.634 243456 DEBUG oslo_concurrency.lockutils [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.634 243456 DEBUG oslo_concurrency.lockutils [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.634 243456 DEBUG nova.compute.manager [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] No waiting events found dispatching network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.635 243456 WARNING nova.compute.manager [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received unexpected event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.636 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.636 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.636 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:11 np0005634017 nova_compute[243452]: 2026-02-28 10:13:11.883 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166054627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.122 243456 DEBUG oslo_concurrency.processutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.128 243456 DEBUG nova.compute.provider_tree [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 262 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 4.6 MiB/s wr, 108 op/s
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.146 243456 DEBUG nova.scheduler.client.report [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.167 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.208 243456 INFO nova.scheduler.client.report [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Deleted allocations for instance 6cac1749-1126-44c9-b31c-1041025c52cf#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.276 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.430 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updating instance_info_cache with network_info: [{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.451 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.451 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance network_info: |[{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.452 243456 DEBUG oslo_concurrency.lockutils [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.452 243456 DEBUG nova.network.neutron [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Refreshing network info cache for port 0f76084a-5cb2-4246-adc3-ec58ff470ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.455 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start _get_guest_xml network_info=[{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.459 243456 WARNING nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.465 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.468 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.483 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.484 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.485 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.485 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.486 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.486 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.487 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.487 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.487 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.488 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.488 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.489 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.489 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.489 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.493 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:12 np0005634017 nova_compute[243452]: 2026-02-28 10:13:12.979 243456 DEBUG nova.compute.manager [req-acb828ae-5d02-4a47-bd2d-bd1ea90a28d4 req-415510a3-48e8-4906-aadd-defe81c4d6b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-deleted-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1449638553' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.058 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.082 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.086 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.167 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updating instance_info_cache with network_info: [{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.195 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.195 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance network_info: |[{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.199 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start _get_guest_xml network_info=[{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.203 243456 WARNING nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.207 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.208 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.212 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.212 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.213 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.213 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.215 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.215 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.215 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.216 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.216 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.216 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.220 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312306103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.649 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.652 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-1',id=67,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-Multip
leCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:07Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=6af19b7d-b0a9-40fe-8d1a-f38c95452a10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.652 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.654 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.655 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.672 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <uuid>6af19b7d-b0a9-40fe-8d1a-f38c95452a10</uuid>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <name>instance-00000043</name>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:name>tempest-tempest.common.compute-instance-1120322056-1</nova:name>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:12</nova:creationTime>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <nova:port uuid="0f76084a-5cb2-4246-adc3-ec58ff470ed4">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <entry name="serial">6af19b7d-b0a9-40fe-8d1a-f38c95452a10</entry>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <entry name="uuid">6af19b7d-b0a9-40fe-8d1a-f38c95452a10</entry>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:20:c9:da"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <target dev="tap0f76084a-5c"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/console.log" append="off"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:13 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:13 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:13 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:13 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.679 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Preparing to wait for external event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.679 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.680 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.680 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.681 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-1',id=67,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:07Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=6af19b7d-b0a9-40fe-8d1a-f38c95452a10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.682 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.683 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.683 243456 DEBUG os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.685 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.685 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.688 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.689 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f76084a-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.690 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f76084a-5c, col_values=(('external_ids', {'iface-id': '0f76084a-5cb2-4246-adc3-ec58ff470ed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:c9:da', 'vm-uuid': '6af19b7d-b0a9-40fe-8d1a-f38c95452a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:13 np0005634017 NetworkManager[49805]: <info>  [1772273593.6926] manager: (tap0f76084a-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.698 243456 INFO os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c')#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.758 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.759 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.759 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:20:c9:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.760 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Using config drive#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.786 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.796 243456 DEBUG nova.compute.manager [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-changed-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.797 243456 DEBUG nova.compute.manager [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Refreshing instance network info cache due to event network-changed-cac4ab79-f021-4f19-8f15-95ea09460f70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.797 243456 DEBUG oslo_concurrency.lockutils [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.798 243456 DEBUG oslo_concurrency.lockutils [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.798 243456 DEBUG nova.network.neutron [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Refreshing network info cache for port cac4ab79-f021-4f19-8f15-95ea09460f70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409038409' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.834 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.858 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:13 np0005634017 nova_compute[243452]: 2026-02-28 10:13:13.863 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 246 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.358 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273579.356753, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.359 243456 INFO nova.compute.manager [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.382 243456 DEBUG nova.compute.manager [None req-f19a1aa9-fb14-46d0-aa5b-5569860717c6 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815072042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.425 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.427 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-2',id=68,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:08Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=bd2e7775-9332-417e-a139-0847263b3343,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.427 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.428 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.429 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd2e7775-9332-417e-a139-0847263b3343 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.450 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <uuid>bd2e7775-9332-417e-a139-0847263b3343</uuid>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <name>instance-00000044</name>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:name>tempest-tempest.common.compute-instance-1120322056-2</nova:name>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:13</nova:creationTime>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <nova:port uuid="cac4ab79-f021-4f19-8f15-95ea09460f70">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <entry name="serial">bd2e7775-9332-417e-a139-0847263b3343</entry>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <entry name="uuid">bd2e7775-9332-417e-a139-0847263b3343</entry>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/bd2e7775-9332-417e-a139-0847263b3343_disk">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/bd2e7775-9332-417e-a139-0847263b3343_disk.config">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:1b:f3:42"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <target dev="tapcac4ab79-f0"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/console.log" append="off"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:14 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:14 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:14 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:14 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.451 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Preparing to wait for external event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.451 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.452 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.452 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.453 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-2',id=68,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:08Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=bd2e7775-9332-417e-a139-0847263b3343,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.453 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.453 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.454 243456 DEBUG os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.455 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.455 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.457 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcac4ab79-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.458 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcac4ab79-f0, col_values=(('external_ids', {'iface-id': 'cac4ab79-f021-4f19-8f15-95ea09460f70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:f3:42', 'vm-uuid': 'bd2e7775-9332-417e-a139-0847263b3343'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:14 np0005634017 NetworkManager[49805]: <info>  [1772273594.4599] manager: (tapcac4ab79-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.459 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.464 243456 INFO os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0')#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.506 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.506 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.507 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:1b:f3:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.507 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Using config drive#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.527 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.766 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Creating config drive at /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.770 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsjyzgadp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.854 243456 DEBUG nova.network.neutron [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updated VIF entry in instance network info cache for port 0f76084a-5cb2-4246-adc3-ec58ff470ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.854 243456 DEBUG nova.network.neutron [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updating instance_info_cache with network_info: [{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.884 243456 DEBUG oslo_concurrency.lockutils [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.902 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsjyzgadp" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.927 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:14 np0005634017 nova_compute[243452]: 2026-02-28 10:13:14.930 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.097 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.098 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deleting local config drive /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:15 np0005634017 kernel: tap0f76084a-5c: entered promiscuous mode
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.1551] manager: (tap0f76084a-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00563|binding|INFO|Claiming lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 for this chassis.
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00564|binding|INFO|0f76084a-5cb2-4246-adc3-ec58ff470ed4: Claiming fa:16:3e:20:c9:da 10.100.0.4
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.178 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:c9:da 10.100.0.4'], port_security=['fa:16:3e:20:c9:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6af19b7d-b0a9-40fe-8d1a-f38c95452a10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0f76084a-5cb2-4246-adc3-ec58ff470ed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.180 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0f76084a-5cb2-4246-adc3-ec58ff470ed4 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 bound to our chassis#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.181 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26#033[00m
Feb 28 05:13:15 np0005634017 systemd-udevd[301546]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:15 np0005634017 systemd-machined[209480]: New machine qemu-75-instance-00000043.
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c33b7c62-df71-4178-b041-85a259f639f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.196 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ee6adef-21 in ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.199 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Creating config drive at /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.200 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ee6adef-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.200 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e465bef-3ef5-4c6b-8cec-5c0360aacc28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.202 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6f4d6f-3a48-4ff6-ab5c-f2aac862f1a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.202 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7nffun_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00565|binding|INFO|Setting lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 ovn-installed in OVS
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00566|binding|INFO|Setting lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 up in Southbound
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.2137] device (tap0f76084a-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:13:15 np0005634017 systemd[1]: Started Virtual Machine qemu-75-instance-00000043.
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.2148] device (tap0f76084a-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.220 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[039c7ff1-f46c-4c0d-adc6-5ad93bece74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.236 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed781ac-01fd-4b28-8eac-0f3fc7d06396]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.265 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bec1a43f-d45e-4838-b01c-2b280d9d2daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.270 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84b472b3-421a-40ef-bab2-86d28e60efce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 systemd-udevd[301551]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.2732] manager: (tap2ee6adef-20): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.305 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46424fb8-c90b-4469-a0d9-e7c96f17bb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.309 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9c7352-6098-4492-8874-fff08627d7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.3385] device (tap2ee6adef-20): carrier: link connected
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.344 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9e5a80-a2b9-4100-8d6c-58fe6ebbd437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.348 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7nffun_1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b67e63f8-5132-444e-8353-064e383941a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301593, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.391 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84d19f9d-624a-4014-aa3d-2a163c02f953]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:79d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510262, 'tstamp': 510262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301602, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.392 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.401 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config bd2e7775-9332-417e-a139-0847263b3343_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.411 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e65d5932-c707-40ff-ac45-98cbbd3ced1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301606, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfa8d41-3d18-4728-8df2-778828033cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.491 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b84d3ac4-82ec-4f35-994b-fba50ed25d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.492 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.493 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.493 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.4957] manager: (tap2ee6adef-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Feb 28 05:13:15 np0005634017 kernel: tap2ee6adef-20: entered promiscuous mode
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.500 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00567|binding|INFO|Releasing lport 51b33bea-9c2c-447c-817e-7f72887f045f from this chassis (sb_readonly=0)
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.511 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.512 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23b63601-4457-4ee7-803a-499c37c3b96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.514 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.514 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'env', 'PROCESS_TAG=haproxy-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ee6adef-26da-41a9-91a7-f9a806d37d26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.578 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config bd2e7775-9332-417e-a139-0847263b3343_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.579 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deleting local config drive /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:15 np0005634017 kernel: tapcac4ab79-f0: entered promiscuous mode
Feb 28 05:13:15 np0005634017 systemd-udevd[301573]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.6344] manager: (tapcac4ab79-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00568|binding|INFO|Claiming lport cac4ab79-f021-4f19-8f15-95ea09460f70 for this chassis.
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00569|binding|INFO|cac4ab79-f021-4f19-8f15-95ea09460f70: Claiming fa:16:3e:1b:f3:42 10.100.0.8
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.6468] device (tapcac4ab79-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:13:15 np0005634017 NetworkManager[49805]: <info>  [1772273595.6485] device (tapcac4ab79-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00570|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 ovn-installed in OVS
Feb 28 05:13:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:15Z|00571|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 up in Southbound
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.653 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:15 np0005634017 systemd-machined[209480]: New machine qemu-76-instance-00000044.
Feb 28 05:13:15 np0005634017 systemd[1]: Started Virtual Machine qemu-76-instance-00000044.
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.853 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273595.852532, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.853 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.876 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.880 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273595.8557558, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.881 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:15 np0005634017 podman[301721]: 2026-02-28 10:13:15.906341765 +0000 UTC m=+0.071604964 container create e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.911 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.919 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.920 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.920 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.920 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Processing event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.922 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.922 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] No waiting events found dispatching network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.922 243456 WARNING nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received unexpected event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.923 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.924 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.942 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.947 243456 INFO nova.virt.libvirt.driver [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance spawned successfully.#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.948 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.959 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.959 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:15 np0005634017 podman[301721]: 2026-02-28 10:13:15.863239073 +0000 UTC m=+0.028502262 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.965 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.965 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273595.934672, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.966 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.977 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:15 np0005634017 systemd[1]: Started libpod-conmon-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04.scope.
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.977 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.978 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.978 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.979 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.980 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.985 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:13:15 np0005634017 nova_compute[243452]: 2026-02-28 10:13:15.995 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.001 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642c3d42b12e83ccf48d08b0d735927c24084e9d8d6d8b72eb20cdfcacadcea2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:16 np0005634017 podman[301721]: 2026-02-28 10:13:16.029702324 +0000 UTC m=+0.194965513 container init e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 05:13:16 np0005634017 podman[301721]: 2026-02-28 10:13:16.035218929 +0000 UTC m=+0.200482108 container start e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.051 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:16 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : New worker (301743) forked
Feb 28 05:13:16 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : Loading success.
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.073 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 7.93 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.074 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.093 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.094 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.102 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.103 243456 INFO nova.compute.claims [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.115 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.118 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26#033[00m
Feb 28 05:13:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 246 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.133 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f80a9a24-f1c7-492c-a7dc-d8ceb7b69ae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.148 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 9.20 seconds to build instance.#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.153 243456 DEBUG nova.network.neutron [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updated VIF entry in instance network info cache for port cac4ab79-f021-4f19-8f15-95ea09460f70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.154 243456 DEBUG nova.network.neutron [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updating instance_info_cache with network_info: [{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.161 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6c50a34c-3c85-49e3-a5fe-cda7398850b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.166 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bac7bc6e-a1db-4154-ab4f-c068b1a463f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.172 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.173 243456 DEBUG oslo_concurrency.lockutils [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.202 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d3666be0-22e9-43dc-8172-0f9cf4e4f4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.220 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[604ca787-6d53-4cbf-8b9f-b9fa1f3bd6d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301798, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.240 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[235790e2-2884-432f-b090-7d826f0dc602]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510275, 'tstamp': 510275}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301799, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510277, 'tstamp': 510277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301799, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.242 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.244 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.245 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.251 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.292 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273596.2845995, bd2e7775-9332-417e-a139-0847263b3343 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.293 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.312 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.318 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273596.2851293, bd2e7775-9332-417e-a139-0847263b3343 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.318 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.335 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.338 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.356 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798364864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.906 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.911 243456 DEBUG nova.compute.provider_tree [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.937 243456 DEBUG nova.scheduler.client.report [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.966 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:16 np0005634017 nova_compute[243452]: 2026-02-28 10:13:16.967 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.021 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.042 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.058 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.157 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.158 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.159 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Creating image(s)#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.188 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.214 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.242 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.248 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.334 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.335 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.336 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.336 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.362 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.367 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.641 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.711 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] resizing rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:13:17 np0005634017 nova_compute[243452]: 2026-02-28 10:13:17.800 243456 DEBUG nova.objects.instance [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'migration_context' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.016 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.017 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.026 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.026 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.026 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Processing event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.028 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.028 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.028 243456 WARNING nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state building and task_state spawning.
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.029 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.029 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.030 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Ensure instance console log exists: /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.030 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.030 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.031 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.032 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.033 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.037 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.037 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273598.0369453, bd2e7775-9332-417e-a139-0847263b3343 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.038 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Resumed (Lifecycle Event)
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.043 243456 INFO nova.virt.libvirt.driver [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance spawned successfully.
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.044 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.046 243456 WARNING nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.056 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273583.055168, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.056 243456 INFO nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Stopped (Lifecycle Event)
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.060 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.060 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.065 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.066 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.067 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.067 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.067 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.071 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.118 243456 DEBUG nova.compute.manager [None req-4b289d54-5d0f-48e8-917f-8c9c0040b96e - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.127 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.128 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.128 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.129 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.129 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.130 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 263 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.136 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.161 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.162 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.166 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.169 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.169 243456 INFO nova.compute.claims [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.206 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 9.40 seconds to spawn the instance on the hypervisor.
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.206 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.269 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 11.27 seconds to build instance.
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.285 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.332 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.377 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969748596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.681 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.706 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.709 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567930006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.902 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.909 243456 DEBUG nova.compute.provider_tree [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.928 243456 DEBUG nova.scheduler.client.report [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.953 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.954 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.960 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.961 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.961 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 05:13:18 np0005634017 nova_compute[243452]: 2026-02-28 10:13:18.962 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.063 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.064 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.084 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.099 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.241 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.243 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.244 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Creating image(s)
Feb 28 05:13:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422064088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.367 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.397 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.426 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.431 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.469 243456 DEBUG nova.policy [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.472 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.474 243456 DEBUG nova.objects.instance [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'pci_devices' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.490 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <uuid>43abdac9-81fa-437c-8a7c-fb7b1a9ff97f</uuid>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <name>instance-00000045</name>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersAaction247Test-server-1375251304</nova:name>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:18</nova:creationTime>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:user uuid="a11a56ff5fb844d1b03a25da76136c9d">tempest-ServersAaction247Test-1778300402-project-member</nova:user>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <nova:project uuid="efd550ad4a4044b5b976691be30a846c">tempest-ServersAaction247Test-1778300402</nova:project>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <entry name="serial">43abdac9-81fa-437c-8a7c-fb7b1a9ff97f</entry>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <entry name="uuid">43abdac9-81fa-437c-8a7c-fb7b1a9ff97f</entry>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/console.log" append="off"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.508 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.509 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.510 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.510 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.533 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.538 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751327151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.613 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.614 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.615 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Using config drive#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.645 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.654 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.740 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.741 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.751 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.752 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.757 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.758 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.799 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.851 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Creating config drive at /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.855 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjmwgswpk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.897 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:13:19 np0005634017 nova_compute[243452]: 2026-02-28 10:13:19.997 243456 DEBUG nova.objects.instance [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.000 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjmwgswpk" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.023 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.028 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.067 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.068 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Ensure instance console log exists: /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.069 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.070 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.070 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 274 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 170 op/s
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.172 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.173 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deleting local config drive /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.216 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.217 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.218 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.218 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.218 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.219 243456 INFO nova.compute.manager [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Terminating instance#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.220 243456 DEBUG nova.compute.manager [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.222 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.223 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3685MB free_disk=59.93733172863722GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.224 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.224 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 systemd-machined[209480]: New machine qemu-77-instance-00000045.
Feb 28 05:13:20 np0005634017 systemd[1]: Started Virtual Machine qemu-77-instance-00000045.
Feb 28 05:13:20 np0005634017 kernel: tap0f76084a-5c (unregistering): left promiscuous mode
Feb 28 05:13:20 np0005634017 NetworkManager[49805]: <info>  [1772273600.2844] device (tap0f76084a-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00572|binding|INFO|Releasing lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 from this chassis (sb_readonly=0)
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00573|binding|INFO|Setting lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 down in Southbound
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00574|binding|INFO|Removing iface tap0f76084a-5c ovn-installed in OVS
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.324 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:c9:da 10.100.0.4'], port_security=['fa:16:3e:20:c9:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6af19b7d-b0a9-40fe-8d1a-f38c95452a10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0f76084a-5cb2-4246-adc3-ec58ff470ed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.326 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0f76084a-5cb2-4246-adc3-ec58ff470ed4 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:20 np0005634017 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Deactivated successfully.
Feb 28 05:13:20 np0005634017 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Consumed 4.923s CPU time.
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.326 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance bd2e7775-9332-417e-a139-0847263b3343 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c4a13c84-8fca-43c8-87c3-fde9f5d1c031 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.328 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.327 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26#033[00m
Feb 28 05:13:20 np0005634017 systemd-machined[209480]: Machine qemu-75-instance-00000043 terminated.
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.339 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Successfully created port: 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.347 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcedda6-31d6-4948-a7fa-47192e2561e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.381 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a047390d-51d4-4772-96d3-2928c3ebf8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[182299d1-a16e-498f-b086-ea40425ae804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.409 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.409 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.410 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.410 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.410 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.411 243456 INFO nova.compute.manager [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Terminating instance#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.413 243456 DEBUG nova.compute.manager [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.433 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46de1dd4-2733-4623-ba60-3fee6e3df6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.442 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49378d-f2cd-4ba4-aa64-4a09fbc73a1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302351, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 kernel: tapcac4ab79-f0 (unregistering): left promiscuous mode
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b9db6824-0371-44c0-bbda-14d099cd6deb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510275, 'tstamp': 510275}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302360, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510277, 'tstamp': 510277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302360, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 NetworkManager[49805]: <info>  [1772273600.4698] device (tapcac4ab79-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.470 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00575|binding|INFO|Releasing lport cac4ab79-f021-4f19-8f15-95ea09460f70 from this chassis (sb_readonly=0)
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00576|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 down in Southbound
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00577|binding|INFO|Removing iface tapcac4ab79-f0 ovn-installed in OVS
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.479 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.485 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.487 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.488 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.489 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.490 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29cbf046-e6b8-4c78-92ec-e990da63fd2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.491 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace which is not needed anymore#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.494 243456 INFO nova.virt.libvirt.driver [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance destroyed successfully.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.494 243456 DEBUG nova.objects.instance [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.516 243456 DEBUG nova.virt.libvirt.vif [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-1',id=67,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:16Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=6af19b7d-b0a9-40fe-8d1a-f38c95452a10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.517 243456 DEBUG nova.network.os_vif_util [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.518 243456 DEBUG nova.network.os_vif_util [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.518 243456 DEBUG os_vif [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.521 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f76084a-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Deactivated successfully.
Feb 28 05:13:20 np0005634017 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Consumed 2.928s CPU time.
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:20 np0005634017 systemd-machined[209480]: Machine qemu-76-instance-00000044 terminated.
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.531 243456 INFO os_vif [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c')#033[00m
Feb 28 05:13:20 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : haproxy version is 2.8.14-c23fe91
Feb 28 05:13:20 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : path to executable is /usr/sbin/haproxy
Feb 28 05:13:20 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [WARNING]  (301741) : Exiting Master process...
Feb 28 05:13:20 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [ALERT]    (301741) : Current worker (301743) exited with code 143 (Terminated)
Feb 28 05:13:20 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [WARNING]  (301741) : All workers exited. Exiting... (0)
Feb 28 05:13:20 np0005634017 systemd[1]: libpod-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04.scope: Deactivated successfully.
Feb 28 05:13:20 np0005634017 kernel: tapcac4ab79-f0: entered promiscuous mode
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00578|binding|INFO|Claiming lport cac4ab79-f021-4f19-8f15-95ea09460f70 for this chassis.
Feb 28 05:13:20 np0005634017 NetworkManager[49805]: <info>  [1772273600.6363] manager: (tapcac4ab79-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00579|binding|INFO|cac4ab79-f021-4f19-8f15-95ea09460f70: Claiming fa:16:3e:1b:f3:42 10.100.0.8
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 podman[302404]: 2026-02-28 10:13:20.639935365 +0000 UTC m=+0.053819564 container died e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00580|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 ovn-installed in OVS
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00581|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 up in Southbound
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.644 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:20 np0005634017 kernel: tapcac4ab79-f0 (unregistering): left promiscuous mode
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00582|binding|INFO|Releasing lport cac4ab79-f021-4f19-8f15-95ea09460f70 from this chassis (sb_readonly=0)
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00583|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 down in Southbound
Feb 28 05:13:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:20Z|00584|binding|INFO|Removing iface tapcac4ab79-f0 ovn-installed in OVS
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.665 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.670 243456 INFO nova.virt.libvirt.driver [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance destroyed successfully.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.670 243456 DEBUG nova.objects.instance [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid bd2e7775-9332-417e-a139-0847263b3343 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04-userdata-shm.mount: Deactivated successfully.
Feb 28 05:13:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-642c3d42b12e83ccf48d08b0d735927c24084e9d8d6d8b72eb20cdfcacadcea2-merged.mount: Deactivated successfully.
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.689 243456 DEBUG nova.virt.libvirt.vif [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-2',id=68,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-28T10:13:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:18Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=bd2e7775-9332-417e-a139-0847263b3343,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.689 243456 DEBUG nova.network.os_vif_util [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.690 243456 DEBUG nova.network.os_vif_util [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.690 243456 DEBUG os_vif [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.692 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcac4ab79-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 podman[302404]: 2026-02-28 10:13:20.695349653 +0000 UTC m=+0.109233842 container cleanup e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.696 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.698 243456 INFO os_vif [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0')#033[00m
Feb 28 05:13:20 np0005634017 systemd[1]: libpod-conmon-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04.scope: Deactivated successfully.
Feb 28 05:13:20 np0005634017 podman[302471]: 2026-02-28 10:13:20.760957578 +0000 UTC m=+0.042598809 container remove e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.765 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8762a2c6-48b2-49ed-87ef-d90fe880b517]: (4, ('Sat Feb 28 10:13:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04)\ne15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04\nSat Feb 28 10:13:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04)\ne15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.768 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10748680-423b-4ee0-b053-bbeb5c87db06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.769 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 kernel: tap2ee6adef-20: left promiscuous mode
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16513c05-368e-42aa-9c33-4d393e1a62e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.794 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e00fc01-c78c-4fe3-a02f-05495868ffa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30c7dea2-a5e1-4459-a331-a5ec9fbd3dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.815 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68f3dcf4-8f06-4bc4-b9b3-31bf7bd6dddc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510254, 'reachable_time': 36190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302535, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.819 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.819 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a7910dd9-7229-48b5-b3eb-a35828ea2035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 systemd[1]: run-netns-ovnmeta\x2d2ee6adef\x2d26da\x2d41a9\x2d91a7\x2df9a806d37d26.mount: Deactivated successfully.
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.820 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.821 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.822 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e1bb1b-599e-4462-88d1-197307fc90bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.823 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.824 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.824 243456 DEBUG nova.compute.manager [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-unplugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.824 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b459c17-6a84-41d3-b9b7-306c4c5e8498]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG oslo_concurrency.lockutils [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG oslo_concurrency.lockutils [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG oslo_concurrency.lockutils [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG nova.compute.manager [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] No waiting events found dispatching network-vif-unplugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.826 243456 DEBUG nova.compute.manager [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-unplugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.898 243456 INFO nova.virt.libvirt.driver [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deleting instance files /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_del#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.899 243456 INFO nova.virt.libvirt.driver [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deletion of /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_del complete#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.913 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273600.912879, 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.914 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.916 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.917 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.921 243456 INFO nova.virt.libvirt.driver [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance spawned successfully.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.922 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.943 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.956 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.962 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.963 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.964 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.965 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.966 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.966 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.973 243456 INFO nova.compute.manager [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.974 243456 DEBUG oslo.service.loopingcall [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.975 243456 DEBUG nova.compute.manager [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.976 243456 DEBUG nova.network.neutron [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.981 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.982 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273600.9140139, 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:20 np0005634017 nova_compute[243452]: 2026-02-28 10:13:20.982 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.007 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.012 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.019 243456 INFO nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 3.86 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.020 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367673230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.031 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.052 243456 INFO nova.virt.libvirt.driver [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deleting instance files /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343_del#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.053 243456 INFO nova.virt.libvirt.driver [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deletion of /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343_del complete#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.056 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.061 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.081 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.086 243456 INFO nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 5.02 seconds to build instance.#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.111 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Successfully updated port: 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.115 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.116 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.117 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.121 243456 INFO nova.compute.manager [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.122 243456 DEBUG oslo.service.loopingcall [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.123 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.123 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.123 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.124 243456 DEBUG nova.compute.manager [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.124 243456 DEBUG nova.network.neutron [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.204 243456 DEBUG nova.compute.manager [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.205 243456 DEBUG nova.compute.manager [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing instance network info cache due to event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.205 243456 DEBUG oslo_concurrency.lockutils [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.312 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.635 243456 DEBUG nova.network.neutron [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.657 243456 INFO nova.compute.manager [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 0.68 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.706 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.707 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:21 np0005634017 nova_compute[243452]: 2026-02-28 10:13:21.804 243456 DEBUG oslo_concurrency.processutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.049 243456 DEBUG nova.network.neutron [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.069 243456 INFO nova.compute.manager [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 0.94 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.110 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.111 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 295 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.158 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.159 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.159 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.176 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.177 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.254 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.271 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.272 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance network_info: |[{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.273 243456 DEBUG oslo_concurrency.lockutils [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.274 243456 DEBUG nova.network.neutron [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.276 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start _get_guest_xml network_info=[{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.284 243456 WARNING nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.297 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.298 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.302 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.303 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.303 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.304 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.305 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.305 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.305 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.306 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.306 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.306 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.307 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.307 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.308 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.308 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.312 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038723385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.380 243456 DEBUG oslo_concurrency.processutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.392 243456 DEBUG nova.compute.provider_tree [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.411 243456 DEBUG nova.scheduler.client.report [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.438 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.441 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.470 243456 INFO nova.scheduler.client.report [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance 6af19b7d-b0a9-40fe-8d1a-f38c95452a10#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.474 243456 DEBUG nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.533 243456 INFO nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] instance snapshotting#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.534 243456 DEBUG nova.objects.instance [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'flavor' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.563 243456 DEBUG oslo_concurrency.processutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.623 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.690 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.692 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.692 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.693 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.694 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.696 243456 INFO nova.compute.manager [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Terminating instance#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.698 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "refresh_cache-43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.699 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquired lock "refresh_cache-43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.699 243456 DEBUG nova.network.neutron [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542243089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.877 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.897 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.901 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.954 243456 DEBUG nova.network.neutron [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.963 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.963 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.964 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.964 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.964 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] No waiting events found dispatching network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received unexpected event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-deleted-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:22 np0005634017 nova_compute[243452]: 2026-02-28 10:13:22.979 243456 INFO nova.virt.libvirt.driver [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Beginning live snapshot process#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.063 243456 DEBUG nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Feb 28 05:13:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742701125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.182 243456 DEBUG oslo_concurrency.processutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.187 243456 DEBUG nova.compute.provider_tree [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.209 243456 DEBUG nova.scheduler.client.report [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.243 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.294 243456 INFO nova.scheduler.client.report [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance bd2e7775-9332-417e-a139-0847263b3343#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.331 243456 DEBUG nova.compute.manager [req-f40751e4-9982-4cef-85f3-601c19078689 req-545b6372-4ee2-4026-af8d-06c171aa3c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-deleted-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.395 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.398 243456 DEBUG nova.network.neutron [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.438 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Releasing lock "refresh_cache-43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.438 243456 DEBUG nova.compute.manager [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082823000' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.495 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:23 np0005634017 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Deactivated successfully.
Feb 28 05:13:23 np0005634017 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Consumed 3.123s CPU time.
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.498 243456 DEBUG nova.virt.libvirt.vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132377693',display_name='tempest-TestNetworkAdvancedServerOps-server-1132377693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132377693',id=70,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF644+rgLhlFamGuaTmUUFO0KlVXAPKX08nnbDy2rOoT8ige3Fj9kAWfB90244fjvTph7JmDo+5JGi6o3TsUwr4tGSK5dI2cnuR44Pht5/KXvc5e6feRQxx/zv8GZOsh+w==',key_name='tempest-TestNetworkAdvancedServerOps-1238511245',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-8dn1q2v0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:19Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=c4a13c84-8fca-43c8-87c3-fde9f5d1c031,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.498 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.499 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.501 243456 DEBUG nova.objects.instance [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:23 np0005634017 systemd-machined[209480]: Machine qemu-77-instance-00000045 terminated.
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.525 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <uuid>c4a13c84-8fca-43c8-87c3-fde9f5d1c031</uuid>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <name>instance-00000046</name>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1132377693</nova:name>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:22</nova:creationTime>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <nova:port uuid="9be79a2c-76fa-4a58-a532-eac0151d2bb1">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <entry name="serial">c4a13c84-8fca-43c8-87c3-fde9f5d1c031</entry>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <entry name="uuid">c4a13c84-8fca-43c8-87c3-fde9f5d1c031</entry>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:54:18:0c"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <target dev="tap9be79a2c-76"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/console.log" append="off"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:23 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:23 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:23 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:23 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.526 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Preparing to wait for external event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.526 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.526 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.527 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.527 243456 DEBUG nova.virt.libvirt.vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132377693',display_name='tempest-TestNetworkAdvancedServerOps-server-1132377693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132377693',id=70,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF644+rgLhlFamGuaTmUUFO0KlVXAPKX08nnbDy2rOoT8ige3Fj9kAWfB90244fjvTph7JmDo+5JGi6o3TsUwr4tGSK5dI2cnuR44Pht5/KXvc5e6feRQxx/zv8GZOsh+w==',key_name='tempest-TestNetworkAdvancedServerOps-1238511245',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-8dn1q2v0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:19Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=c4a13c84-8fca-43c8-87c3-fde9f5d1c031,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.527 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.528 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.528 243456 DEBUG os_vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.529 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.529 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.536 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9be79a2c-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.537 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9be79a2c-76, col_values=(('external_ids', {'iface-id': '9be79a2c-76fa-4a58-a532-eac0151d2bb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:18:0c', 'vm-uuid': 'c4a13c84-8fca-43c8-87c3-fde9f5d1c031'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:23 np0005634017 NetworkManager[49805]: <info>  [1772273603.5394] manager: (tap9be79a2c-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.540 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.545 243456 INFO os_vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76')#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.619 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.620 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.620 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:54:18:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.620 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Using config drive#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.643 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.656 243456 INFO nova.virt.libvirt.driver [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance destroyed successfully.#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.656 243456 DEBUG nova.objects.instance [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'resources' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.707 243456 DEBUG nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.738 243456 DEBUG nova.network.neutron [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updated VIF entry in instance network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.739 243456 DEBUG nova.network.neutron [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.754 243456 DEBUG oslo_concurrency.lockutils [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.936 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273588.809136, 6cac1749-1126-44c9-b31c-1041025c52cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.938 243456 INFO nova.compute.manager [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:13:23 np0005634017 nova_compute[243452]: 2026-02-28 10:13:23.976 243456 DEBUG nova.compute.manager [None req-756119fa-3b38-4d26-a376-e677e97f6cef - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.004 243456 INFO nova.virt.libvirt.driver [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deleting instance files /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_del#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.005 243456 INFO nova.virt.libvirt.driver [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deletion of /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_del complete#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.073 243456 INFO nova.compute.manager [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.074 243456 DEBUG oslo.service.loopingcall [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.074 243456 DEBUG nova.compute.manager [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.075 243456 DEBUG nova.network.neutron [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.130 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Creating config drive at /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config#033[00m
Feb 28 05:13:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 294 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.3 MiB/s wr, 330 op/s
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.136 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzukkz8cj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.274 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzukkz8cj" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.310 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.315 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.489 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.491 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deleting local config drive /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:24 np0005634017 kernel: tap9be79a2c-76: entered promiscuous mode
Feb 28 05:13:24 np0005634017 NetworkManager[49805]: <info>  [1772273604.5521] manager: (tap9be79a2c-76): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Feb 28 05:13:24 np0005634017 systemd-udevd[302650]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:24Z|00585|binding|INFO|Claiming lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 for this chassis.
Feb 28 05:13:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:24Z|00586|binding|INFO|9be79a2c-76fa-4a58-a532-eac0151d2bb1: Claiming fa:16:3e:54:18:0c 10.100.0.3
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 NetworkManager[49805]: <info>  [1772273604.5712] device (tap9be79a2c-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:13:24 np0005634017 NetworkManager[49805]: <info>  [1772273604.5723] device (tap9be79a2c-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.575 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:18:0c 10.100.0.3'], port_security=['fa:16:3e:54:18:0c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c4a13c84-8fca-43c8-87c3-fde9f5d1c031', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aeb82654-4ae5-4b85-ab47-1d2fe984318d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eff969a-a040-4a86-b625-9ebfe779e412, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9be79a2c-76fa-4a58-a532-eac0151d2bb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.577 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 in datapath 5d8683e2-4377-476f-ae8b-6d3dd4e61943 bound to our chassis#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.578 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d8683e2-4377-476f-ae8b-6d3dd4e61943#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.593 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1c3b47-590b-4715-b282-c5ca5ce1e0fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 systemd-machined[209480]: New machine qemu-78-instance-00000046.
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.595 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d8683e2-41 in ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.598 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d8683e2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.598 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddb0034-0a4c-4b0c-9682-392f520deeeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.600 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a76ed88f-edc6-405e-a3bb-1adf0eee9200]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 systemd[1]: Started Virtual Machine qemu-78-instance-00000046.
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.618 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c70666-4bd0-44e6-9ab0-a1cba7784b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:24Z|00587|binding|INFO|Setting lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 ovn-installed in OVS
Feb 28 05:13:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:24Z|00588|binding|INFO|Setting lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 up in Southbound
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.631 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66a08e6b-9268-40cc-a9be-92c5dd76c1fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.664 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65ccda1a-71ad-4d74-9e29-8a7dc38ec3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12bbc8a5-7826-4191-be3d-97d908f08ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 NetworkManager[49805]: <info>  [1772273604.6739] manager: (tap5d8683e2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.712 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[14e3a695-076a-4b42-9624-29a25e6c4bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.717 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7ceb2d-f38b-4c44-b3c9-83da1d8562dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 NetworkManager[49805]: <info>  [1772273604.7483] device (tap5d8683e2-40): carrier: link connected
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.757 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7e4469-46a1-472c-9c04-6971ffab4252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.774 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[138edc01-d5d0-4819-a975-1e001549cc43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d8683e2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:0b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511203, 'reachable_time': 18784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302778, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.791 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fc8fa4-81e7-44f2-b27c-d0ea9dff27d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:b58'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511203, 'tstamp': 511203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302779, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a725acc-870b-4879-804b-5f95669abe8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d8683e2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:0b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511203, 'reachable_time': 18784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302780, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[772c66c0-0d5f-45a2-a303-d27edca92d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6f7f86-1547-4fdb-8fb8-d2f0b07d3dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.933 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d8683e2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.934 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.934 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d8683e2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 NetworkManager[49805]: <info>  [1772273604.9371] manager: (tap5d8683e2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Feb 28 05:13:24 np0005634017 kernel: tap5d8683e2-40: entered promiscuous mode
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.941 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d8683e2-40, col_values=(('external_ids', {'iface-id': '6fea9809-bf5e-43e8-beec-b6cf6edc52cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:24Z|00589|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 05:13:24 np0005634017 nova_compute[243452]: 2026-02-28 10:13:24.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.954 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d8683e2-4377-476f-ae8b-6d3dd4e61943.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d8683e2-4377-476f-ae8b-6d3dd4e61943.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43dfcc7e-4201-4bc4-9227-79fb56086c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.956 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-5d8683e2-4377-476f-ae8b-6d3dd4e61943
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/5d8683e2-4377-476f-ae8b-6d3dd4e61943.pid.haproxy
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 5d8683e2-4377-476f-ae8b-6d3dd4e61943
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:13:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.957 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'env', 'PROCESS_TAG=haproxy-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d8683e2-4377-476f-ae8b-6d3dd4e61943.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.054 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273605.0533674, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.054 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.083 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.098 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273605.053559, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.132 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.135 243456 DEBUG nova.network.neutron [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.140 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.180 243456 DEBUG nova.network.neutron [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.186 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.207 243456 INFO nova.compute.manager [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 1.13 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.293 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.294 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:25 np0005634017 podman[302852]: 2026-02-28 10:13:25.297161806 +0000 UTC m=+0.052142927 container create a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:13:25 np0005634017 systemd[1]: Started libpod-conmon-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612.scope.
Feb 28 05:13:25 np0005634017 podman[302852]: 2026-02-28 10:13:25.270392704 +0000 UTC m=+0.025373835 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:13:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da54b6f18717d512266281e970794ab514d96d93e0121c460ef59a108898240/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:25 np0005634017 podman[302852]: 2026-02-28 10:13:25.396226342 +0000 UTC m=+0.151207483 container init a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:13:25 np0005634017 podman[302852]: 2026-02-28 10:13:25.40361183 +0000 UTC m=+0.158592951 container start a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:13:25 np0005634017 nova_compute[243452]: 2026-02-28 10:13:25.401 243456 DEBUG oslo_concurrency.processutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:25 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : New worker (302874) forked
Feb 28 05:13:25 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : Loading success.
Feb 28 05:13:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015353337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.059 243456 DEBUG oslo_concurrency.processutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.068 243456 DEBUG nova.compute.provider_tree [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:26 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.095 243456 DEBUG nova.scheduler.client.report [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.119 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 357 op/s
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.162 243456 INFO nova.scheduler.client.report [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Deleted allocations for instance 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f#033[00m
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.237 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.564 243456 DEBUG nova.compute.manager [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.565 243456 DEBUG oslo_concurrency.lockutils [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.566 243456 DEBUG oslo_concurrency.lockutils [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.566 243456 DEBUG oslo_concurrency.lockutils [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.566 243456 DEBUG nova.compute.manager [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Processing event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.567 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.574 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273606.5738165, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Resumed (Lifecycle Event)
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.582 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.586 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance spawned successfully.
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.587 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.617 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.626 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.633 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.634 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.635 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.636 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.637 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.638 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.648 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.690 243456 INFO nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 7.45 seconds to spawn the instance on the hypervisor.
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.692 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.793 243456 INFO nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 8.67 seconds to build instance.
Feb 28 05:13:26 np0005634017 nova_compute[243452]: 2026-02-28 10:13:26.843 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.740 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.740 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.762 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.776 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.776 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.802 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.887 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.888 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.894 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.894 243456 INFO nova.compute.claims [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:13:27 np0005634017 nova_compute[243452]: 2026-02-28 10:13:27.901 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.068 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 200 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 3.6 MiB/s wr, 384 op/s
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.540 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324003416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.639 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.648 243456 DEBUG nova.compute.provider_tree [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.667 243456 DEBUG nova.scheduler.client.report [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.694 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.695 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.698 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.703 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.703 243456 INFO nova.compute.claims [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.739 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.765 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.766 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.792 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.821 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.876 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.934 243456 DEBUG nova.compute.manager [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG oslo_concurrency.lockutils [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG oslo_concurrency.lockutils [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG oslo_concurrency.lockutils [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG nova.compute.manager [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] No waiting events found dispatching network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.936 243456 WARNING nova.compute.manager [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received unexpected event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 for instance with vm_state active and task_state None.
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.978 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.980 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:13:28 np0005634017 nova_compute[243452]: 2026-02-28 10:13:28.980 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Creating image(s)
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.003 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.030 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.054 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.063 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:13:29
Feb 28 05:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'vms']
Feb 28 05:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.105 243456 DEBUG nova.policy [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.138 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.139 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.139 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.139 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.169 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.173 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.407 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697708311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.507 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.516 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.554 243456 DEBUG nova.compute.provider_tree [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.573 243456 DEBUG nova.scheduler.client.report [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.595 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.596 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.639 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid a16c1faa-2568-47fc-8006-91c68ae7ae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.651 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.651 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.656 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.656 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Ensure instance console log exists: /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.657 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.657 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.657 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.672 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.699 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.792 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.794 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.795 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Creating image(s)
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.827 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.853 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.876 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.880 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.941 243456 DEBUG nova.policy [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.950 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.950 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.951 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.951 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.974 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:29 np0005634017 nova_compute[243452]: 2026-02-28 10:13:29.978 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f3def0af-1227-498f-a525-0df8d5bb3768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 209 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.4 MiB/s wr, 412 op/s
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.298 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Successfully created port: b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.460 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f3def0af-1227-498f-a525-0df8d5bb3768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.551 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:13:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.677 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid f3def0af-1227-498f-a525-0df8d5bb3768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.705 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.705 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Ensure instance console log exists: /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.706 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.707 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.707 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:30 np0005634017 NetworkManager[49805]: <info>  [1772273610.7164] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Feb 28 05:13:30 np0005634017 NetworkManager[49805]: <info>  [1772273610.7174] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.719 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Successfully created port: fa6d7d29-c113-4729-a953-5dc14a05cd16 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:30Z|00590|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 05:13:30 np0005634017 nova_compute[243452]: 2026-02-28 10:13:30.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.783 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Successfully updated port: b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.800 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.801 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.801 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.921 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Successfully updated port: fa6d7d29-c113-4729-a953-5dc14a05cd16 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.938 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.939 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.939 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:13:31 np0005634017 nova_compute[243452]: 2026-02-28 10:13:31.989 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.078 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:13:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 246 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.5 MiB/s wr, 354 op/s
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.221 243456 DEBUG nova.compute.manager [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.221 243456 DEBUG nova.compute.manager [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing instance network info cache due to event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.222 243456 DEBUG oslo_concurrency.lockutils [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.222 243456 DEBUG oslo_concurrency.lockutils [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.222 243456 DEBUG nova.network.neutron [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.300 243456 DEBUG nova.compute.manager [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-changed-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.300 243456 DEBUG nova.compute.manager [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Refreshing instance network info cache due to event network-changed-fa6d7d29-c113-4729-a953-5dc14a05cd16. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:13:32 np0005634017 nova_compute[243452]: 2026-02-28 10:13:32.300 243456 DEBUG oslo_concurrency.lockutils [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:13:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.596 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.598 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.621 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.703 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.704 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.714 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.715 243456 INFO nova.compute.claims [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.741 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:33 np0005634017 nova_compute[243452]: 2026-02-28 10:13:33.892 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 282 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 298 op/s
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.271 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updating instance_info_cache with network_info: [{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.280 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updating instance_info_cache with network_info: [{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.306 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.307 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance network_info: |[{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.308 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.308 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance network_info: |[{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.313 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start _get_guest_xml network_info=[{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.314 243456 DEBUG oslo_concurrency.lockutils [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.315 243456 DEBUG nova.network.neutron [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Refreshing network info cache for port fa6d7d29-c113-4729-a953-5dc14a05cd16 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.320 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start _get_guest_xml network_info=[{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.331 243456 DEBUG nova.compute.manager [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-changed-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.332 243456 DEBUG nova.compute.manager [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Refreshing instance network info cache due to event network-changed-b8b97a75-8d54-4bd6-8372-eaff2a5aa910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.333 243456 DEBUG oslo_concurrency.lockutils [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.333 243456 DEBUG oslo_concurrency.lockutils [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.333 243456 DEBUG nova.network.neutron [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Refreshing network info cache for port b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.349 243456 WARNING nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.356 243456 WARNING nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.360 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.361 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.363 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.364 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.365 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.366 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.366 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.367 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.367 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.373 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.417 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.418 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.419 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.419 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.422 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.422 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.425 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963363167' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.485 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.492 243456 DEBUG nova.compute.provider_tree [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.507 243456 DEBUG nova.scheduler.client.report [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.530 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.531 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.581 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.583 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.601 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.617 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.697 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.699 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.700 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating image(s)#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.726 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.760 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.793 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.797 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.893 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.895 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.895 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.896 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242527802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.917 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.922 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.959 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282437512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.985 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:34 np0005634017 nova_compute[243452]: 2026-02-28 10:13:34.990 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.038 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.077 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.088 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.180 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.263 243456 DEBUG nova.network.neutron [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updated VIF entry in instance network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.264 243456 DEBUG nova.network.neutron [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.270 243456 DEBUG nova.policy [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6b5724da2e648fd85fd8cb293525967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.283 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.317 243456 DEBUG oslo_concurrency.lockutils [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.379 243456 DEBUG nova.objects.instance [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.403 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.403 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Ensure instance console log exists: /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.404 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.404 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.405 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.489 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273600.4606256, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.490 243456 INFO nova.compute.manager [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.513 243456 DEBUG nova.compute.manager [None req-cbdfa404-fcb6-4b9b-8942-c46cb283e620 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598360059' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.661 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273600.6599488, bd2e7775-9332-417e-a139-0847263b3343 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.662 243456 INFO nova.compute.manager [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.671 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.673 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-1',id=71,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:28Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=a16c1faa-2568-47fc-8006-91c68ae7ae5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.673 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.674 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.675 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid a16c1faa-2568-47fc-8006-91c68ae7ae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798180972' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.690 243456 DEBUG nova.compute.manager [None req-6c082276-e6a9-4a7f-9837-ded9d67b7c29 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.692 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <uuid>a16c1faa-2568-47fc-8006-91c68ae7ae5d</uuid>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <name>instance-00000047</name>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:name>tempest-MultipleCreateTestJSON-server-650257309-1</nova:name>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:34</nova:creationTime>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:port uuid="b8b97a75-8d54-4bd6-8372-eaff2a5aa910">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="serial">a16c1faa-2568-47fc-8006-91c68ae7ae5d</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="uuid">a16c1faa-2568-47fc-8006-91c68ae7ae5d</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:dd:7a:dc"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <target dev="tapb8b97a75-8d"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/console.log" append="off"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:35 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:35 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.697 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Preparing to wait for external event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.698 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.698 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-1',id=71,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-Multi
pleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:28Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=a16c1faa-2568-47fc-8006-91c68ae7ae5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.699 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.700 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.700 243456 DEBUG os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.701 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.702 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.702 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.704 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-2',id=72,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateT
estJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:29Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=f3def0af-1227-498f-a525-0df8d5bb3768,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.705 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.705 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.706 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3def0af-1227-498f-a525-0df8d5bb3768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.708 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.708 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8b97a75-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.709 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8b97a75-8d, col_values=(('external_ids', {'iface-id': 'b8b97a75-8d54-4bd6-8372-eaff2a5aa910', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:7a:dc', 'vm-uuid': 'a16c1faa-2568-47fc-8006-91c68ae7ae5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 NetworkManager[49805]: <info>  [1772273615.7113] manager: (tapb8b97a75-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.718 243456 INFO os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d')#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.721 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <uuid>f3def0af-1227-498f-a525-0df8d5bb3768</uuid>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <name>instance-00000048</name>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:name>tempest-MultipleCreateTestJSON-server-650257309-2</nova:name>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:34</nova:creationTime>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <nova:port uuid="fa6d7d29-c113-4729-a953-5dc14a05cd16">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="serial">f3def0af-1227-498f-a525-0df8d5bb3768</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="uuid">f3def0af-1227-498f-a525-0df8d5bb3768</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f3def0af-1227-498f-a525-0df8d5bb3768_disk">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f3def0af-1227-498f-a525-0df8d5bb3768_disk.config">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:2c:d1:ad"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <target dev="tapfa6d7d29-c1"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/console.log" append="off"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:35 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:35 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:35 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:35 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.725 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Preparing to wait for external event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.725 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.726 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.726 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.727 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-2',id=72,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:29Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=f3def0af-1227-498f-a525-0df8d5bb3768,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.727 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.727 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.728 243456 DEBUG os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.728 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.729 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.733 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa6d7d29-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.734 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa6d7d29-c1, col_values=(('external_ids', {'iface-id': 'fa6d7d29-c113-4729-a953-5dc14a05cd16', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:d1:ad', 'vm-uuid': 'f3def0af-1227-498f-a525-0df8d5bb3768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 NetworkManager[49805]: <info>  [1772273615.7378] manager: (tapfa6d7d29-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.739 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.748 243456 INFO os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1')#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.794 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.794 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.795 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:dd:7a:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.795 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Using config drive#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.817 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:35 np0005634017 podman[303600]: 2026-02-28 10:13:35.820406891 +0000 UTC m=+0.055539333 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.840 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.840 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.841 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:2c:d1:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.841 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Using config drive#033[00m
Feb 28 05:13:35 np0005634017 podman[303597]: 2026-02-28 10:13:35.857180914 +0000 UTC m=+0.099465527 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 28 05:13:35 np0005634017 nova_compute[243452]: 2026-02-28 10:13:35.871 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.120 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Successfully created port: 41441957-9492-481d-847c-895c9fd2ef8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:13:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 307 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 227 op/s
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.298 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Creating config drive at /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.302 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxz8qkimt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.347 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Creating config drive at /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.351 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb8hqeua_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.448 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxz8qkimt" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.472 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.476 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config f3def0af-1227-498f-a525-0df8d5bb3768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.507 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb8hqeua_" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.535 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.541 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.611 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config f3def0af-1227-498f-a525-0df8d5bb3768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.612 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deleting local config drive /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:36 np0005634017 kernel: tapfa6d7d29-c1: entered promiscuous mode
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.6709] manager: (tapfa6d7d29-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00591|binding|INFO|Claiming lport fa6d7d29-c113-4729-a953-5dc14a05cd16 for this chassis.
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00592|binding|INFO|fa6d7d29-c113-4729-a953-5dc14a05cd16: Claiming fa:16:3e:2c:d1:ad 10.100.0.3
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.688 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:d1:ad 10.100.0.3'], port_security=['fa:16:3e:2c:d1:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f3def0af-1227-498f-a525-0df8d5bb3768', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fa6d7d29-c113-4729-a953-5dc14a05cd16) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.689 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fa6d7d29-c113-4729-a953-5dc14a05cd16 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 bound to our chassis#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.691 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26#033[00m
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00593|binding|INFO|Setting lport fa6d7d29-c113-4729-a953-5dc14a05cd16 ovn-installed in OVS
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00594|binding|INFO|Setting lport fa6d7d29-c113-4729-a953-5dc14a05cd16 up in Southbound
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:36 np0005634017 systemd-udevd[303775]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:36 np0005634017 systemd-machined[209480]: New machine qemu-79-instance-00000048.
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e2963e-4467-4493-a56a-38174fadc521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.708 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ee6adef-21 in ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.710 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ee6adef-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[635741ba-499e-48aa-8f27-a94d5dcfb0d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1783739-477a-4ae2-b370-e2dca5aae4ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 systemd[1]: Started Virtual Machine qemu-79-instance-00000048.
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.715 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.715 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deleting local config drive /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.7248] device (tapfa6d7d29-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.7258] device (tapfa6d7d29-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.734 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8575d6cd-6de5-4d0f-a02c-80f5885a1e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.748 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fae84556-1268-49ec-b541-5c3b58547d3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.780 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[589ef9ea-7542-458d-8e43-c5cfed55f7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.7887] manager: (tapb8b97a75-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Feb 28 05:13:36 np0005634017 kernel: tapb8b97a75-8d: entered promiscuous mode
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.7902] manager: (tap2ee6adef-20): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c48511dc-dbae-499d-9d77-baceb8ac7d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 systemd-udevd[303779]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00595|binding|INFO|Claiming lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 for this chassis.
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00596|binding|INFO|b8b97a75-8d54-4bd6-8372-eaff2a5aa910: Claiming fa:16:3e:dd:7a:dc 10.100.0.7
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.8007] device (tapb8b97a75-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.8017] device (tapb8b97a75-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00597|binding|INFO|Setting lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 ovn-installed in OVS
Feb 28 05:13:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:36Z|00598|binding|INFO|Setting lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 up in Southbound
Feb 28 05:13:36 np0005634017 nova_compute[243452]: 2026-02-28 10:13:36.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.808 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:7a:dc 10.100.0.7'], port_security=['fa:16:3e:dd:7a:dc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a16c1faa-2568-47fc-8006-91c68ae7ae5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8b97a75-8d54-4bd6-8372-eaff2a5aa910) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:36 np0005634017 systemd-machined[209480]: New machine qemu-80-instance-00000047.
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.834 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e28a8514-dd59-4078-abd5-571e81f476c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.838 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e91e5d98-e6bb-4b18-baeb-98e07eab6600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 systemd[1]: Started Virtual Machine qemu-80-instance-00000047.
Feb 28 05:13:36 np0005634017 NetworkManager[49805]: <info>  [1772273616.8663] device (tap2ee6adef-20): carrier: link connected
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.876 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[23d20a2e-96c4-4a87-a7f9-3fa7a7a66c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.907 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[96c4b22a-b3ed-421a-9050-6190aec682a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303823, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[968311fd-eab3-4bc7-bace-8136eb0c10d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:79d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512415, 'tstamp': 512415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303827, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.953 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d959ae2e-685c-472a-b558-c6dc6d3588ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303829, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.984 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04307bb9-b828-4c00-aa52-4a6304875c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a075beaf-e4d6-4b29-a3b8-e0cda06e0641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.032 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.032 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:37 np0005634017 kernel: tap2ee6adef-20: entered promiscuous mode
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:37 np0005634017 NetworkManager[49805]: <info>  [1772273617.0352] manager: (tap2ee6adef-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.037 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:37Z|00599|binding|INFO|Releasing lport 51b33bea-9c2c-447c-817e-7f72887f045f from this chassis (sb_readonly=0)
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.038 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.041 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e67c8ac8-44cb-45c3-9ff7-fd7c31b24152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.043 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.045 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'env', 'PROCESS_TAG=haproxy-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ee6adef-26da-41a9-91a7-f9a806d37d26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.309 243456 DEBUG nova.network.neutron [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updated VIF entry in instance network info cache for port b8b97a75-8d54-4bd6-8372-eaff2a5aa910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.310 243456 DEBUG nova.network.neutron [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updating instance_info_cache with network_info: [{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.321 243456 DEBUG nova.network.neutron [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updated VIF entry in instance network info cache for port fa6d7d29-c113-4729-a953-5dc14a05cd16. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.322 243456 DEBUG nova.network.neutron [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updating instance_info_cache with network_info: [{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.325 243456 DEBUG oslo_concurrency.lockutils [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.336 243456 DEBUG oslo_concurrency.lockutils [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.386 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.3855097, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.387 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.405 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.411 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.3856285, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.411 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.429 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.433 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.456 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:37 np0005634017 podman[303903]: 2026-02-28 10:13:37.46810868 +0000 UTC m=+0.059526055 container create d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:13:37 np0005634017 systemd[1]: Started libpod-conmon-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f.scope.
Feb 28 05:13:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff72f63fb99d7b1ef7c7e48e92434ddd62cf45aed08cb6ac8121bfdf9bdc9341/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:37 np0005634017 podman[303903]: 2026-02-28 10:13:37.437572182 +0000 UTC m=+0.028989577 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:13:37 np0005634017 podman[303903]: 2026-02-28 10:13:37.541330799 +0000 UTC m=+0.132748194 container init d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:13:37 np0005634017 podman[303903]: 2026-02-28 10:13:37.54491117 +0000 UTC m=+0.136328545 container start d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:13:37 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : New worker (303965) forked
Feb 28 05:13:37 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : Loading success.
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.576 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.575528, f3def0af-1227-498f-a525-0df8d5bb3768 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.577 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.610 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.615 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.5767467, f3def0af-1227-498f-a525-0df8d5bb3768 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.616 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.629 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8b97a75-8d54-4bd6-8372-eaff2a5aa910 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.631 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.636 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.642 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.644 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07fee492-5ca3-4be7-adf9-c3c2414486c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.648 243456 DEBUG nova.compute.manager [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.648 243456 DEBUG oslo_concurrency.lockutils [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.648 243456 DEBUG oslo_concurrency.lockutils [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.649 243456 DEBUG oslo_concurrency.lockutils [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.649 243456 DEBUG nova.compute.manager [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Processing event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.650 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.654 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.656 243456 INFO nova.virt.libvirt.driver [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance spawned successfully.#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.657 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.662 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.6537325, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.665 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[949b216f-aeff-46e2-8df9-a495461c6022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[79cb4d80-82e2-4506-85ba-62f9cb604423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.678 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.679 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.679 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.680 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.681 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.681 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.686 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.690 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[115db8df-ab8e-4e55-b0d6-27329953d14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea2e562-feff-4ee0-87ab-4c5e7286ee26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303979, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.715 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d60a5932-3af2-4446-865b-e7d167091276]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512429, 'tstamp': 512429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303980, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512431, 'tstamp': 512431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303980, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.725 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.728 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.728 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.729 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.730 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.742 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 8.76 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.743 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.795 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 9.93 seconds to build instance.#033[00m
Feb 28 05:13:37 np0005634017 nova_compute[243452]: 2026-02-28 10:13:37.821 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.094 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Successfully updated port: 41441957-9492-481d-847c-895c9fd2ef8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:13:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.120 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.121 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquired lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.121 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 329 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.5 MiB/s wr, 178 op/s
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.274 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:38Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:18:0c 10.100.0.3
Feb 28 05:13:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:38Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:18:0c 10.100.0.3
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.655 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273603.6542428, 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.655 243456 INFO nova.compute.manager [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.674 243456 DEBUG nova.compute.manager [None req-e1736646-d36f-46a5-b5f5-26eb25670736 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:38 np0005634017 nova_compute[243452]: 2026-02-28 10:13:38.743 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 359 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.3 MiB/s wr, 204 op/s
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.285 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.285 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.286 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.286 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.286 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] No waiting events found dispatching network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 WARNING nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received unexpected event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-changed-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Refreshing instance network info cache due to event network-changed-41441957-9492-481d-847c-895c9fd2ef8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:40 np0005634017 nova_compute[243452]: 2026-02-28 10:13:40.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017718529960501666 of space, bias 1.0, pg target 0.5315558988150499 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938815784398454 of space, bias 1.0, pg target 0.7481644735319536 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.948993728318748e-07 of space, bias 4.0, pg target 0.0009538792473982498 quantized to 16 (current 16)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:13:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.170 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updating instance_info_cache with network_info: [{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.222 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Releasing lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.222 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance network_info: |[{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.223 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.223 243456 DEBUG nova.network.neutron [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Refreshing network info cache for port 41441957-9492-481d-847c-895c9fd2ef8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.225 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start _get_guest_xml network_info=[{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.231 243456 WARNING nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.241 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.242 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.249 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.250 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.250 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.250 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.251 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.251 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.251 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.253 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.253 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.253 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.256 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.295 243456 DEBUG nova.compute.manager [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.296 243456 DEBUG oslo_concurrency.lockutils [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.297 243456 DEBUG oslo_concurrency.lockutils [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.297 243456 DEBUG oslo_concurrency.lockutils [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.297 243456 DEBUG nova.compute.manager [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Processing event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.298 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.303 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.308 243456 INFO nova.virt.libvirt.driver [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance spawned successfully.#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.309 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.330 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273621.3302917, f3def0af-1227-498f-a525-0df8d5bb3768 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.350 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.351 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.352 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.353 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.353 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.354 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.361 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.367 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.400 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.423 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 11.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.424 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.496 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 13.62 seconds to build instance.#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.521 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303814099' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.889 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.926 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:41 np0005634017 nova_compute[243452]: 2026-02-28 10:13:41.936 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 364 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 172 op/s
Feb 28 05:13:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:13:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:13:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.263752335 +0000 UTC m=+0.059075092 container create aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:13:42 np0005634017 systemd[1]: Started libpod-conmon-aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa.scope.
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.229872602 +0000 UTC m=+0.025195369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:13:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.380716234 +0000 UTC m=+0.176039021 container init aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.391091445 +0000 UTC m=+0.186414232 container start aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.395136549 +0000 UTC m=+0.190482767 container attach aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:13:42 np0005634017 naughty_heisenberg[304197]: 167 167
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.403392641 +0000 UTC m=+0.198715428 container died aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:13:42 np0005634017 systemd[1]: libpod-aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa.scope: Deactivated successfully.
Feb 28 05:13:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-642cf929cb620ccf00e0a5bc1a2462b8f794c620ecae56d46cc8ab567c9d03b9-merged.mount: Deactivated successfully.
Feb 28 05:13:42 np0005634017 podman[304181]: 2026-02-28 10:13:42.449404815 +0000 UTC m=+0.244727572 container remove aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:13:42 np0005634017 systemd[1]: libpod-conmon-aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa.scope: Deactivated successfully.
Feb 28 05:13:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/994772234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.543 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.545 243456 DEBUG nova.virt.libvirt.vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:34Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.546 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.547 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.548 243456 DEBUG nova.objects.instance [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.564 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <uuid>1e13ffbf-dba5-421b-afc3-84eb471e2d44</uuid>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <name>instance-00000049</name>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-933663289</nova:name>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:41</nova:creationTime>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <nova:port uuid="41441957-9492-481d-847c-895c9fd2ef8f">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <entry name="serial">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <entry name="uuid">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:6c:7c:88"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <target dev="tap41441957-94"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log" append="off"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:42 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:42 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:42 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:42 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.564 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Preparing to wait for external event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.564 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.565 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.565 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.566 243456 DEBUG nova.virt.libvirt.vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-
ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:34Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.566 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.567 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.568 243456 DEBUG os_vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.568 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.569 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.574 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41441957-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.574 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41441957-94, col_values=(('external_ids', {'iface-id': '41441957-9492-481d-847c-895c9fd2ef8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:7c:88', 'vm-uuid': '1e13ffbf-dba5-421b-afc3-84eb471e2d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:42 np0005634017 NetworkManager[49805]: <info>  [1772273622.5785] manager: (tap41441957-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.591 243456 INFO os_vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')#033[00m
Feb 28 05:13:42 np0005634017 podman[304222]: 2026-02-28 10:13:42.630359333 +0000 UTC m=+0.041666782 container create daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.649 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.651 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.651 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:6c:7c:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.651 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Using config drive#033[00m
Feb 28 05:13:42 np0005634017 systemd[1]: Started libpod-conmon-daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997.scope.
Feb 28 05:13:42 np0005634017 nova_compute[243452]: 2026-02-28 10:13:42.674 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:42 np0005634017 podman[304222]: 2026-02-28 10:13:42.612285945 +0000 UTC m=+0.023593424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:13:42 np0005634017 podman[304222]: 2026-02-28 10:13:42.731862757 +0000 UTC m=+0.143170206 container init daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:13:42 np0005634017 podman[304222]: 2026-02-28 10:13:42.737613919 +0000 UTC m=+0.148921358 container start daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:13:42 np0005634017 podman[304222]: 2026-02-28 10:13:42.741876929 +0000 UTC m=+0.153184378 container attach daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:13:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:43 np0005634017 flamboyant_kepler[304260]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:13:43 np0005634017 flamboyant_kepler[304260]: --> All data devices are unavailable
Feb 28 05:13:43 np0005634017 systemd[1]: libpod-daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997.scope: Deactivated successfully.
Feb 28 05:13:43 np0005634017 podman[304222]: 2026-02-28 10:13:43.235811666 +0000 UTC m=+0.647119205 container died daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 05:13:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5-merged.mount: Deactivated successfully.
Feb 28 05:13:43 np0005634017 podman[304222]: 2026-02-28 10:13:43.283741534 +0000 UTC m=+0.695049023 container remove daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:13:43 np0005634017 systemd[1]: libpod-conmon-daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997.scope: Deactivated successfully.
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.394 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating config drive at /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.401 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp29cdv40p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.552 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp29cdv40p" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.574 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.577 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.752 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.753 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting local config drive /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config because it was imported into RBD.#033[00m
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.759029848 +0000 UTC m=+0.046536979 container create a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:13:43 np0005634017 kernel: tap41441957-94: entered promiscuous mode
Feb 28 05:13:43 np0005634017 NetworkManager[49805]: <info>  [1772273623.8057] manager: (tap41441957-94): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Feb 28 05:13:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:43Z|00600|binding|INFO|Claiming lport 41441957-9492-481d-847c-895c9fd2ef8f for this chassis.
Feb 28 05:13:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:43Z|00601|binding|INFO|41441957-9492-481d-847c-895c9fd2ef8f: Claiming fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:43 np0005634017 systemd[1]: Started libpod-conmon-a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8.scope.
Feb 28 05:13:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:43Z|00602|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f ovn-installed in OVS
Feb 28 05:13:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:43Z|00603|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f up in Southbound
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.828 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.829 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis#033[00m
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.736113554 +0000 UTC m=+0.023620785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.830 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008#033[00m
Feb 28 05:13:43 np0005634017 nova_compute[243452]: 2026-02-28 10:13:43.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.843 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5990e12-e78c-4a6f-8f5f-d5bd5e762dd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.843 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:13:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.847 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.847 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa076858-21c1-4a97-9849-e15cb91ba2ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a06122-2859-455d-bee4-e2b752f11843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.859 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c4a5fd-eaa2-4a28-8140-e2df318c7ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 systemd-udevd[304425]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:43 np0005634017 systemd-machined[209480]: New machine qemu-81-instance-00000049.
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.872043756 +0000 UTC m=+0.159550927 container init a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:13:43 np0005634017 NetworkManager[49805]: <info>  [1772273623.8767] device (tap41441957-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:13:43 np0005634017 NetworkManager[49805]: <info>  [1772273623.8773] device (tap41441957-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:13:43 np0005634017 systemd[1]: Started Virtual Machine qemu-81-instance-00000049.
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.879321671 +0000 UTC m=+0.166828802 container start a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.883757506 +0000 UTC m=+0.171264677 container attach a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 28 05:13:43 np0005634017 amazing_villani[304417]: 167 167
Feb 28 05:13:43 np0005634017 systemd[1]: libpod-a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8.scope: Deactivated successfully.
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.886670877 +0000 UTC m=+0.174178008 container died a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.893 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2411bc-f4c9-413a-b38a-abe131139da4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-efdebb9e5b886c339c37ef3a4ca8f2244df9adac3e0b9458bae6bccc757b4ce4-merged.mount: Deactivated successfully.
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.925 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[576d72ce-49d1-4537-b066-ba58668d4349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 podman[304395]: 2026-02-28 10:13:43.929307306 +0000 UTC m=+0.216814437 container remove a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:13:43 np0005634017 NetworkManager[49805]: <info>  [1772273623.9334] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Feb 28 05:13:43 np0005634017 systemd-udevd[304430]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.932 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e363986f-1548-4d28-a5f5-25544a4574b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 systemd[1]: libpod-conmon-a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8.scope: Deactivated successfully.
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.966 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b5c415-7a91-44d5-99c9-13ebb42e69a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.972 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2059b610-226f-4708-a041-544a84895210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:43 np0005634017 NetworkManager[49805]: <info>  [1772273623.9964] device (tap77a5b13a-e0): carrier: link connected
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.003 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[11fb5b18-7596-46bc-a215-52583df8ef53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.019 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb419e6-fc7f-459b-96fe-d3aee102fb54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513128, 'reachable_time': 34791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304471, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.032 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b84c89-8a42-4b94-a17f-280b75f98b65]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513128, 'tstamp': 513128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304472, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.056 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d728aa9b-e3e2-419d-9f06-b4f273031520]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513128, 'reachable_time': 34791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304474, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.084 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbbdced-f2dc-4f63-8164-f4647473f5df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 podman[304479]: 2026-02-28 10:13:44.121977194 +0000 UTC m=+0.052764825 container create 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:13:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 372 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 217 op/s
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.150 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16c25420-e6b4-473e-8f4b-d219eda9715b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.152 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.152 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:44 np0005634017 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.153 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:44 np0005634017 NetworkManager[49805]: <info>  [1772273624.1563] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:44Z|00604|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.165 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.166 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e57b1c6-bd18-40d1-a9cc-955c98d9a4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.167 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:13:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.168 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:13:44 np0005634017 systemd[1]: Started libpod-conmon-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope.
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.187 243456 DEBUG nova.network.neutron [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updated VIF entry in instance network info cache for port 41441957-9492-481d-847c-895c9fd2ef8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.188 243456 DEBUG nova.network.neutron [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updating instance_info_cache with network_info: [{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:44 np0005634017 podman[304479]: 2026-02-28 10:13:44.101754725 +0000 UTC m=+0.032542396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.209 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:44 np0005634017 podman[304479]: 2026-02-28 10:13:44.241596507 +0000 UTC m=+0.172384148 container init 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:13:44 np0005634017 podman[304479]: 2026-02-28 10:13:44.250537869 +0000 UTC m=+0.181325510 container start 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.251 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:44 np0005634017 podman[304479]: 2026-02-28 10:13:44.254219192 +0000 UTC m=+0.185006863 container attach 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.304 243456 INFO nova.compute.manager [None req-c46f4e69-1731-4d16-80bd-882275079361 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Get console output#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.313 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.325 243456 DEBUG nova.compute.manager [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.326 243456 DEBUG oslo_concurrency.lockutils [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.326 243456 DEBUG oslo_concurrency.lockutils [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.327 243456 DEBUG oslo_concurrency.lockutils [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.327 243456 DEBUG nova.compute.manager [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] No waiting events found dispatching network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.327 243456 WARNING nova.compute.manager [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received unexpected event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]: {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:    "0": [
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:        {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "devices": [
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "/dev/loop3"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            ],
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_name": "ceph_lv0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_size": "21470642176",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "name": "ceph_lv0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "tags": {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cluster_name": "ceph",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.crush_device_class": "",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.encrypted": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.objectstore": "bluestore",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osd_id": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.type": "block",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.vdo": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.with_tpm": "0"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            },
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "type": "block",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "vg_name": "ceph_vg0"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:        }
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:    ],
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:    "1": [
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:        {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "devices": [
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "/dev/loop4"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            ],
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_name": "ceph_lv1",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_size": "21470642176",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "name": "ceph_lv1",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "tags": {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cluster_name": "ceph",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.crush_device_class": "",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.encrypted": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.objectstore": "bluestore",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osd_id": "1",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.type": "block",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.vdo": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.with_tpm": "0"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            },
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "type": "block",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "vg_name": "ceph_vg1"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:        }
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:    ],
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:    "2": [
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:        {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "devices": [
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "/dev/loop5"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            ],
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_name": "ceph_lv2",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_size": "21470642176",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "name": "ceph_lv2",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "tags": {
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.cluster_name": "ceph",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.crush_device_class": "",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.encrypted": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.objectstore": "bluestore",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osd_id": "2",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.type": "block",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.vdo": "0",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:                "ceph.with_tpm": "0"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            },
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "type": "block",
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:            "vg_name": "ceph_vg2"
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:        }
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]:    ]
Feb 28 05:13:44 np0005634017 unruffled_hawking[304502]: }
Feb 28 05:13:44 np0005634017 systemd[1]: libpod-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope: Deactivated successfully.
Feb 28 05:13:44 np0005634017 podman[304533]: 2026-02-28 10:13:44.540256975 +0000 UTC m=+0.043100483 container create e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:13:44 np0005634017 conmon[304502]: conmon 71cfe95c4972daa3f663 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope/container/memory.events
Feb 28 05:13:44 np0005634017 systemd[1]: Started libpod-conmon-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6.scope.
Feb 28 05:13:44 np0005634017 podman[304545]: 2026-02-28 10:13:44.601083346 +0000 UTC m=+0.033693739 container died 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:13:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:44 np0005634017 podman[304533]: 2026-02-28 10:13:44.517043043 +0000 UTC m=+0.019886571 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:13:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c17d24a091a535ceaf27eeb52692c29dbc8d488028fb382fe11058cd494becd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.619 243456 INFO nova.compute.manager [None req-4dac448d-5b7c-43e2-bc18-6f2aa140938b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Pausing#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.624 243456 DEBUG nova.objects.instance [None req-4dac448d-5b7c-43e2-bc18-6f2aa140938b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:44 np0005634017 podman[304533]: 2026-02-28 10:13:44.634873846 +0000 UTC m=+0.137717374 container init e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:13:44 np0005634017 podman[304533]: 2026-02-28 10:13:44.640398881 +0000 UTC m=+0.143242389 container start e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:13:44 np0005634017 podman[304545]: 2026-02-28 10:13:44.654356794 +0000 UTC m=+0.086967157 container remove 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:13:44 np0005634017 systemd[1]: libpod-conmon-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope: Deactivated successfully.
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273624.6633108, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:44 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : New worker (304566) forked
Feb 28 05:13:44 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : Loading success.
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.665 243456 DEBUG nova.compute.manager [None req-4dac448d-5b7c-43e2-bc18-6f2aa140938b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.694 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:44 np0005634017 nova_compute[243452]: 2026-02-28 10:13:44.701 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7-merged.mount: Deactivated successfully.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.027 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.032 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.032 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.033 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.035 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.040 243456 INFO nova.compute.manager [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Terminating instance#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.048 243456 DEBUG nova.compute.manager [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:45 np0005634017 kernel: tapb8b97a75-8d (unregistering): left promiscuous mode
Feb 28 05:13:45 np0005634017 NetworkManager[49805]: <info>  [1772273625.0965] device (tapb8b97a75-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:45Z|00605|binding|INFO|Releasing lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 from this chassis (sb_readonly=0)
Feb 28 05:13:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:45Z|00606|binding|INFO|Setting lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 down in Southbound
Feb 28 05:13:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:45Z|00607|binding|INFO|Removing iface tapb8b97a75-8d ovn-installed in OVS
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.109 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:7a:dc 10.100.0.7'], port_security=['fa:16:3e:dd:7a:dc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a16c1faa-2568-47fc-8006-91c68ae7ae5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8b97a75-8d54-4bd6-8372-eaff2a5aa910) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.110 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8b97a75-8d54-4bd6-8372-eaff2a5aa910 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.111 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.129 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[09d3445f-b8d4-410d-98a6-9c614fae689a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000047.scope: Deactivated successfully.
Feb 28 05:13:45 np0005634017 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000047.scope: Consumed 7.934s CPU time.
Feb 28 05:13:45 np0005634017 systemd-machined[209480]: Machine qemu-80-instance-00000047 terminated.
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.148344444 +0000 UTC m=+0.070115553 container create c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.156 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b421b61a-2b48-4f21-9cad-556f77151429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.162 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f09e00a4-2204-4ed6-9e30-fbdca27bc10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.170 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.171 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.171 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.172 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.172 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.173 243456 INFO nova.compute.manager [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Terminating instance#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.175 243456 DEBUG nova.compute.manager [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.187 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6d5925-7b99-456b-80fa-21e492181be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 systemd[1]: Started libpod-conmon-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope.
Feb 28 05:13:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.212 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273625.211634, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.213 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Started (Lifecycle Event)#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a6d168-ae5d-44da-b82c-0070d9634548]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304707, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.122905058 +0000 UTC m=+0.044676197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:13:45 np0005634017 kernel: tapfa6d7d29-c1 (unregistering): left promiscuous mode
Feb 28 05:13:45 np0005634017 NetworkManager[49805]: <info>  [1772273625.2228] device (tapfa6d7d29-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.233049065 +0000 UTC m=+0.154820194 container init c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.234 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff501dcd-2214-4606-8da1-7371147524c9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512429, 'tstamp': 512429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304712, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512431, 'tstamp': 512431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304712, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:45Z|00608|binding|INFO|Releasing lport fa6d7d29-c113-4729-a953-5dc14a05cd16 from this chassis (sb_readonly=0)
Feb 28 05:13:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:45Z|00609|binding|INFO|Setting lport fa6d7d29-c113-4729-a953-5dc14a05cd16 down in Southbound
Feb 28 05:13:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:45Z|00610|binding|INFO|Removing iface tapfa6d7d29-c1 ovn-installed in OVS
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.237 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.239 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.240 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:d1:ad 10.100.0.3'], port_security=['fa:16:3e:2c:d1:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f3def0af-1227-498f-a525-0df8d5bb3768', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fa6d7d29-c113-4729-a953-5dc14a05cd16) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.24211185 +0000 UTC m=+0.163882979 container start c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.242 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.244 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273625.2117395, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.244 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.245465565 +0000 UTC m=+0.167236704 container attach c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:13:45 np0005634017 hardcore_bhabha[304708]: 167 167
Feb 28 05:13:45 np0005634017 systemd[1]: libpod-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope: Deactivated successfully.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.251 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.251 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.252 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:13:45 np0005634017 conmon[304708]: conmon c976f335fdd4ab45ad54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope/container/memory.events
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.254674603 +0000 UTC m=+0.176445712 container died c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.255 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fa6d7d29-c113-4729-a953-5dc14a05cd16 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.256 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2689f4-1b74-4ff4-af00-650dd9202860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace which is not needed anymore#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.273 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:13:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3026c4219fe373d42a74d44c6897bf3069ef98694db766c78462032acc5029ee-merged.mount: Deactivated successfully.
Feb 28 05:13:45 np0005634017 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000048.scope: Deactivated successfully.
Feb 28 05:13:45 np0005634017 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000048.scope: Consumed 4.636s CPU time.
Feb 28 05:13:45 np0005634017 systemd-machined[209480]: Machine qemu-79-instance-00000048 terminated.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.283 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:13:45 np0005634017 podman[304656]: 2026-02-28 10:13:45.293482325 +0000 UTC m=+0.215253434 container remove c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.305 243456 INFO nova.virt.libvirt.driver [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance destroyed successfully.#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.306 243456 DEBUG nova.objects.instance [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid a16c1faa-2568-47fc-8006-91c68ae7ae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:13:45 np0005634017 systemd[1]: libpod-conmon-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope: Deactivated successfully.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.323 243456 DEBUG nova.virt.libvirt.vif [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-1',id=71,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:37Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=a16c1faa-2568-47fc-8006-91c68ae7ae5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.324 243456 DEBUG nova.network.os_vif_util [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.325 243456 DEBUG nova.network.os_vif_util [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.325 243456 DEBUG os_vif [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.328 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8b97a75-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.335 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.339 243456 INFO os_vif [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d')#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.402 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.421 243456 INFO nova.virt.libvirt.driver [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance destroyed successfully.#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.422 243456 DEBUG nova.objects.instance [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid f3def0af-1227-498f-a525-0df8d5bb3768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:45 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : haproxy version is 2.8.14-c23fe91
Feb 28 05:13:45 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : path to executable is /usr/sbin/haproxy
Feb 28 05:13:45 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [WARNING]  (303963) : Exiting Master process...
Feb 28 05:13:45 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [ALERT]    (303963) : Current worker (303965) exited with code 143 (Terminated)
Feb 28 05:13:45 np0005634017 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [WARNING]  (303963) : All workers exited. Exiting... (0)
Feb 28 05:13:45 np0005634017 systemd[1]: libpod-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f.scope: Deactivated successfully.
Feb 28 05:13:45 np0005634017 podman[304778]: 2026-02-28 10:13:45.442348391 +0000 UTC m=+0.067785927 container died d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.444 243456 DEBUG nova.virt.libvirt.vif [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-2',id=72,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-28T10:13:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:41Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=f3def0af-1227-498f-a525-0df8d5bb3768,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.445 243456 DEBUG nova.network.os_vif_util [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.446 243456 DEBUG nova.network.os_vif_util [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.447 243456 DEBUG os_vif [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.448 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa6d7d29-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.458 243456 INFO os_vif [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1')#033[00m
Feb 28 05:13:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:13:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741424992' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:13:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:13:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741424992' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:13:45 np0005634017 podman[304802]: 2026-02-28 10:13:45.464673278 +0000 UTC m=+0.044741269 container create 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:13:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f-userdata-shm.mount: Deactivated successfully.
Feb 28 05:13:45 np0005634017 podman[304778]: 2026-02-28 10:13:45.491792411 +0000 UTC m=+0.117229947 container cleanup d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:13:45 np0005634017 systemd[1]: libpod-conmon-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f.scope: Deactivated successfully.
Feb 28 05:13:45 np0005634017 systemd[1]: Started libpod-conmon-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope.
Feb 28 05:13:45 np0005634017 podman[304802]: 2026-02-28 10:13:45.447987269 +0000 UTC m=+0.028055280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:13:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:13:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:13:45 np0005634017 podman[304856]: 2026-02-28 10:13:45.57107862 +0000 UTC m=+0.056030536 container remove d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.577 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b396425-d6ba-4db4-9ccb-a0cc2ba7e8cb]: (4, ('Sat Feb 28 10:13:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f)\nd4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f\nSat Feb 28 10:13:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f)\nd4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 podman[304802]: 2026-02-28 10:13:45.578406086 +0000 UTC m=+0.158474117 container init 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.580 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c580ec-e32b-4ff2-8938-e121ca111540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.583 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:45 np0005634017 podman[304802]: 2026-02-28 10:13:45.585466695 +0000 UTC m=+0.165534686 container start 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:13:45 np0005634017 kernel: tap2ee6adef-20: left promiscuous mode
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 podman[304802]: 2026-02-28 10:13:45.589753665 +0000 UTC m=+0.169821676 container attach 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.598 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c35337-9d43-4c52-a136-a3dbb1f8cfa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.612 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[584696a7-1640-471d-a157-112e40597b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.615 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9044428-b5db-46eb-ad4f-340aa6d3fae0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.637 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16150e7a-7455-4895-ab21-07b0c0e6ce34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512405, 'reachable_time': 16566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304882, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.640 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:13:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.641 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[23fb139d-be34-4538-8aef-bc7380056857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.737 243456 INFO nova.virt.libvirt.driver [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deleting instance files /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d_del#033[00m
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.738 243456 INFO nova.virt.libvirt.driver [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deletion of /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d_del complete#033[00m
Feb 28 05:13:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ff72f63fb99d7b1ef7c7e48e92434ddd62cf45aed08cb6ac8121bfdf9bdc9341-merged.mount: Deactivated successfully.
Feb 28 05:13:45 np0005634017 systemd[1]: run-netns-ovnmeta\x2d2ee6adef\x2d26da\x2d41a9\x2d91a7\x2df9a806d37d26.mount: Deactivated successfully.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.791 243456 INFO nova.virt.libvirt.driver [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deleting instance files /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768_del
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.792 243456 INFO nova.virt.libvirt.driver [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deletion of /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768_del complete
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.840 243456 INFO nova.compute.manager [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 0.79 seconds to destroy the instance on the hypervisor.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.840 243456 DEBUG oslo.service.loopingcall [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.840 243456 DEBUG nova.compute.manager [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.841 243456 DEBUG nova.network.neutron [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.854 243456 INFO nova.compute.manager [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.855 243456 DEBUG oslo.service.loopingcall [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.855 243456 DEBUG nova.compute.manager [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:13:45 np0005634017 nova_compute[243452]: 2026-02-28 10:13:45.856 243456 DEBUG nova.network.neutron [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:13:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 364 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Feb 28 05:13:46 np0005634017 lvm[304954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:13:46 np0005634017 lvm[304954]: VG ceph_vg0 finished
Feb 28 05:13:46 np0005634017 lvm[304956]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:13:46 np0005634017 lvm[304956]: VG ceph_vg1 finished
Feb 28 05:13:46 np0005634017 lvm[304957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:13:46 np0005634017 lvm[304957]: VG ceph_vg2 finished
Feb 28 05:13:46 np0005634017 hardcore_rhodes[304866]: {}
Feb 28 05:13:46 np0005634017 systemd[1]: libpod-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope: Deactivated successfully.
Feb 28 05:13:46 np0005634017 systemd[1]: libpod-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope: Consumed 1.279s CPU time.
Feb 28 05:13:46 np0005634017 podman[304802]: 2026-02-28 10:13:46.460901541 +0000 UTC m=+1.040969562 container died 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:13:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6-merged.mount: Deactivated successfully.
Feb 28 05:13:46 np0005634017 podman[304802]: 2026-02-28 10:13:46.513812748 +0000 UTC m=+1.093880759 container remove 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:13:46 np0005634017 systemd[1]: libpod-conmon-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope: Deactivated successfully.
Feb 28 05:13:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:13:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:13:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:13:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:13:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:13:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.632 243456 DEBUG nova.network.neutron [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.667 243456 DEBUG nova.network.neutron [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.669 243456 INFO nova.compute.manager [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 1.81 seconds to deallocate network for instance.
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.696 243456 INFO nova.compute.manager [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 1.86 seconds to deallocate network for instance.
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.750 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.751 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.787 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.804 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-unplugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.804 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.805 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.805 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.805 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] No waiting events found dispatching network-vif-unplugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.806 243456 WARNING nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received unexpected event network-vif-unplugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 for instance with vm_state deleted and task_state None.
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.806 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.807 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.807 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.807 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.808 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] No waiting events found dispatching network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.808 243456 WARNING nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received unexpected event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 for instance with vm_state deleted and task_state None.
Feb 28 05:13:47 np0005634017 nova_compute[243452]: 2026-02-28 10:13:47.876 243456 DEBUG oslo_concurrency.processutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.032 243456 DEBUG nova.compute.manager [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.033 243456 DEBUG oslo_concurrency.lockutils [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.033 243456 DEBUG oslo_concurrency.lockutils [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.033 243456 DEBUG oslo_concurrency.lockutils [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.034 243456 DEBUG nova.compute.manager [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Processing event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.035 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.042 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.045 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273628.044446, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.046 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Resumed (Lifecycle Event)
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.053 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance spawned successfully.
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.053 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.072 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.088 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.088 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.089 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.089 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.090 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.090 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:13:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.125 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:13:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 328 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.154 243456 INFO nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 13.46 seconds to spawn the instance on the hypervisor.
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.155 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.209 243456 INFO nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 14.53 seconds to build instance.
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.226 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.274 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.275 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.291 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.359 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675098835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.459 243456 DEBUG oslo_concurrency.processutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.466 243456 DEBUG nova.compute.provider_tree [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.483 243456 DEBUG nova.scheduler.client.report [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.506 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.509 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.552 243456 INFO nova.scheduler.client.report [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance f3def0af-1227-498f-a525-0df8d5bb3768
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.647 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.651 243456 DEBUG oslo_concurrency.processutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:48 np0005634017 nova_compute[243452]: 2026-02-28 10:13:48.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576097265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.303 243456 DEBUG oslo_concurrency.processutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.313 243456 DEBUG nova.compute.provider_tree [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.329 243456 DEBUG nova.scheduler.client.report [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.357 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.364 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.373 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.374 243456 INFO nova.compute.claims [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.401 243456 INFO nova.scheduler.client.report [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance a16c1faa-2568-47fc-8006-91c68ae7ae5d
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.495 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.540 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.687 243456 INFO nova.compute.manager [None req-70be3216-ee33-411e-97b0-116e09b04da6 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Get console output
Feb 28 05:13:49 np0005634017 nova_compute[243452]: 2026-02-28 10:13:49.704 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 05:13:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894740837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.098 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.103 243456 DEBUG nova.compute.provider_tree [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.125 243456 DEBUG nova.scheduler.client.report [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.146 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.147 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:13:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 308 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.0 MiB/s wr, 270 op/s
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.211 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.212 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.234 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.255 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.357 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.360 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.360 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Creating image(s)
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.386 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.414 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.440 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.444 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.493 243456 DEBUG nova.policy [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec5caafc16ec43a493f7d553353a27c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '809bf856030f4316b385ba1c02291ca7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.525 243456 INFO nova.compute.manager [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Unpausing
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.527 243456 DEBUG nova.objects.instance [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.533 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-deleted-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.534 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-vif-deleted-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.534 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.535 243456 DEBUG oslo_concurrency.lockutils [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.535 243456 DEBUG oslo_concurrency.lockutils [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.535 243456 DEBUG oslo_concurrency.lockutils [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.536 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.536 243456 WARNING nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state None.
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.538 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.538 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.539 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.539 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.562 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.566 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.618 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273630.6172748, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.622 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Resumed (Lifecycle Event)
Feb 28 05:13:50 np0005634017 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.630 243456 DEBUG nova.virt.libvirt.guest [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.631 243456 DEBUG nova.compute.manager [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.645 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.649 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.686 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.824 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.858 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.859 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.895 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.905 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] resizing rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.965 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:50 np0005634017 nova_compute[243452]: 2026-02-28 10:13:50.965 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.002 243456 DEBUG nova.objects.instance [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'migration_context' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.007 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.007 243456 INFO nova.compute.claims [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.032 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.033 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Ensure instance console log exists: /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.033 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.034 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.034 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.160 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Successfully created port: 26c42747-4919-4440-9b73-cf3516525108 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.180 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.305 243456 INFO nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Rebuilding instance
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.521 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.525 243456 INFO nova.compute.manager [None req-9e455dd4-5a06-4f87-abb8-7898c6c5bc2e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Get console output
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.533 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.538 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.593 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_requests' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.613 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.626 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.639 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.655 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.660 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:13:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4213241902' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.839 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.845 243456 DEBUG nova.compute.provider_tree [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.860 243456 DEBUG nova.scheduler.client.report [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.885 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.886 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.935 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.936 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.956 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:13:51 np0005634017 nova_compute[243452]: 2026-02-28 10:13:51.974 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:13:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 279 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 228 KiB/s wr, 238 op/s
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.538 243456 DEBUG nova.policy [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec5caafc16ec43a493f7d553353a27c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '809bf856030f4316b385ba1c02291ca7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:13:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.542 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.543 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.543 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating image(s)#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.567 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.594 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.615 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.619 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.652 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Successfully updated port: 26c42747-4919-4440-9b73-cf3516525108 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.655 243456 DEBUG nova.compute.manager [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-changed-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.656 243456 DEBUG nova.compute.manager [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Refreshing instance network info cache due to event network-changed-26c42747-4919-4440-9b73-cf3516525108. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.656 243456 DEBUG oslo_concurrency.lockutils [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.656 243456 DEBUG oslo_concurrency.lockutils [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.659 243456 DEBUG nova.network.neutron [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Refreshing network info cache for port 26c42747-4919-4440-9b73-cf3516525108 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG nova.compute.manager [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG nova.compute.manager [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing instance network info cache due to event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG oslo_concurrency.lockutils [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG oslo_concurrency.lockutils [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.662 243456 DEBUG nova.network.neutron [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.691 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.692 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.692 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.693 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.714 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.718 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.762 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.768 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.772 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.774 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.774 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.774 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.776 243456 INFO nova.compute.manager [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Terminating instance#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.777 243456 DEBUG nova.compute.manager [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:13:53 np0005634017 kernel: tap9be79a2c-76 (unregistering): left promiscuous mode
Feb 28 05:13:53 np0005634017 NetworkManager[49805]: <info>  [1772273633.9145] device (tap9be79a2c-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:53Z|00611|binding|INFO|Releasing lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 from this chassis (sb_readonly=0)
Feb 28 05:13:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:53Z|00612|binding|INFO|Setting lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 down in Southbound
Feb 28 05:13:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:53Z|00613|binding|INFO|Removing iface tap9be79a2c-76 ovn-installed in OVS
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.931 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:18:0c 10.100.0.3'], port_security=['fa:16:3e:54:18:0c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c4a13c84-8fca-43c8-87c3-fde9f5d1c031', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aeb82654-4ae5-4b85-ab47-1d2fe984318d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eff969a-a040-4a86-b625-9ebfe779e412, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9be79a2c-76fa-4a58-a532-eac0151d2bb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:13:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.934 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 in datapath 5d8683e2-4377-476f-ae8b-6d3dd4e61943 unbound from our chassis#033[00m
Feb 28 05:13:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.936 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d8683e2-4377-476f-ae8b-6d3dd4e61943, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:13:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.938 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b24985bd-ff78-4ad7-a5a0-bd9caba1a2b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.940 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 namespace which is not needed anymore#033[00m
Feb 28 05:13:53 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.943 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:53 np0005634017 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Deactivated successfully.
Feb 28 05:13:53 np0005634017 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Consumed 12.040s CPU time.
Feb 28 05:13:53 np0005634017 systemd-machined[209480]: Machine qemu-78-instance-00000046 terminated.
Feb 28 05:13:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:53Z|00614|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 05:13:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:53Z|00615|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:53.999 243456 DEBUG nova.network.neutron [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.036 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance destroyed successfully.#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.037 243456 DEBUG nova.objects.instance [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.075 243456 DEBUG nova.virt.libvirt.vif [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132377693',display_name='tempest-TestNetworkAdvancedServerOps-server-1132377693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132377693',id=70,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF644+rgLhlFamGuaTmUUFO0KlVXAPKX08nnbDy2rOoT8ige3Fj9kAWfB90244fjvTph7JmDo+5JGi6o3TsUwr4tGSK5dI2cnuR44Pht5/KXvc5e6feRQxx/zv8GZOsh+w==',key_name='tempest-TestNetworkAdvancedServerOps-1238511245',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-8dn1q2v0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=c4a13c84-8fca-43c8-87c3-fde9f5d1c031,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.075 243456 DEBUG nova.network.os_vif_util [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.076 243456 DEBUG nova.network.os_vif_util [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.077 243456 DEBUG os_vif [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.079 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9be79a2c-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.084 243456 INFO os_vif [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76')#033[00m
Feb 28 05:13:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:54Z|00616|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 05:13:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:13:54Z|00617|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:54 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : haproxy version is 2.8.14-c23fe91
Feb 28 05:13:54 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : path to executable is /usr/sbin/haproxy
Feb 28 05:13:54 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [WARNING]  (302871) : Exiting Master process...
Feb 28 05:13:54 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [WARNING]  (302871) : Exiting Master process...
Feb 28 05:13:54 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [ALERT]    (302871) : Current worker (302874) exited with code 143 (Terminated)
Feb 28 05:13:54 np0005634017 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [WARNING]  (302871) : All workers exited. Exiting... (0)
Feb 28 05:13:54 np0005634017 systemd[1]: libpod-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612.scope: Deactivated successfully.
Feb 28 05:13:54 np0005634017 podman[305375]: 2026-02-28 10:13:54.117676213 +0000 UTC m=+0.059591436 container died a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.139 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612-userdata-shm.mount: Deactivated successfully.
Feb 28 05:13:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8da54b6f18717d512266281e970794ab514d96d93e0121c460ef59a108898240-merged.mount: Deactivated successfully.
Feb 28 05:13:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 292 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 692 KiB/s wr, 240 op/s
Feb 28 05:13:54 np0005634017 podman[305375]: 2026-02-28 10:13:54.160160918 +0000 UTC m=+0.102076141 container cleanup a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:13:54 np0005634017 systemd[1]: libpod-conmon-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612.scope: Deactivated successfully.
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.236 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] resizing rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:13:54 np0005634017 podman[305439]: 2026-02-28 10:13:54.244397927 +0000 UTC m=+0.062870459 container remove a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.249 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1906cf62-5f9f-4a71-8d67-4bacd0f671b7]: (4, ('Sat Feb 28 10:13:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 (a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612)\na9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612\nSat Feb 28 10:13:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 (a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612)\na9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46769a2d-6d0f-4d9c-9e78-8d9cc8301bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.254 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d8683e2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:13:54 np0005634017 kernel: tap5d8683e2-40: left promiscuous mode
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ecf8bc-35ef-4418-bb69-d5e94f99eab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.279 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16a004b4-f8f1-40e4-85ff-d4a1178a5c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d068ad3e-1d03-4558-a7be-0e94a40e391b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[349cd4b8-33fd-4892-9e2b-34661a05d740]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511194, 'reachable_time': 24977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305492, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 systemd[1]: run-netns-ovnmeta\x2d5d8683e2\x2d4377\x2d476f\x2dae8b\x2d6d3dd4e61943.mount: Deactivated successfully.
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.305 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:13:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.305 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[11b12927-238b-49df-8191-dc3195a2da8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.353 243456 DEBUG nova.objects.instance [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'migration_context' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.371 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.372 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Ensure instance console log exists: /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.372 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.373 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.373 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.376 243456 DEBUG nova.network.neutron [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.390 243456 DEBUG oslo_concurrency.lockutils [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.391 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquired lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.391 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.414 243456 INFO nova.virt.libvirt.driver [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deleting instance files /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_del#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.415 243456 INFO nova.virt.libvirt.driver [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deletion of /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_del complete#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.469 243456 INFO nova.compute.manager [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.470 243456 DEBUG oslo.service.loopingcall [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.470 243456 DEBUG nova.compute.manager [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.470 243456 DEBUG nova.network.neutron [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.596887) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634596940, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2464, "num_deletes": 514, "total_data_size": 3253611, "memory_usage": 3329568, "flush_reason": "Manual Compaction"}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634617148, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3191416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28205, "largest_seqno": 30668, "table_properties": {"data_size": 3180766, "index_size": 6312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25684, "raw_average_key_size": 19, "raw_value_size": 3157123, "raw_average_value_size": 2451, "num_data_blocks": 276, "num_entries": 1288, "num_filter_entries": 1288, "num_deletions": 514, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273454, "oldest_key_time": 1772273454, "file_creation_time": 1772273634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 20333 microseconds, and 10078 cpu microseconds.
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.617218) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3191416 bytes OK
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.617250) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.620517) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.620540) EVENT_LOG_v1 {"time_micros": 1772273634620531, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.620568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3242173, prev total WAL file size 3242173, number of live WAL files 2.
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.621730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3116KB)], [62(8552KB)]
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634621776, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11948708, "oldest_snapshot_seqno": -1}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5678 keys, 10204970 bytes, temperature: kUnknown
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634679755, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10204970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10163137, "index_size": 26555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142427, "raw_average_key_size": 25, "raw_value_size": 10057371, "raw_average_value_size": 1771, "num_data_blocks": 1083, "num_entries": 5678, "num_filter_entries": 5678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.680142) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10204970 bytes
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.681825) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.6 rd, 175.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6717, records dropped: 1039 output_compression: NoCompression
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.681848) EVENT_LOG_v1 {"time_micros": 1772273634681836, "job": 34, "event": "compaction_finished", "compaction_time_micros": 58110, "compaction_time_cpu_micros": 22512, "output_level": 6, "num_output_files": 1, "total_output_size": 10204970, "num_input_records": 6717, "num_output_records": 5678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634682378, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634683325, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.621619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:13:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.860 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:54 np0005634017 nova_compute[243452]: 2026-02-28 10:13:54.981 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Successfully created port: a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:13:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 314 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 262 op/s
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.254 243456 DEBUG nova.network.neutron [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updated VIF entry in instance network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.255 243456 DEBUG nova.network.neutron [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.289 243456 DEBUG oslo_concurrency.lockutils [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.397 243456 DEBUG nova.compute.manager [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-unplugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG oslo_concurrency.lockutils [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG oslo_concurrency.lockutils [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG oslo_concurrency.lockutils [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG nova.compute.manager [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] No waiting events found dispatching network-vif-unplugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG nova.compute.manager [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-unplugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.623 243456 DEBUG nova.network.neutron [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.645 243456 INFO nova.compute.manager [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 3.17 seconds to deallocate network for instance.#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.687 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.688 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.791 243456 DEBUG oslo_concurrency.processutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.831 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Successfully updated port: a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.837 243456 DEBUG nova.compute.manager [req-72990cc7-252c-4bd9-91cb-00639e508100 req-d2725763-5a6a-4098-a484-fccb0c1d78ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-deleted-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.849 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.849 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquired lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.849 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:13:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:57.852 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:13:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:13:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:13:57.854 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.859 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.888 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Releasing lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.889 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance network_info: |[{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.898 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start _get_guest_xml network_info=[{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.911 243456 WARNING nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.917 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.918 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.924 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.924 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.925 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.925 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.926 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.926 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.932 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:57 np0005634017 nova_compute[243452]: 2026-02-28 10:13:57.975 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:13:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 292 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Feb 28 05:13:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:13:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381478180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.431 243456 DEBUG oslo_concurrency.processutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.437 243456 DEBUG nova.compute.provider_tree [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.458 243456 DEBUG nova.scheduler.client.report [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.483 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164005645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.509 243456 INFO nova.scheduler.client.report [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance c4a13c84-8fca-43c8-87c3-fde9f5d1c031#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.517 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.551 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.557 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.669 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.963 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.987 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Releasing lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.987 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance network_info: |[{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.991 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start _get_guest_xml network_info=[{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:13:58 np0005634017 nova_compute[243452]: 2026-02-28 10:13:58.998 243456 WARNING nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.005 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.006 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.013 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.014 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.017 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.017 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.018 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.018 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.019 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.019 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.020 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.020 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.021 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.021 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.021 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.022 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.026 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:13:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490792730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.083 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.085 243456 DEBUG nova.virt.libvirt.vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-191562024',display_name='tempest-ServerRescueNegativeTestJSON-server-191562024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-191562024',id=74,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-55ob4e3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tem
pest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=40f8f3fa-1f1c-440e-a640-5a223b1ca9b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.085 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.086 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.088 243456 DEBUG nova.objects.instance [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.107 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <uuid>40f8f3fa-1f1c-440e-a640-5a223b1ca9b8</uuid>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <name>instance-0000004a</name>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-191562024</nova:name>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:57</nova:creationTime>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:user uuid="ec5caafc16ec43a493f7d553353a27c3">tempest-ServerRescueNegativeTestJSON-338743494-project-member</nova:user>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:project uuid="809bf856030f4316b385ba1c02291ca7">tempest-ServerRescueNegativeTestJSON-338743494</nova:project>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <nova:port uuid="26c42747-4919-4440-9b73-cf3516525108">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <entry name="serial">40f8f3fa-1f1c-440e-a640-5a223b1ca9b8</entry>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <entry name="uuid">40f8f3fa-1f1c-440e-a640-5a223b1ca9b8</entry>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:5f:43:e1"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <target dev="tap26c42747-49"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/console.log" append="off"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:13:59 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:13:59 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:13:59 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:13:59 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.108 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Preparing to wait for external event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.108 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.109 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.109 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.110 243456 DEBUG nova.virt.libvirt.vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-191562024',display_name='tempest-ServerRescueNegativeTestJSON-server-191562024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-191562024',id=74,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-55ob4e3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=40f8f3fa-1f1c-440e-a640-5a223b1ca9b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.110 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.111 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.111 243456 DEBUG os_vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.113 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.113 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.117 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26c42747-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.117 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26c42747-49, col_values=(('external_ids', {'iface-id': '26c42747-4919-4440-9b73-cf3516525108', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:43:e1', 'vm-uuid': '40f8f3fa-1f1c-440e-a640-5a223b1ca9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:13:59 np0005634017 NetworkManager[49805]: <info>  [1772273639.1204] manager: (tap26c42747-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.126 243456 INFO os_vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49')
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.185 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.185 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.185 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No VIF found with MAC fa:16:3e:5f:43:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.186 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Using config drive
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.206 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.492 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.493 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.494 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.495 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.495 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] No waiting events found dispatching network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.495 243456 WARNING nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received unexpected event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 for instance with vm_state deleted and task_state None.
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.496 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-changed-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.496 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Refreshing instance network info cache due to event network-changed-a2f66c0b-78f3-49cb-929b-5e9b4072beb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.496 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.497 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.497 243456 DEBUG nova.network.neutron [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Refreshing network info cache for port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:13:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:13:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488652800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.590 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.616 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.626 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.690 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Creating config drive at /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.700 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc481ioc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.853 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc481ioc6" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.879 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:13:59 np0005634017 nova_compute[243452]: 2026-02-28 10:13:59.884 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.071 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.073 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deleting local config drive /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config because it was imported into RBD.
Feb 28 05:14:00 np0005634017 kernel: tap26c42747-49: entered promiscuous mode
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00618|binding|INFO|Claiming lport 26c42747-4919-4440-9b73-cf3516525108 for this chassis.
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00619|binding|INFO|26c42747-4919-4440-9b73-cf3516525108: Claiming fa:16:3e:5f:43:e1 10.100.0.5
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.1281] manager: (tap26c42747-49): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.141 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:43:e1 10.100.0.5'], port_security=['fa:16:3e:5f:43:e1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40f8f3fa-1f1c-440e-a640-5a223b1ca9b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=26c42747-4919-4440-9b73-cf3516525108) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.142 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 26c42747-4919-4440-9b73-cf3516525108 in datapath 621843b6-256a-4ce5-83c3-83b888738508 bound to our chassis
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.143 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 05:14:00 np0005634017 systemd-udevd[305725]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 297 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f41958-6541-4b6e-a672-abfb6c7a67fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.164 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap621843b6-21 in ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.1674] device (tap26c42747-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.1682] device (tap26c42747-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.169 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap621843b6-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[671e6f7c-6071-49e4-9339-90100f842610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.171 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50ae1198-7b04-45c8-a157-2f446d36cb05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00620|binding|INFO|Setting lport 26c42747-4919-4440-9b73-cf3516525108 ovn-installed in OVS
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00621|binding|INFO|Setting lport 26c42747-4919-4440-9b73-cf3516525108 up in Southbound
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 systemd-machined[209480]: New machine qemu-82-instance-0000004a.
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.186 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8b33c79d-de28-4940-8557-285adbdf1ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 systemd[1]: Started Virtual Machine qemu-82-instance-0000004a.
Feb 28 05:14:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715874071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.196 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[060e94f6-f7eb-4e5b-97ef-a046ae29e3be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.220 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.222 243456 DEBUG nova.virt.libvirt.vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:52Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.223 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.224 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.225 243456 DEBUG nova.objects.instance [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.226 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1852a1-434a-4c9b-ae3e-6673f5f2cfe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.231 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a30074b0-36e1-4dd3-ad7f-752ec3759e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.2323] manager: (tap621843b6-20): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Feb 28 05:14:00 np0005634017 systemd-udevd[305730]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.240 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <uuid>dc2dbab8-312e-4130-8141-d848beeb6bec</uuid>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <name>instance-0000004b</name>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1249757035</nova:name>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:13:58</nova:creationTime>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:user uuid="ec5caafc16ec43a493f7d553353a27c3">tempest-ServerRescueNegativeTestJSON-338743494-project-member</nova:user>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:project uuid="809bf856030f4316b385ba1c02291ca7">tempest-ServerRescueNegativeTestJSON-338743494</nova:project>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <nova:port uuid="a2f66c0b-78f3-49cb-929b-5e9b4072beb0">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <entry name="serial">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <entry name="uuid">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:39:b7:32"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <target dev="tapa2f66c0b-78"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/console.log" append="off"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:00 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:00 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:00 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:00 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.240 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Preparing to wait for external event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.241 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.241 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.241 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.242 243456 DEBUG nova.virt.libvirt.vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:52Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.242 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.243 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.243 243456 DEBUG os_vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.244 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.244 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.250 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2f66c0b-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.250 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2f66c0b-78, col_values=(('external_ids', {'iface-id': 'a2f66c0b-78f3-49cb-929b-5e9b4072beb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:b7:32', 'vm-uuid': 'dc2dbab8-312e-4130-8141-d848beeb6bec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.2557] manager: (tapa2f66c0b-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.264 243456 INFO os_vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78')#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.272 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[79563ad7-70f7-4758-9b50-89dc933c08e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.276 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1833c74e-a00b-4013-a539-ec4f59b4938b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.2964] device (tap621843b6-20): carrier: link connected
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.301 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4648cd-3230-4d93-8b0f-573dee937da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.310 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.310 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.311 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No VIF found with MAC fa:16:3e:39:b7:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.311 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Using config drive#033[00m
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe49798-af5f-4504-8b91-32356f8955cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305766, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.336 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8502df1b-eae3-4e4d-9c4d-3832d67f432a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:7e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514758, 'tstamp': 514758}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305782, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.344 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273625.297358, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.345 243456 INFO nova.compute.manager [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad5cf27-ba8f-4df1-a94a-cbe0c4d0e202]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305786, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.372 243456 DEBUG nova.compute.manager [None req-819fbf67-d6fc-4dfd-a0c4-a3b5ed5249ea - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9711357-d166-43bd-b0c6-57c8326094f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.415 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273625.4146235, f3def0af-1227-498f-a525-0df8d5bb3768 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.416 243456 INFO nova.compute.manager [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.435 243456 DEBUG nova.compute.manager [None req-0f8ea9ad-92f8-4d3f-99e6-6fd73e986b25 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.444 243456 DEBUG nova.compute.manager [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG oslo_concurrency.lockutils [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG oslo_concurrency.lockutils [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG oslo_concurrency.lockutils [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG nova.compute.manager [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Processing event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[693b2532-eba4-4d7c-b72d-f898bd733bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:00 np0005634017 kernel: tap621843b6-20: entered promiscuous mode
Feb 28 05:14:00 np0005634017 NetworkManager[49805]: <info>  [1772273640.4603] manager: (tap621843b6-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.459 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.465 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:00Z|00622|binding|INFO|Releasing lport 92bcea78-9a21-4d44-99f4-fd3e41fc7e97 from this chassis (sb_readonly=0)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.480 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.481 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/621843b6-256a-4ce5-83c3-83b888738508.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/621843b6-256a-4ce5-83c3-83b888738508.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.482 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c521fdd0-dd71-4d88-ade3-645ac254b9ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.483 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-621843b6-256a-4ce5-83c3-83b888738508
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/621843b6-256a-4ce5-83c3-83b888738508.pid.haproxy
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.484 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'env', 'PROCESS_TAG=haproxy-621843b6-256a-4ce5-83c3-83b888738508', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/621843b6-256a-4ce5-83c3-83b888738508.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:00 np0005634017 podman[305857]: 2026-02-28 10:14:00.841202327 +0000 UTC m=+0.053893387 container create 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.851 243456 DEBUG nova.network.neutron [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updated VIF entry in instance network info cache for port a2f66c0b-78f3-49cb-929b-5e9b4072beb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.852 243456 DEBUG nova.network.neutron [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.860 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating config drive at /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.864 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqmvk2nss execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:00 np0005634017 systemd[1]: Started libpod-conmon-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6.scope.
Feb 28 05:14:00 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.901 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.903 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273640.8822312, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.903 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:00 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd78d30250744f4812bed5eda336963389aeb75643f7f79dbbc06f1fb2c979/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.906 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:00 np0005634017 podman[305857]: 2026-02-28 10:14:00.812317215 +0000 UTC m=+0.025008305 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.912 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.918 243456 INFO nova.virt.libvirt.driver [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance spawned successfully.#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.919 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:00 np0005634017 podman[305857]: 2026-02-28 10:14:00.919724744 +0000 UTC m=+0.132415844 container init 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.925 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:00 np0005634017 podman[305857]: 2026-02-28 10:14:00.926879605 +0000 UTC m=+0.139570695 container start 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.929 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.939 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.939 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.940 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.940 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.940 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.941 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.946 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273640.8829153, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:00 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : New worker (305889) forked
Feb 28 05:14:00 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : Loading success.
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.983 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.987 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273640.9116664, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:00 np0005634017 nova_compute[243452]: 2026-02-28 10:14:00.987 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.003 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqmvk2nss" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.029 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.036 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.084 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.086 243456 INFO nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 10.73 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.087 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.091 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.119 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.157 243456 INFO nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 12.82 seconds to build instance.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.179 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.180 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deleting local config drive /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config because it was imported into RBD.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.185 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:01 np0005634017 kernel: tapa2f66c0b-78: entered promiscuous mode
Feb 28 05:14:01 np0005634017 NetworkManager[49805]: <info>  [1772273641.2338] manager: (tapa2f66c0b-78): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:01 np0005634017 systemd-udevd[305746]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:01Z|00623|binding|INFO|Claiming lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for this chassis.
Feb 28 05:14:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:01Z|00624|binding|INFO|a2f66c0b-78f3-49cb-929b-5e9b4072beb0: Claiming fa:16:3e:39:b7:32 10.100.0.6
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:01Z|00625|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 ovn-installed in OVS
Feb 28 05:14:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:01Z|00626|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 up in Southbound
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.246 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:01 np0005634017 NetworkManager[49805]: <info>  [1772273641.2528] device (tapa2f66c0b-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:01 np0005634017 NetworkManager[49805]: <info>  [1772273641.2535] device (tapa2f66c0b-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.252 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 bound to our chassis#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508#033[00m
Feb 28 05:14:01 np0005634017 systemd-machined[209480]: New machine qemu-83-instance-0000004b.
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.274 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8329d044-7f53-4b06-a979-3243bb09585e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:01 np0005634017 systemd[1]: Started Virtual Machine qemu-83-instance-0000004b.
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.303 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3950d3c6-5cda-4132-9063-ccdbac519e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.308 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[52b2559c-8e10-44b4-bf0d-e39f156513ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.337 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6593b9-6702-4920-b5fd-102f2e36e61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.364 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[550440eb-ab92-49b2-8694-57cff066b0ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305959, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0592c1cf-dc80-4a07-85c3-34599fec0910]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305961, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305961, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.384 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.388 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.388 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.389 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.389 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.594 243456 DEBUG nova.compute.manager [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.595 243456 DEBUG oslo_concurrency.lockutils [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.595 243456 DEBUG oslo_concurrency.lockutils [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.596 243456 DEBUG oslo_concurrency.lockutils [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.596 243456 DEBUG nova.compute.manager [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Processing event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.821 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273641.8204968, dc2dbab8-312e-4130-8141-d848beeb6bec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.821 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.823 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.827 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.831 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance spawned successfully.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.831 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.849 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.855 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.864 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.864 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.865 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.866 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.866 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.867 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.891 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.892 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273641.8207939, dc2dbab8-312e-4130-8141-d848beeb6bec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.892 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.917 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.923 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273641.8274448, dc2dbab8-312e-4130-8141-d848beeb6bec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.924 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.928 243456 INFO nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 8.39 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.929 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.944 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.949 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:01 np0005634017 nova_compute[243452]: 2026-02-28 10:14:01.987 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:02 np0005634017 nova_compute[243452]: 2026-02-28 10:14:02.011 243456 INFO nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 11.07 seconds to build instance.#033[00m
Feb 28 05:14:02 np0005634017 nova_compute[243452]: 2026-02-28 10:14:02.028 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 310 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 178 op/s
Feb 28 05:14:02 np0005634017 nova_compute[243452]: 2026-02-28 10:14:02.776 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.413 243456 DEBUG nova.compute.manager [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.413 243456 DEBUG oslo_concurrency.lockutils [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 DEBUG oslo_concurrency.lockutils [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 DEBUG oslo_concurrency.lockutils [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 DEBUG nova.compute.manager [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] No waiting events found dispatching network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 WARNING nova.compute.manager [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received unexpected event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.731 243456 DEBUG nova.compute.manager [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.732 243456 DEBUG oslo_concurrency.lockutils [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.733 243456 DEBUG oslo_concurrency.lockutils [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.733 243456 DEBUG oslo_concurrency.lockutils [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.733 243456 DEBUG nova.compute.manager [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.734 243456 WARNING nova.compute.manager [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:03.901 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:03 np0005634017 nova_compute[243452]: 2026-02-28 10:14:03.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:03.902 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:14:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 325 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Feb 28 05:14:05 np0005634017 kernel: tap41441957-94 (unregistering): left promiscuous mode
Feb 28 05:14:05 np0005634017 NetworkManager[49805]: <info>  [1772273645.0280] device (tap41441957-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:05Z|00627|binding|INFO|Releasing lport 41441957-9492-481d-847c-895c9fd2ef8f from this chassis (sb_readonly=0)
Feb 28 05:14:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:05Z|00628|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f down in Southbound
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:05Z|00629|binding|INFO|Removing iface tap41441957-94 ovn-installed in OVS
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.042 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.044 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.045 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.046 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14dcb47c-c3fe-4bfc-aa8d-9d2420b398d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.047 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore#033[00m
Feb 28 05:14:05 np0005634017 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Deactivated successfully.
Feb 28 05:14:05 np0005634017 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Consumed 12.707s CPU time.
Feb 28 05:14:05 np0005634017 systemd-machined[209480]: Machine qemu-81-instance-00000049 terminated.
Feb 28 05:14:05 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : haproxy version is 2.8.14-c23fe91
Feb 28 05:14:05 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : path to executable is /usr/sbin/haproxy
Feb 28 05:14:05 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [WARNING]  (304564) : Exiting Master process...
Feb 28 05:14:05 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [ALERT]    (304564) : Current worker (304566) exited with code 143 (Terminated)
Feb 28 05:14:05 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [WARNING]  (304564) : All workers exited. Exiting... (0)
Feb 28 05:14:05 np0005634017 systemd[1]: libpod-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6.scope: Deactivated successfully.
Feb 28 05:14:05 np0005634017 podman[306029]: 2026-02-28 10:14:05.211603403 +0000 UTC m=+0.044133422 container died e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6-userdata-shm.mount: Deactivated successfully.
Feb 28 05:14:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1c17d24a091a535ceaf27eeb52692c29dbc8d488028fb382fe11058cd494becd-merged.mount: Deactivated successfully.
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 podman[306029]: 2026-02-28 10:14:05.25489313 +0000 UTC m=+0.087423159 container cleanup e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:14:05 np0005634017 systemd[1]: libpod-conmon-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6.scope: Deactivated successfully.
Feb 28 05:14:05 np0005634017 podman[306064]: 2026-02-28 10:14:05.334684074 +0000 UTC m=+0.042684991 container remove e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.339 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f30a20e4-5c91-4f2b-aa2e-3349d0545a88]: (4, ('Sat Feb 28 10:14:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6)\ne11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6\nSat Feb 28 10:14:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6)\ne11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2402ec6c-c603-4282-8882-2228608e8fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.342 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:05 np0005634017 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.344 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef409c92-8a92-4f5c-9b45-67d71f68ca53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[397691f9-20bd-4383-af23-9152f7c828a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.376 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76db14dd-54da-4196-8842-30c53d4f8ec1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.391 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0e7ec9-86f7-4f53-9a4b-01a0b22b5f8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513120, 'reachable_time': 35950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306084, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.397 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:14:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.397 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fa94ba09-acee-4f15-a0d9-b6fa26a25393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.408 243456 INFO nova.compute.manager [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Rescuing#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.409 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.409 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquired lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.410 243456 DEBUG nova.network.neutron [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.791 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.796 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance destroyed successfully.#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.802 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance destroyed successfully.#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.803 243456 DEBUG nova.virt.libvirt.vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-me
mber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.804 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.805 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.805 243456 DEBUG os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41441957-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.815 243456 INFO os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.872 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-unplugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.872 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.872 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-unplugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 WARNING nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-unplugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state rebuilding.#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:05 np0005634017 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 WARNING nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state rebuilding.#033[00m
Feb 28 05:14:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 326 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.1 MiB/s wr, 286 op/s
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.155 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting instance files /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.156 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deletion of /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del complete#033[00m
Feb 28 05:14:06 np0005634017 podman[306105]: 2026-02-28 10:14:06.186236208 +0000 UTC m=+0.105996011 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:14:06 np0005634017 podman[306104]: 2026-02-28 10:14:06.205173551 +0000 UTC m=+0.126604221 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.327 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.328 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating image(s)#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.360 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.401 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.439 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.447 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.534 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.535 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.536 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.536 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.564 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.572 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.834 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.897 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.980 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.981 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Ensure instance console log exists: /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.982 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.982 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.983 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.985 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start _get_guest_xml network_info=[{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.989 243456 WARNING nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.996 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:14:06 np0005634017 nova_compute[243452]: 2026-02-28 10:14:06.997 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.001 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.001 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.002 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.002 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.003 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.003 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.003 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.004 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.004 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.004 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.005 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.005 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.005 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.006 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.006 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.026 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461921869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.607 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.632 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.637 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.894 243456 DEBUG nova.network.neutron [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:07 np0005634017 nova_compute[243452]: 2026-02-28 10:14:07.932 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Releasing lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 312 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 252 op/s
Feb 28 05:14:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1407093777' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.189 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.192 243456 DEBUG nova.virt.libvirt.vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:06Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.193 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.194 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.199 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <uuid>1e13ffbf-dba5-421b-afc3-84eb471e2d44</uuid>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <name>instance-00000049</name>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-933663289</nova:name>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:14:06</nova:creationTime>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <nova:port uuid="41441957-9492-481d-847c-895c9fd2ef8f">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <entry name="serial">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <entry name="uuid">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:6c:7c:88"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <target dev="tap41441957-94"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log" append="off"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:08 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:08 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:08 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:08 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.209 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Preparing to wait for external event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.210 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.210 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.211 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.213 243456 DEBUG nova.virt.libvirt.vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-
ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:06Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.214 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.215 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.216 243456 DEBUG os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.218 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.219 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.220 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.225 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41441957-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.226 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41441957-94, col_values=(('external_ids', {'iface-id': '41441957-9492-481d-847c-895c9fd2ef8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:7c:88', 'vm-uuid': '1e13ffbf-dba5-421b-afc3-84eb471e2d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:08 np0005634017 NetworkManager[49805]: <info>  [1772273648.2304] manager: (tap41441957-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.237 243456 INFO os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.311 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.313 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.314 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:6c:7c:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.315 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Using config drive#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.345 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.357 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.365 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.397 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'keypairs' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:08 np0005634017 nova_compute[243452]: 2026-02-28 10:14:08.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.028 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273634.027302, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.030 243456 INFO nova.compute.manager [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.067 243456 DEBUG nova.compute.manager [None req-f436d665-147f-4727-b2c9-f142de07389e - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.180 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating config drive at /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.187 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv1ykfaco execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.347 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv1ykfaco" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.395 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.400 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.573 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.574 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting local config drive /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config because it was imported into RBD.#033[00m
Feb 28 05:14:09 np0005634017 kernel: tap41441957-94: entered promiscuous mode
Feb 28 05:14:09 np0005634017 NetworkManager[49805]: <info>  [1772273649.6269] manager: (tap41441957-94): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Feb 28 05:14:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:09Z|00630|binding|INFO|Claiming lport 41441957-9492-481d-847c-895c9fd2ef8f for this chassis.
Feb 28 05:14:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:09Z|00631|binding|INFO|41441957-9492-481d-847c-895c9fd2ef8f: Claiming fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:09Z|00632|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f ovn-installed in OVS
Feb 28 05:14:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:09Z|00633|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f up in Southbound
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.638 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.641 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.644 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008#033[00m
Feb 28 05:14:09 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:09 np0005634017 systemd-machined[209480]: New machine qemu-84-instance-00000049.
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37e0968a-68ea-4696-909d-c5bd6b72194c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.664 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.665 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.666 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad6f3c9-d24c-44e8-b524-f761b12800fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.666 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97c8e85a-555a-4838-99f4-3d0940b8b55a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 systemd[1]: Started Virtual Machine qemu-84-instance-00000049.
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.681 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[87616cba-20d7-4e25-a316-14aad18d9f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 systemd-udevd[306453]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8b2e9c-c1a2-4486-841e-3c67224e99a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 NetworkManager[49805]: <info>  [1772273649.7071] device (tap41441957-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:09 np0005634017 NetworkManager[49805]: <info>  [1772273649.7080] device (tap41441957-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9c324085-7c6a-45bf-8442-6a4400fa8076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[093b67b0-5e7d-4752-a27e-b6c69b6e9dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 NetworkManager[49805]: <info>  [1772273649.7435] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.772 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe1808-4d27-4684-b215-375a39ae4af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.776 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[910d825e-43cc-46b1-9df2-968a81b16b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 NetworkManager[49805]: <info>  [1772273649.8035] device (tap77a5b13a-e0): carrier: link connected
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.810 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7646e6e4-9d91-4a72-bf4b-57e39695c442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf13585-252d-4ab3-9787-2d0ba5b5181f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515708, 'reachable_time': 27002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306484, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.851 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[64c58558-ed4c-456b-88fa-abcdbd6e20e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515708, 'tstamp': 515708}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306485, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.874 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f362ff30-a8eb-4e13-8c8f-5053c017460a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515708, 'reachable_time': 27002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306486, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.904 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.916 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d848625-5597-4897-8c5d-ec4705e52f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d00f34-5915-49c4-bbd7-e325634c1582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.995 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.996 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.996 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:09.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:10 np0005634017 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 05:14:10 np0005634017 NetworkManager[49805]: <info>  [1772273650.0000] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.013 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:10Z|00634|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.019 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.022 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[727af06d-92f2-4117-b313-06c8c1202d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.025 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.027 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 304 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 260 op/s
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.328 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 1e13ffbf-dba5-421b-afc3-84eb471e2d44 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.329 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273650.3283272, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.330 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.371 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.378 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273650.329309, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.379 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:10 np0005634017 podman[306558]: 2026-02-28 10:14:10.400031002 +0000 UTC m=+0.061070078 container create 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.417 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.428 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:10 np0005634017 systemd[1]: Started libpod-conmon-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa.scope.
Feb 28 05:14:10 np0005634017 podman[306558]: 2026-02-28 10:14:10.369873444 +0000 UTC m=+0.030912340 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:10 np0005634017 nova_compute[243452]: 2026-02-28 10:14:10.475 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:14:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ac191ddc7bc426d6a4ce16e6d43bcc048e51a5a880fd81e5c01f4c30677ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:10 np0005634017 podman[306558]: 2026-02-28 10:14:10.494425996 +0000 UTC m=+0.155464852 container init 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:14:10 np0005634017 podman[306558]: 2026-02-28 10:14:10.499412487 +0000 UTC m=+0.160451343 container start 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:14:10 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : New worker (306580) forked
Feb 28 05:14:10 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : Loading success.
Feb 28 05:14:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 293 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Feb 28 05:14:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:12Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:43:e1 10.100.0.5
Feb 28 05:14:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:12Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:43:e1 10.100.0.5
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.844 243456 DEBUG nova.compute.manager [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.844 243456 DEBUG oslo_concurrency.lockutils [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.844 243456 DEBUG oslo_concurrency.lockutils [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.845 243456 DEBUG oslo_concurrency.lockutils [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.845 243456 DEBUG nova.compute.manager [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Processing event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.846 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.850 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273652.8506882, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.851 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.856 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.859 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance spawned successfully.#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.860 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.885 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.891 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.892 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.892 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.893 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.893 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.894 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.900 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:14:12 np0005634017 nova_compute[243452]: 2026-02-28 10:14:12.963 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:13 np0005634017 nova_compute[243452]: 2026-02-28 10:14:13.021 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:13 np0005634017 nova_compute[243452]: 2026-02-28 10:14:13.021 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:13 np0005634017 nova_compute[243452]: 2026-02-28 10:14:13.022 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:14:13 np0005634017 nova_compute[243452]: 2026-02-28 10:14:13.078 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:13 np0005634017 nova_compute[243452]: 2026-02-28 10:14:13.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:13 np0005634017 nova_compute[243452]: 2026-02-28 10:14:13.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 302 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Feb 28 05:14:14 np0005634017 nova_compute[243452]: 2026-02-28 10:14:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:14Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:b7:32 10.100.0.6
Feb 28 05:14:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:14Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:b7:32 10.100.0.6
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.246 243456 DEBUG nova.compute.manager [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.247 243456 DEBUG oslo_concurrency.lockutils [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.247 243456 DEBUG oslo_concurrency.lockutils [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.248 243456 DEBUG oslo_concurrency.lockutils [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.248 243456 DEBUG nova.compute.manager [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.249 243456 WARNING nova.compute.manager [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state None.#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.509 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.509 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.510 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.511 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.511 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.514 243456 INFO nova.compute.manager [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Terminating instance#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.515 243456 DEBUG nova.compute.manager [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:14:15 np0005634017 kernel: tap41441957-94 (unregistering): left promiscuous mode
Feb 28 05:14:15 np0005634017 NetworkManager[49805]: <info>  [1772273655.5583] device (tap41441957-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:15Z|00635|binding|INFO|Releasing lport 41441957-9492-481d-847c-895c9fd2ef8f from this chassis (sb_readonly=0)
Feb 28 05:14:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:15Z|00636|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f down in Southbound
Feb 28 05:14:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:15Z|00637|binding|INFO|Removing iface tap41441957-94 ovn-installed in OVS
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.576 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.579 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.581 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.583 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dee43a-9271-463b-aa1f-48d9c084f4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.585 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000049.scope: Deactivated successfully.
Feb 28 05:14:15 np0005634017 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000049.scope: Consumed 3.363s CPU time.
Feb 28 05:14:15 np0005634017 systemd-machined[209480]: Machine qemu-84-instance-00000049 terminated.
Feb 28 05:14:15 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : haproxy version is 2.8.14-c23fe91
Feb 28 05:14:15 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : path to executable is /usr/sbin/haproxy
Feb 28 05:14:15 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [WARNING]  (306578) : Exiting Master process...
Feb 28 05:14:15 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [ALERT]    (306578) : Current worker (306580) exited with code 143 (Terminated)
Feb 28 05:14:15 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [WARNING]  (306578) : All workers exited. Exiting... (0)
Feb 28 05:14:15 np0005634017 systemd[1]: libpod-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa.scope: Deactivated successfully.
Feb 28 05:14:15 np0005634017 podman[306614]: 2026-02-28 10:14:15.70013213 +0000 UTC m=+0.044939515 container died 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fb7ac191ddc7bc426d6a4ce16e6d43bcc048e51a5a880fd81e5c01f4c30677ae-merged.mount: Deactivated successfully.
Feb 28 05:14:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa-userdata-shm.mount: Deactivated successfully.
Feb 28 05:14:15 np0005634017 podman[306614]: 2026-02-28 10:14:15.742519992 +0000 UTC m=+0.087327397 container cleanup 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:15 np0005634017 systemd[1]: libpod-conmon-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa.scope: Deactivated successfully.
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.761 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance destroyed successfully.#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.762 243456 DEBUG nova.objects.instance [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.777 243456 DEBUG nova.virt.libvirt.vif [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:13Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.779 243456 DEBUG nova.network.os_vif_util [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.780 243456 DEBUG nova.network.os_vif_util [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.781 243456 DEBUG os_vif [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.783 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41441957-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.785 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.788 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.790 243456 INFO os_vif [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')#033[00m
Feb 28 05:14:15 np0005634017 podman[306648]: 2026-02-28 10:14:15.838088309 +0000 UTC m=+0.065031830 container remove 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0ff094-c8d7-471d-b09b-4e709169c42c]: (4, ('Sat Feb 28 10:14:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa)\n29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa\nSat Feb 28 10:14:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa)\n29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf72fdd-197c-4b24-b823-3414adcc7028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.845 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 05:14:15 np0005634017 nova_compute[243452]: 2026-02-28 10:14:15.853 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7af3b6-2121-43ca-86fc-b90497a7cf96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.868 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d96a9c-6e97-4a2a-8667-f7c177d5788b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe4c46a-815c-415f-8001-11790bff16b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.884 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a15d5bcb-af4d-49a3-a34a-d192c90c35c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515701, 'reachable_time': 22950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306683, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:15 np0005634017 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.887 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:14:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.887 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4d786a2e-c58f-4e08-bdb8-f46b02555431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.059 243456 INFO nova.virt.libvirt.driver [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting instance files /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del#033[00m
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.061 243456 INFO nova.virt.libvirt.driver [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deletion of /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del complete#033[00m
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.120 243456 INFO nova.compute.manager [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.121 243456 DEBUG oslo.service.loopingcall [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.122 243456 DEBUG nova.compute.manager [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.122 243456 DEBUG nova.network.neutron [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:14:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 349 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.6 MiB/s wr, 321 op/s
Feb 28 05:14:16 np0005634017 nova_compute[243452]: 2026-02-28 10:14:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:14:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 95K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 23K writes, 8207 syncs, 2.91 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 42.16 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4559 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.182 243456 DEBUG nova.network.neutron [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.199 243456 INFO nova.compute.manager [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 1.08 seconds to deallocate network for instance.#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.239 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.240 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.283 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.311 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.312 243456 DEBUG nova.compute.provider_tree [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.336 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.366 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.439 243456 DEBUG oslo_concurrency.processutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247075934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.975 243456 DEBUG oslo_concurrency.processutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:17 np0005634017 nova_compute[243452]: 2026-02-28 10:14:17.983 243456 DEBUG nova.compute.provider_tree [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.006 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.040 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.083 243456 INFO nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Deleted allocations for instance 1e13ffbf-dba5-421b-afc3-84eb471e2d44#033[00m
Feb 28 05:14:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 344 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.0 MiB/s wr, 276 op/s
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.166 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.332 243456 DEBUG nova.compute.manager [req-f2008e52-0014-45e6-8cc0-9dd7bca5d9cd req-c481a3b3-c57f-4d21-85cc-c618d1144b9a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-deleted-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.491 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:14:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.708 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.708 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.925 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.927 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.928 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:18 np0005634017 nova_compute[243452]: 2026-02-28 10:14:18.965 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.036 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.037 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.045 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.046 243456 INFO nova.compute.claims [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.053 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.191 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701244892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.763 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.770 243456 DEBUG nova.compute.provider_tree [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.792 243456 DEBUG nova.scheduler.client.report [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.816 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.816 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.818 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.825 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.826 243456 INFO nova.compute.claims [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.882 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.883 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.904 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:14:19 np0005634017 nova_compute[243452]: 2026-02-28 10:14:19.922 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.020 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.021 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.022 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating image(s)#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.052 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.088 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.123 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.127 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.164 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 328 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 249 op/s
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.203 243456 DEBUG nova.policy [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6b5724da2e648fd85fd8cb293525967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.209 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.211 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.212 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.212 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.242 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.247 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.489 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.522 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.523 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.551 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.558 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.619 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.687 243456 DEBUG nova.objects.instance [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.701 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.701 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Ensure instance console log exists: /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.701 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.702 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.702 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478028342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.717 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Successfully created port: 0226697b-95b2-4303-aa60-b98eb0bb4cd9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.723 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.727 243456 DEBUG nova.compute.provider_tree [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.741 243456 DEBUG nova.scheduler.client.report [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:20 np0005634017 kernel: tapa2f66c0b-78 (unregistering): left promiscuous mode
Feb 28 05:14:20 np0005634017 NetworkManager[49805]: <info>  [1772273660.7565] device (tapa2f66c0b-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.760 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.761 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:14:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:20Z|00638|binding|INFO|Releasing lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 from this chassis (sb_readonly=0)
Feb 28 05:14:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:20Z|00639|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 down in Southbound
Feb 28 05:14:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:20Z|00640|binding|INFO|Removing iface tapa2f66c0b-78 ovn-installed in OVS
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.769 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.770 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.770 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.770 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.771 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.772 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 unbound from our chassis#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.773 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be6830ed-222e-4fd8-86af-dbeec5245246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.802 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.811 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.811 243456 INFO nova.compute.claims [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.813 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cfd7fd-e3c6-441c-b229-8892354c2a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.816 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7a93c089-1140-4d8d-be1a-ee872b0bf3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:20 np0005634017 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 28 05:14:20 np0005634017 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Consumed 12.735s CPU time.
Feb 28 05:14:20 np0005634017 systemd-machined[209480]: Machine qemu-83-instance-0000004b terminated.
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f079a522-a1cc-4c62-8673-da438a386f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b233421f-ad01-4f9e-afa2-09355039ae03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306929, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c00bad-aab3-4b2f-b5e4-34ff34760d03]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306930, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306930, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.854 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.859 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.859 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.860 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.860 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.865 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.865 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.886 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:14:20 np0005634017 nova_compute[243452]: 2026-02-28 10:14:20.911 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.010 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.012 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.012 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Creating image(s)#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.045 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.083 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.113 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.118 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.149 243456 DEBUG nova.policy [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.160 243456 DEBUG nova.compute.manager [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.161 243456 DEBUG oslo_concurrency.lockutils [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.161 243456 DEBUG oslo_concurrency.lockutils [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.161 243456 DEBUG oslo_concurrency.lockutils [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.162 243456 DEBUG nova.compute.manager [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.162 243456 WARNING nova.compute.manager [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state active and task_state rescuing.#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.178 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.211 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.212 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.212 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.213 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.250 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.254 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659373007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.281 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.372 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.373 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.382 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.382 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.457 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Successfully updated port: 0226697b-95b2-4303-aa60-b98eb0bb4cd9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.465 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.538 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.587 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.623 243456 DEBUG nova.objects.instance [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid 32fe69ba-ea8d-411e-8917-de872b62b8b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.631 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance destroyed successfully.#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.632 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'numa_topology' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.648 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.648 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquired lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.648 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.650 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Ensure instance console log exists: /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.659 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Attempting rescue#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.660 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.670 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.670 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating image(s)#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.690 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.693 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.731 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.756 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.760 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423912894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.792 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.796 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.802 243456 DEBUG nova.compute.provider_tree [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.817 243456 DEBUG nova.scheduler.client.report [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.836 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.836 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.844 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.845 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3700MB free_disk=59.891969472169876GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.845 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.845 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.848 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.848 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.849 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.849 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.869 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.872 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.903 243456 DEBUG nova.compute.manager [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-changed-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.903 243456 DEBUG nova.compute.manager [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Refreshing instance network info cache due to event network-changed-0226697b-95b2-4303-aa60-b98eb0bb4cd9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.904 243456 DEBUG oslo_concurrency.lockutils [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.905 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.905 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.944 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.965 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.991 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.991 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance dc2dbab8-312e-4130-8141-d848beeb6bec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.991 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ba33446e-fcd5-454c-bc8c-79a367002d57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 32fe69ba-ea8d-411e-8917-de872b62b8b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b883c1a1-cf01-434d-8258-24ca193a2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:14:21 np0005634017 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.064 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.065 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.066 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Creating image(s)#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.090 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.115 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.140 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.144 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 322 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 253 op/s
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.183 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.185 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'migration_context' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.195 243456 DEBUG nova.policy [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14b2d28379164786ad68563acb83a50a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd70835696bf4e12a062516e9de5527d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.202 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.203 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start _get_guest_xml network_info=[{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "vif_mac": "fa:16:3e:39:b7:32"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.204 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'resources' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.219 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.220 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.221 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.222 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.246 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.250 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b883c1a1-cf01-434d-8258-24ca193a2683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.282 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.309 243456 WARNING nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.313 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.314 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.316 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.317 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.317 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.317 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.320 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.320 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.338 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.461 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b883c1a1-cf01-434d-8258-24ca193a2683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.532 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.618 243456 DEBUG nova.objects.instance [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid b883c1a1-cf01-434d-8258-24ca193a2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.633 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.634 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Ensure instance console log exists: /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.634 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.634 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.635 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.643 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Successfully created port: 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:14:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701906499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:14:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2401.6 total, 600.0 interval#012Cumulative writes: 26K writes, 103K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 26K writes, 9226 syncs, 2.87 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 48K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.98 MB, 0.08 MB/s#012Interval WAL: 12K writes, 5009 syncs, 2.49 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.848 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.855 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.871 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1146444698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.894 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.895 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.938 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:14:22 np0005634017 nova_compute[243452]: 2026-02-28 10:14:22.939 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.124 243456 DEBUG nova.compute.manager [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG oslo_concurrency.lockutils [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG oslo_concurrency.lockutils [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG oslo_concurrency.lockutils [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG nova.compute.manager [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.126 243456 WARNING nova.compute.manager [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state active and task_state rescuing.#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.201 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updating instance_info_cache with network_info: [{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.228 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Releasing lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.228 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance network_info: |[{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.229 243456 DEBUG oslo_concurrency.lockutils [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.229 243456 DEBUG nova.network.neutron [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Refreshing network info cache for port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.233 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start _get_guest_xml network_info=[{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.235 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Successfully created port: aa9724a7-fad1-4968-a1b0-0d8182007723 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.246 243456 WARNING nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.251 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.252 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.257 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.258 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.258 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.259 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.260 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.260 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.261 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.261 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.261 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.262 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.262 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.262 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.263 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.263 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.269 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782364055' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.467 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.469 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.511 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Successfully updated port: 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.549 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.549 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.550 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.733 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627594311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.791 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.816 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.820 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.940 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.941 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.941 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.977 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.977 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:14:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550592236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:23 np0005634017 nova_compute[243452]: 2026-02-28 10:14:23.978 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.001 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.003 243456 DEBUG nova.virt.libvirt.vif [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:01Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "vif_mac": "fa:16:3e:39:b7:32"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.004 243456 DEBUG nova.network.os_vif_util [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "vif_mac": "fa:16:3e:39:b7:32"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.004 243456 DEBUG nova.network.os_vif_util [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.006 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.021 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <uuid>dc2dbab8-312e-4130-8141-d848beeb6bec</uuid>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <name>instance-0000004b</name>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1249757035</nova:name>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:14:22</nova:creationTime>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:user uuid="ec5caafc16ec43a493f7d553353a27c3">tempest-ServerRescueNegativeTestJSON-338743494-project-member</nova:user>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:project uuid="809bf856030f4316b385ba1c02291ca7">tempest-ServerRescueNegativeTestJSON-338743494</nova:project>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:port uuid="a2f66c0b-78f3-49cb-929b-5e9b4072beb0">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="serial">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="uuid">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="vdb" bus="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:39:b7:32"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="tapa2f66c0b-78"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/console.log" append="off"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:24 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:24 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.030 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance destroyed successfully.#033[00m
Feb 28 05:14:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 365 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 282 op/s
Feb 28 05:14:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935120704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.368 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.369 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.369 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.370 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.379 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Successfully updated port: aa9724a7-fad1-4968-a1b0-0d8182007723 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.395 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.397 243456 DEBUG nova.virt.libvirt.vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:19Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.398 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.399 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.400 243456 DEBUG nova.objects.instance [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.424 243456 DEBUG nova.compute.manager [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.425 243456 DEBUG nova.compute.manager [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing instance network info cache due to event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.425 243456 DEBUG oslo_concurrency.lockutils [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.425 243456 DEBUG oslo_concurrency.lockutils [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.426 243456 DEBUG nova.network.neutron [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing network info cache for port aa9724a7-fad1-4968-a1b0-0d8182007723 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.429 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.431 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <uuid>ba33446e-fcd5-454c-bc8c-79a367002d57</uuid>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <name>instance-0000004c</name>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-877654664</nova:name>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:14:23</nova:creationTime>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <nova:port uuid="0226697b-95b2-4303-aa60-b98eb0bb4cd9">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="serial">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="uuid">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:70:1d:4f"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <target dev="tap0226697b-95"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log" append="off"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:24 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:24 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:24 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:24 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.432 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Preparing to wait for external event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.432 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.433 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.433 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.434 243456 DEBUG nova.virt.libvirt.vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-
ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:19Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.434 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.435 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.435 243456 DEBUG os_vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.438 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.438 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.442 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.443 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.443 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.443 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No VIF found with MAC fa:16:3e:39:b7:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.444 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Using config drive#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.466 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.477 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.477 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0226697b-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.478 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0226697b-95, col_values=(('external_ids', {'iface-id': '0226697b-95b2-4303-aa60-b98eb0bb4cd9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:1d:4f', 'vm-uuid': 'ba33446e-fcd5-454c-bc8c-79a367002d57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:24 np0005634017 NetworkManager[49805]: <info>  [1772273664.4807] manager: (tap0226697b-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.485 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.486 243456 INFO os_vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.488 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.517 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'keypairs' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.537 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.537 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.537 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:70:1d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.538 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Using config drive#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.559 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.661 243456 DEBUG nova.network.neutron [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.839 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.872 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.873 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance network_info: |[{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.876 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start _get_guest_xml network_info=[{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.881 243456 WARNING nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.886 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.886 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.893 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.893 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.895 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.919 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating config drive at /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue#033[00m
Feb 28 05:14:24 np0005634017 nova_compute[243452]: 2026-02-28 10:14:24.924 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppcyagode execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.055 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppcyagode" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.082 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.086 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.229 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.230 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deleting local config drive /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue because it was imported into RBD.#033[00m
Feb 28 05:14:25 np0005634017 kernel: tapa2f66c0b-78: entered promiscuous mode
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.2730] manager: (tapa2f66c0b-78): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00641|binding|INFO|Claiming lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for this chassis.
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00642|binding|INFO|a2f66c0b-78f3-49cb-929b-5e9b4072beb0: Claiming fa:16:3e:39:b7:32 10.100.0.6
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.292 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.295 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 bound to our chassis#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.299 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508#033[00m
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00643|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 ovn-installed in OVS
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00644|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 up in Southbound
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:25 np0005634017 systemd-machined[209480]: New machine qemu-85-instance-0000004b.
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe3f34-7a78-4043-92a4-41799b098d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 systemd[1]: Started Virtual Machine qemu-85-instance-0000004b.
Feb 28 05:14:25 np0005634017 systemd-udevd[307678]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.344 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d27fa713-c159-4a7e-9f22-e773644875e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.348 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[83c674a3-c9a7-48d5-8676-b077c773db4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.3558] device (tapa2f66c0b-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.3573] device (tapa2f66c0b-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.373 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46eb5b74-6e3e-4494-94cf-ac2690e7a863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.378 243456 DEBUG nova.network.neutron [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[535eae58-5366-4d7d-b49c-124940d88da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307688, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.394 243456 DEBUG oslo_concurrency.lockutils [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.395 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquired lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.395 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.403 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a53d19e-bf80-4e2b-aa01-9172bdd23649]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307690, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307690, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.405 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.414 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.415 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.415 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.416 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.440 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating config drive at /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.444 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgw8_dt31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/727041209' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.483 243456 DEBUG nova.network.neutron [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updated VIF entry in instance network info cache for port 0226697b-95b2-4303-aa60-b98eb0bb4cd9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.484 243456 DEBUG nova.network.neutron [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updating instance_info_cache with network_info: [{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.486 243456 DEBUG nova.compute.manager [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG nova.compute.manager [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing instance network info cache due to event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG oslo_concurrency.lockutils [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG oslo_concurrency.lockutils [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG nova.network.neutron [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.489 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.519 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.522 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.546 243456 DEBUG oslo_concurrency.lockutils [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.584 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgw8_dt31" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.618 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.626 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.653 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.763 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for dc2dbab8-312e-4130-8141-d848beeb6bec due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.763 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273665.7626216, dc2dbab8-312e-4130-8141-d848beeb6bec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.763 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.768 243456 DEBUG nova.compute.manager [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.785 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.785 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting local config drive /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config because it was imported into RBD.#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.807 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.816 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:25 np0005634017 kernel: tap0226697b-95: entered promiscuous mode
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.8369] manager: (tap0226697b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Feb 28 05:14:25 np0005634017 systemd-udevd[307680]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00645|binding|INFO|Claiming lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 for this chassis.
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00646|binding|INFO|0226697b-95b2-4303-aa60-b98eb0bb4cd9: Claiming fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00647|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 ovn-installed in OVS
Feb 28 05:14:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:25Z|00648|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 up in Southbound
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.857 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.858 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.859 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008#033[00m
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.8614] device (tap0226697b-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.863 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.864 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273665.7630217, dc2dbab8-312e-4130-8141-d848beeb6bec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.864 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.8656] device (tap0226697b-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.871 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49cea0db-3ee5-4d54-ab63-8624300c162c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.872 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.873 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f6ff8d-5971-413e-a0e4-719846b0cdda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.874 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[158150f9-bd1b-4763-b4e3-f20fdebf1039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 systemd-machined[209480]: New machine qemu-86-instance-0000004c.
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.888 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.891 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d10639bb-ac4e-4ed0-85fb-8e4b80f2f829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 nova_compute[243452]: 2026-02-28 10:14:25.893 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:25 np0005634017 systemd[1]: Started Virtual Machine qemu-86-instance-0000004c.
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.912 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b13476f-8649-4185-acc3-c476491e91cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.946 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f05bb0-220f-4b6f-8b16-3394427f4943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 NetworkManager[49805]: <info>  [1772273665.9534] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c7e20-a71c-418e-8233-b8a0304e074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.993 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5f26c7d3-03d5-4575-ac5b-b79075ca8937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.999 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f98a2b-5fe8-45e8-9926-9d31decfc2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 NetworkManager[49805]: <info>  [1772273666.0249] device (tap77a5b13a-e0): carrier: link connected
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.036 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af2b08cf-db17-4a2f-8f38-30b4f79db89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.051 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[515d21d5-1ec8-4972-b852-9878335eb287]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517331, 'reachable_time': 27035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307878, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644307750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.072 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e87b5e59-084e-4e56-a368-032f1393ac33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517331, 'tstamp': 517331}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307880, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.079 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.080 243456 DEBUG nova.virt.libvirt.vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1765724638',display_name='tempest-TestNetworkAdvancedServerOps-server-1765724638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1765724638',id=77,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuYc1DeElc1rkkDaA1+Wg1yBEjd9NzIhoaxm5Bt8KGJiKrnwEv07p/i1kPPlomnS4Xw2edPyOwDq78Zz+5s4I0hLVriZjH75jhUt93INSMBcBhlqyJu7ug5cVivkPNiww==',key_name='tempest-TestNetworkAdvancedServerOps-1506787166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-0cvc6m44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:20Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=32fe69ba-ea8d-411e-8917-de872b62b8b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.080 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.081 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.082 243456 DEBUG nova.objects.instance [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32fe69ba-ea8d-411e-8917-de872b62b8b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.092 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf7b6e0-69f3-4e8a-8c82-aefc5666492d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517331, 'reachable_time': 27035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307882, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.098 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <uuid>32fe69ba-ea8d-411e-8917-de872b62b8b0</uuid>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <name>instance-0000004d</name>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1765724638</nova:name>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:14:24</nova:creationTime>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <nova:port uuid="6b5acb8c-5d09-42b0-9c1d-b51be18712fe">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <entry name="serial">32fe69ba-ea8d-411e-8917-de872b62b8b0</entry>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <entry name="uuid">32fe69ba-ea8d-411e-8917-de872b62b8b0</entry>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/32fe69ba-ea8d-411e-8917-de872b62b8b0_disk">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:88:90:03"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <target dev="tap6b5acb8c-5d"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/console.log" append="off"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:26 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:26 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:26 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:26 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.099 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Preparing to wait for external event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.099 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.099 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.100 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.100 243456 DEBUG nova.virt.libvirt.vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1765724638',display_name='tempest-TestNetworkAdvancedServerOps-server-1765724638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1765724638',id=77,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuYc1DeElc1rkkDaA1+Wg1yBEjd9NzIhoaxm5Bt8KGJiKrnwEv07p/i1kPPlomnS4Xw2edPyOwDq78Zz+5s4I0hLVriZjH75jhUt93INSMBcBhlqyJu7ug5cVivkPNiww==',key_name='tempest-TestNetworkAdvancedServerOps-1506787166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-0cvc6m44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:20Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=32fe69ba-ea8d-411e-8917-de872b62b8b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.101 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.101 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.102 243456 DEBUG os_vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.103 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.103 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.105 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b5acb8c-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.106 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b5acb8c-5d, col_values=(('external_ids', {'iface-id': '6b5acb8c-5d09-42b0-9c1d-b51be18712fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:90:03', 'vm-uuid': '32fe69ba-ea8d-411e-8917-de872b62b8b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:26 np0005634017 NetworkManager[49805]: <info>  [1772273666.1088] manager: (tap6b5acb8c-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.112 243456 INFO os_vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d')#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.129 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29cd9192-9ec7-48a9-8ab6-58ca3d731f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.161 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.161 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.162 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:88:90:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.164 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Using config drive#033[00m
Feb 28 05:14:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 11 MiB/s wr, 306 op/s
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ece7e3ea-8aa2-4187-8890-83de97215982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.192 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.193 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:26 np0005634017 NetworkManager[49805]: <info>  [1772273666.1974] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Feb 28 05:14:26 np0005634017 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.206 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:26Z|00649|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.226 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.225 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8785f08b-7820-47a7-9221-7ea661563a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.230 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.233 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.250 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.285 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.285 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.286 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.286 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.286 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.391 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273666.390876, ba33446e-fcd5-454c-bc8c-79a367002d57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.391 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.414 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.419 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273666.39205, ba33446e-fcd5-454c-bc8c-79a367002d57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.442 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.445 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.468 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:26 np0005634017 podman[307980]: 2026-02-28 10:14:26.59953232 +0000 UTC m=+0.056418447 container create 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:14:26 np0005634017 systemd[1]: Started libpod-conmon-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4.scope.
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.641 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Creating config drive at /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.649 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbkkcx0bf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750aff55c3900492540f27eedaa3917fbcf26f3c02e0a8efc1a0b07485dba889/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:26 np0005634017 podman[307980]: 2026-02-28 10:14:26.571957355 +0000 UTC m=+0.028843512 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:26 np0005634017 podman[307980]: 2026-02-28 10:14:26.677588625 +0000 UTC m=+0.134474812 container init 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:14:26 np0005634017 podman[307980]: 2026-02-28 10:14:26.68382117 +0000 UTC m=+0.140707317 container start 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:14:26 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : New worker (308005) forked
Feb 28 05:14:26 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : Loading success.
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.805 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbkkcx0bf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.831 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.835 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.882 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.951 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Releasing lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.952 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance network_info: |[{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.957 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start _get_guest_xml network_info=[{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.962 243456 WARNING nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.969 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.970 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.973 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.973 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.974 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.974 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.977 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.977 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:14:26 np0005634017 nova_compute[243452]: 2026-02-28 10:14:26.982 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.016 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.017 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deleting local config drive /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config because it was imported into RBD.#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.021 243456 DEBUG nova.compute.manager [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.022 243456 DEBUG oslo_concurrency.lockutils [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.022 243456 DEBUG oslo_concurrency.lockutils [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.022 243456 DEBUG oslo_concurrency.lockutils [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.023 243456 DEBUG nova.compute.manager [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Processing event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.023 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.029 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273667.0286882, ba33446e-fcd5-454c-bc8c-79a367002d57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.029 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.032 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.040 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance spawned successfully.#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.041 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.059 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:27 np0005634017 kernel: tap6b5acb8c-5d: entered promiscuous mode
Feb 28 05:14:27 np0005634017 systemd-udevd[307863]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:27 np0005634017 NetworkManager[49805]: <info>  [1772273667.0648] manager: (tap6b5acb8c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.066 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:27Z|00650|binding|INFO|Claiming lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe for this chassis.
Feb 28 05:14:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:27Z|00651|binding|INFO|6b5acb8c-5d09-42b0-9c1d-b51be18712fe: Claiming fa:16:3e:88:90:03 10.100.0.11
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.074 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.074 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.075 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.075 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.076 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.076 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 NetworkManager[49805]: <info>  [1772273667.0809] device (tap6b5acb8c-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:27 np0005634017 NetworkManager[49805]: <info>  [1772273667.0826] device (tap6b5acb8c-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.082 243456 DEBUG nova.network.neutron [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updated VIF entry in instance network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.083 243456 DEBUG nova.network.neutron [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.085 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.090 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.092 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 bound to our chassis#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.094 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269fae56-42c3-478e-88d5-36164c0a6ae4#033[00m
Feb 28 05:14:27 np0005634017 systemd-machined[209480]: New machine qemu-87-instance-0000004d.
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.106 243456 DEBUG oslo_concurrency.lockutils [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.110 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a0367f14-0b50-4e18-890c-4a5cd8b51dbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.112 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap269fae56-41 in ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.114 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap269fae56-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.115 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8e83ae-38fe-4acf-9a67-f3e0491d66cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:27Z|00652|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe ovn-installed in OVS
Feb 28 05:14:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:27Z|00653|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe up in Southbound
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.116 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7716475-f69d-46d1-8a54-09612ccd61d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 systemd[1]: Started Virtual Machine qemu-87-instance-0000004d.
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.129 243456 INFO nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 7.11 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.129 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.134 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3623756c-d38e-4c57-b0fa-4c7ebc706dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.155 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b33215d-8a83-4aaf-92c3-53a82fde8d1a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.184 243456 INFO nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 8.19 seconds to build instance.#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd3d3a8-d884-4104-8ae3-27edda837031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9f9515-7752-4494-b30c-bc9890cbf164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 NetworkManager[49805]: <info>  [1772273667.1913] manager: (tap269fae56-40): new Veth device (/org/freedesktop/NetworkManager/Devices/295)
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.206 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.232 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[892f2cd2-25ec-4fce-bd84-bac1d7d6c14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.236 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[638d633f-d666-4e8e-9033-6f267833394c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 NetworkManager[49805]: <info>  [1772273667.2566] device (tap269fae56-40): carrier: link connected
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[efdcb874-58e9-4a66-a039-c80ed494c1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18245f5d-746e-4808-8e0f-065e838e12e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517454, 'reachable_time': 24789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308101, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.294 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5611b6-cac3-48e0-b303-c5d8cd20745c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:282c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517454, 'tstamp': 517454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308102, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f26a80-a328-46fe-a9f0-16f4489e1a12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517454, 'reachable_time': 24789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308103, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78ac8bc2-5a04-4686-8b10-8ee65c2094f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.402 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a467cd3-f18f-4cac-bef8-de3714eded04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.405 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269fae56-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 NetworkManager[49805]: <info>  [1772273667.4076] manager: (tap269fae56-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Feb 28 05:14:27 np0005634017 kernel: tap269fae56-40: entered promiscuous mode
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.409 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269fae56-40, col_values=(('external_ids', {'iface-id': '7bc082a7-4576-4494-b633-962a40b4d816'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:27Z|00654|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.412 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.413 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f753990-b2c2-4155-84a2-94ebffd1d6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.414 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.416 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'env', 'PROCESS_TAG=haproxy-269fae56-42c3-478e-88d5-36164c0a6ae4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/269fae56-42c3-478e-88d5-36164c0a6ae4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.570 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273667.5700831, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.570 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3482838717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.599 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.604 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273667.5727224, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.604 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.611 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.639 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.648 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.679 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:27 np0005634017 nova_compute[243452]: 2026-02-28 10:14:27.712 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:27 np0005634017 podman[308195]: 2026-02-28 10:14:27.845317169 +0000 UTC m=+0.115878269 container create 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:14:27 np0005634017 podman[308195]: 2026-02-28 10:14:27.75784011 +0000 UTC m=+0.028401240 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:27 np0005634017 systemd[1]: Started libpod-conmon-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9.scope.
Feb 28 05:14:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b749c5a528cfce51b141fd23aad936756a87d85288db6d59ade46c812befa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:27 np0005634017 podman[308195]: 2026-02-28 10:14:27.943354826 +0000 UTC m=+0.213915936 container init 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:14:27 np0005634017 podman[308195]: 2026-02-28 10:14:27.951117534 +0000 UTC m=+0.221678634 container start 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:27 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : New worker (308233) forked
Feb 28 05:14:27 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : Loading success.
Feb 28 05:14:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.6 MiB/s wr, 230 op/s
Feb 28 05:14:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966840864' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.244 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.245 243456 DEBUG nova.virt.libvirt.vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1601523722',display_name='tempest-ServerActionsTestOtherA-server-1601523722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1601523722',id=78,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeB4M8j3RPMEGsTEupU809MpDMu1lONxa3GM96jOaKy7lCnQVg4MzBbpF5eLhYMsfAQf+axdx0pdKDPLAAkphsN2WtFcI9X16V02fEsKKASEotygshJqgIA8eut813xpw==',key_name='tempest-keypair-127070709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-by9e4o0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14b2d28379164786ad68563acb83a50a',uuid=b883c1a1-cf01-434d-8258-24ca193a2683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.245 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.246 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.248 243456 DEBUG nova.objects.instance [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid b883c1a1-cf01-434d-8258-24ca193a2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.269 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <uuid>b883c1a1-cf01-434d-8258-24ca193a2683</uuid>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <name>instance-0000004e</name>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestOtherA-server-1601523722</nova:name>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:14:26</nova:creationTime>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <nova:port uuid="aa9724a7-fad1-4968-a1b0-0d8182007723">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <entry name="serial">b883c1a1-cf01-434d-8258-24ca193a2683</entry>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <entry name="uuid">b883c1a1-cf01-434d-8258-24ca193a2683</entry>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b883c1a1-cf01-434d-8258-24ca193a2683_disk">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b883c1a1-cf01-434d-8258-24ca193a2683_disk.config">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:c4:71:d6"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <target dev="tapaa9724a7-fa"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/console.log" append="off"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:28 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:28 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:28 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:28 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.270 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Preparing to wait for external event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.271 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.271 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.271 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.272 243456 DEBUG nova.virt.libvirt.vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1601523722',display_name='tempest-ServerActionsTestOtherA-server-1601523722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1601523722',id=78,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeB4M8j3RPMEGsTEupU809MpDMu1lONxa3GM96jOaKy7lCnQVg4MzBbpF5eLhYMsfAQf+axdx0pdKDPLAAkphsN2WtFcI9X16V02fEsKKASEotygshJqgIA8eut813xpw==',key_name='tempest-keypair-127070709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-by9e4o0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14b2d28379164786ad68563acb83a50a',uuid=b883c1a1-cf01-434d-8258-24ca193a2683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.272 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.273 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.274 243456 DEBUG os_vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.274 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.278 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.279 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9724a7-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.279 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa9724a7-fa, col_values=(('external_ids', {'iface-id': 'aa9724a7-fad1-4968-a1b0-0d8182007723', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:71:d6', 'vm-uuid': 'b883c1a1-cf01-434d-8258-24ca193a2683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:28 np0005634017 NetworkManager[49805]: <info>  [1772273668.2822] manager: (tapaa9724a7-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.290 243456 INFO os_vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa')#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.339 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.339 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.339 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:c4:71:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.340 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Using config drive#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.360 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.411 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.411 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 WARNING nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state rescued and task_state None.#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 WARNING nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state rescued and task_state None.#033[00m
Feb 28 05:14:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.856 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Creating config drive at /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config#033[00m
Feb 28 05:14:28 np0005634017 nova_compute[243452]: 2026-02-28 10:14:28.865 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp0o5oljj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:14:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 21K writes, 84K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 21K writes, 7517 syncs, 2.90 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 40.75 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4570 syncs, 2.44 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.011 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp0o5oljj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.037 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.042 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config b883c1a1-cf01-434d-8258-24ca193a2683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:14:29
Feb 28 05:14:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:14:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:14:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'backups']
Feb 28 05:14:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.226 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config b883c1a1-cf01-434d-8258-24ca193a2683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.227 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Deleting local config drive /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config because it was imported into RBD.#033[00m
Feb 28 05:14:29 np0005634017 NetworkManager[49805]: <info>  [1772273669.2765] manager: (tapaa9724a7-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Feb 28 05:14:29 np0005634017 kernel: tapaa9724a7-fa: entered promiscuous mode
Feb 28 05:14:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:29Z|00655|binding|INFO|Claiming lport aa9724a7-fad1-4968-a1b0-0d8182007723 for this chassis.
Feb 28 05:14:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:29Z|00656|binding|INFO|aa9724a7-fad1-4968-a1b0-0d8182007723: Claiming fa:16:3e:c4:71:d6 10.100.0.6
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.301 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:71:d6 10.100.0.6'], port_security=['fa:16:3e:c4:71:d6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b883c1a1-cf01-434d-8258-24ca193a2683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3d01661-9794-4315-81d4-c2d74d609338', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=aa9724a7-fad1-4968-a1b0-0d8182007723) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.304 156681 INFO neutron.agent.ovn.metadata.agent [-] Port aa9724a7-fad1-4968-a1b0-0d8182007723 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.307 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923#033[00m
Feb 28 05:14:29 np0005634017 systemd-machined[209480]: New machine qemu-88-instance-0000004e.
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[17147773-7826-4f01-b867-a7bf217dc308]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.322 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e5dcf5b-21 in ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.327 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e5dcf5b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.327 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf83cb0-9122-40b2-a231-27947e16e6fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.328 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd35f3b2-c67d-4bc9-8466-376e91070e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 systemd[1]: Started Virtual Machine qemu-88-instance-0000004e.
Feb 28 05:14:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:29Z|00657|binding|INFO|Setting lport aa9724a7-fad1-4968-a1b0-0d8182007723 ovn-installed in OVS
Feb 28 05:14:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:29Z|00658|binding|INFO|Setting lport aa9724a7-fad1-4968-a1b0-0d8182007723 up in Southbound
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.343 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd6b508-c09f-4c9b-8f3d-f734b9151a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 systemd-udevd[308318]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:29 np0005634017 NetworkManager[49805]: <info>  [1772273669.3563] device (tapaa9724a7-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:29 np0005634017 NetworkManager[49805]: <info>  [1772273669.3570] device (tapaa9724a7-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27fcef33-4285-4b94-aaf1-4b2ee55993f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.386 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9596399a-b7a2-4457-a932-312abd9f062e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 NetworkManager[49805]: <info>  [1772273669.3934] manager: (tap2e5dcf5b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/299)
Feb 28 05:14:29 np0005634017 systemd-udevd[308321]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a40b4cb-8478-4ae2-acdf-3460b9eba159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.423 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[49f15fb0-b1f8-4065-a994-5545940d8f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.425 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c25f1aed-1ee4-4a33-80b7-063300757064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 NetworkManager[49805]: <info>  [1772273669.4473] device (tap2e5dcf5b-20): carrier: link connected
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.453 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eabf4463-c09f-48a8-82ce-dc84612241eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b01e82ef-3c84-4e17-954c-efb202777100]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308349, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.481 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4941b6f6-f665-40a1-b3cb-19b6e0b20465]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:a820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517673, 'tstamp': 517673}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308350, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.499 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.499 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.499 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 WARNING nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Processing event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 WARNING nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.502 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d796c6d-68a1-414c-a47b-8d163ef6265f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308351, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.503 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.513 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.5134647, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.514 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.517 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.524 243456 INFO nova.virt.libvirt.driver [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance spawned successfully.#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.524 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.542 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01d5bc47-c8b1-4e10-b32e-a205029adf7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.544 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.548 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.560 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.560 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.560 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.561 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.561 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.562 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.566 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.614 243456 INFO nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 8.60 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.614 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.611 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c7085e-66f2-4078-9c58-42f6a78bf21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:29 np0005634017 NetworkManager[49805]: <info>  [1772273669.6189] manager: (tap2e5dcf5b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Feb 28 05:14:29 np0005634017 kernel: tap2e5dcf5b-20: entered promiscuous mode
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.621 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:29Z|00659|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.631 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e468e314-cc3c-4939-b158-c4af7b935466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.632 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.pid.haproxy
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.633 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'env', 'PROCESS_TAG=haproxy-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.695 243456 INFO nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 10.67 seconds to build instance.#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.710 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.786 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.786109, b883c1a1-cf01-434d-8258-24ca193a2683 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.788 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.816 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.826 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.7885325, b883c1a1-cf01-434d-8258-24ca193a2683 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.827 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.848 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.852 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.872 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.904 243456 INFO nova.compute.manager [None req-e3852afc-64e9-4632-b4ec-e0e2b4c56088 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Pausing#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.905 243456 DEBUG nova.objects.instance [None req-e3852afc-64e9-4632-b4ec-e0e2b4c56088 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'flavor' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.931 243456 DEBUG nova.compute.manager [None req-e3852afc-64e9-4632-b4ec-e0e2b4c56088 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.932 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.931356, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.932 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.968 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.972 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:29 np0005634017 podman[308423]: 2026-02-28 10:14:29.994363676 +0000 UTC m=+0.057402975 container create 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 05:14:29 np0005634017 nova_compute[243452]: 2026-02-28 10:14:29.997 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Feb 28 05:14:30 np0005634017 systemd[1]: Started libpod-conmon-2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b.scope.
Feb 28 05:14:30 np0005634017 podman[308423]: 2026-02-28 10:14:29.96286217 +0000 UTC m=+0.025901489 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:30 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70ff2f5773caafa9c58f78253f13ae91fe67282a6374207ba8de2071a659ac6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:30 np0005634017 podman[308423]: 2026-02-28 10:14:30.098773752 +0000 UTC m=+0.161813071 container init 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:14:30 np0005634017 podman[308423]: 2026-02-28 10:14:30.1147305 +0000 UTC m=+0.177769839 container start 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:30 np0005634017 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [NOTICE]   (308441) : New worker (308443) forked
Feb 28 05:14:30 np0005634017 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [NOTICE]   (308441) : Loading success.
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:14:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:14:30 np0005634017 nova_compute[243452]: 2026-02-28 10:14:30.761 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273655.759299, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:30 np0005634017 nova_compute[243452]: 2026-02-28 10:14:30.761 243456 INFO nova.compute.manager [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:14:30 np0005634017 nova_compute[243452]: 2026-02-28 10:14:30.780 243456 DEBUG nova.compute.manager [None req-8a2faace-8cbf-44a5-83de-d20840d249c2 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:31 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.282 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.282 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.283 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.283 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.283 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Processing event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.285 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] No waiting events found dispatching network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.285 243456 WARNING nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received unexpected event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.286 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.291 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273671.2909417, b883c1a1-cf01-434d-8258-24ca193a2683 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.291 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.294 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.298 243456 INFO nova.virt.libvirt.driver [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance spawned successfully.#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.298 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.329 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.337 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.342 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.342 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.343 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.344 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.344 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.345 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.386 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.515 243456 INFO nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Took 9.45 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.516 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.615 243456 INFO nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Took 11.02 seconds to build instance.#033[00m
Feb 28 05:14:31 np0005634017 nova_compute[243452]: 2026-02-28 10:14:31.645 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 498 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 283 op/s
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.349 243456 INFO nova.compute.manager [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Unpausing#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.350 243456 DEBUG nova.objects.instance [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'flavor' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.376 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273672.375541, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.377 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:32 np0005634017 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.381 243456 DEBUG nova.virt.libvirt.guest [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.382 243456 DEBUG nova.compute.manager [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.394 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.398 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:32 np0005634017 nova_compute[243452]: 2026-02-28 10:14:32.426 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.117 243456 INFO nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Rebuilding instance#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.436 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'trusted_certs' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.463 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.506 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_requests' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.529 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.539 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.551 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.564 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.568 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:14:33 np0005634017 nova_compute[243452]: 2026-02-28 10:14:33.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 498 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.8 MiB/s wr, 290 op/s
Feb 28 05:14:34 np0005634017 NetworkManager[49805]: <info>  [1772273674.4021] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Feb 28 05:14:34 np0005634017 NetworkManager[49805]: <info>  [1772273674.4031] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Feb 28 05:14:34 np0005634017 nova_compute[243452]: 2026-02-28 10:14:34.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:34Z|00660|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 05:14:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:34Z|00661|binding|INFO|Releasing lport 92bcea78-9a21-4d44-99f4-fd3e41fc7e97 from this chassis (sb_readonly=0)
Feb 28 05:14:34 np0005634017 nova_compute[243452]: 2026-02-28 10:14:34.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:34Z|00662|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 05:14:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:34Z|00663|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:14:34 np0005634017 nova_compute[243452]: 2026-02-28 10:14:34.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.238 243456 DEBUG nova.compute.manager [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.239 243456 DEBUG nova.compute.manager [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing instance network info cache due to event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.241 243456 DEBUG oslo_concurrency.lockutils [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.241 243456 DEBUG oslo_concurrency.lockutils [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.242 243456 DEBUG nova.network.neutron [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.506 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.508 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.508 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.509 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.510 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.512 243456 INFO nova.compute.manager [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Terminating instance#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.515 243456 DEBUG nova.compute.manager [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:14:35 np0005634017 kernel: tapa2f66c0b-78 (unregistering): left promiscuous mode
Feb 28 05:14:35 np0005634017 NetworkManager[49805]: <info>  [1772273675.5767] device (tapa2f66c0b-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:35Z|00664|binding|INFO|Releasing lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 from this chassis (sb_readonly=0)
Feb 28 05:14:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:35Z|00665|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 down in Southbound
Feb 28 05:14:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:35Z|00666|binding|INFO|Removing iface tapa2f66c0b-78 ovn-installed in OVS
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.604 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.605 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 unbound from our chassis#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.607 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9794184f-3db3-4a40-b8a7-74b55c0c3308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:35 np0005634017 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 28 05:14:35 np0005634017 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004b.scope: Consumed 10.084s CPU time.
Feb 28 05:14:35 np0005634017 systemd-machined[209480]: Machine qemu-85-instance-0000004b terminated.
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.661 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[097f59b5-08a2-4c74-afd3-8f3a0e151e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.665 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccf2174-2f8f-48ff-a5e7-a111cce50a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.693 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[aa72a19e-ab3d-45ae-8e72-db2fcb622ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1baa0ec2-646e-4038-b41c-84ae54dbf19c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308465, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a28d6c5-ad39-40bb-9c2e-f46a582e57f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308466, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308466, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.727 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.736 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.741 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.755 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance destroyed successfully.#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.756 243456 DEBUG nova.objects.instance [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'resources' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.770 243456 DEBUG nova.virt.libvirt.vif [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:25Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.770 243456 DEBUG nova.network.os_vif_util [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.771 243456 DEBUG nova.network.os_vif_util [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.771 243456 DEBUG os_vif [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.774 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2f66c0b-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:35 np0005634017 nova_compute[243452]: 2026-02-28 10:14:35.780 243456 INFO os_vif [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78')#033[00m
Feb 28 05:14:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 498 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.2 MiB/s wr, 340 op/s
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.221 243456 INFO nova.virt.libvirt.driver [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deleting instance files /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec_del#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.222 243456 INFO nova.virt.libvirt.driver [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deletion of /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec_del complete#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.302 243456 INFO nova.compute.manager [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.303 243456 DEBUG oslo.service.loopingcall [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.303 243456 DEBUG nova.compute.manager [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.304 243456 DEBUG nova.network.neutron [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.721 243456 DEBUG nova.compute.manager [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.721 243456 DEBUG oslo_concurrency.lockutils [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.722 243456 DEBUG oslo_concurrency.lockutils [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.722 243456 DEBUG oslo_concurrency.lockutils [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.723 243456 DEBUG nova.compute.manager [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:36 np0005634017 nova_compute[243452]: 2026-02-28 10:14:36.723 243456 DEBUG nova.compute.manager [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:14:37 np0005634017 podman[308498]: 2026-02-28 10:14:37.144036501 +0000 UTC m=+0.076930904 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:37 np0005634017 podman[308497]: 2026-02-28 10:14:37.157655404 +0000 UTC m=+0.093286524 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.206 243456 DEBUG nova.network.neutron [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.224 243456 INFO nova.compute.manager [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 0.92 seconds to deallocate network for instance.#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.264 243456 DEBUG nova.network.neutron [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updated VIF entry in instance network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.265 243456 DEBUG nova.network.neutron [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.280 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.281 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.293 243456 DEBUG oslo_concurrency.lockutils [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.329 243456 DEBUG nova.compute.manager [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.331 243456 DEBUG nova.compute.manager [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing instance network info cache due to event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.331 243456 DEBUG oslo_concurrency.lockutils [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.331 243456 DEBUG oslo_concurrency.lockutils [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.332 243456 DEBUG nova.network.neutron [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing network info cache for port aa9724a7-fad1-4968-a1b0-0d8182007723 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:14:37 np0005634017 nova_compute[243452]: 2026-02-28 10:14:37.409 243456 DEBUG oslo_concurrency.processutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375755774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.019 243456 DEBUG oslo_concurrency.processutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.026 243456 DEBUG nova.compute.provider_tree [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.045 243456 DEBUG nova.scheduler.client.report [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.071 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.111 243456 INFO nova.scheduler.client.report [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Deleted allocations for instance dc2dbab8-312e-4130-8141-d848beeb6bec#033[00m
Feb 28 05:14:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 457 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 41 KiB/s wr, 312 op/s
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.193 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:38 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 28 05:14:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.805 243456 DEBUG nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.806 243456 DEBUG oslo_concurrency.lockutils [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.807 243456 DEBUG oslo_concurrency.lockutils [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.807 243456 DEBUG oslo_concurrency.lockutils [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.807 243456 DEBUG nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.808 243456 WARNING nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:14:38 np0005634017 nova_compute[243452]: 2026-02-28 10:14:38.808 243456 DEBUG nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-deleted-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.361 243456 DEBUG nova.network.neutron [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updated VIF entry in instance network info cache for port aa9724a7-fad1-4968-a1b0-0d8182007723. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.363 243456 DEBUG nova.network.neutron [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.383 243456 DEBUG oslo_concurrency.lockutils [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.434 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.434 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.435 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.435 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.435 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.436 243456 INFO nova.compute.manager [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Terminating instance#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.437 243456 DEBUG nova.compute.manager [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:14:39 np0005634017 kernel: tap26c42747-49 (unregistering): left promiscuous mode
Feb 28 05:14:39 np0005634017 NetworkManager[49805]: <info>  [1772273679.4757] device (tap26c42747-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:39Z|00667|binding|INFO|Releasing lport 26c42747-4919-4440-9b73-cf3516525108 from this chassis (sb_readonly=0)
Feb 28 05:14:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:39Z|00668|binding|INFO|Setting lport 26c42747-4919-4440-9b73-cf3516525108 down in Southbound
Feb 28 05:14:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:39Z|00669|binding|INFO|Removing iface tap26c42747-49 ovn-installed in OVS
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.497 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:43:e1 10.100.0.5'], port_security=['fa:16:3e:5f:43:e1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40f8f3fa-1f1c-440e-a640-5a223b1ca9b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=26c42747-4919-4440-9b73-cf3516525108) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.500 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 26c42747-4919-4440-9b73-cf3516525108 in datapath 621843b6-256a-4ce5-83c3-83b888738508 unbound from our chassis#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.502 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 621843b6-256a-4ce5-83c3-83b888738508, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.503 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02a2d2a4-192f-4fce-b166-9969edfd5e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.504 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 namespace which is not needed anymore#033[00m
Feb 28 05:14:39 np0005634017 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Feb 28 05:14:39 np0005634017 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Consumed 13.161s CPU time.
Feb 28 05:14:39 np0005634017 systemd-machined[209480]: Machine qemu-82-instance-0000004a terminated.
Feb 28 05:14:39 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : haproxy version is 2.8.14-c23fe91
Feb 28 05:14:39 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : path to executable is /usr/sbin/haproxy
Feb 28 05:14:39 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [WARNING]  (305887) : Exiting Master process...
Feb 28 05:14:39 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [WARNING]  (305887) : Exiting Master process...
Feb 28 05:14:39 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [ALERT]    (305887) : Current worker (305889) exited with code 143 (Terminated)
Feb 28 05:14:39 np0005634017 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [WARNING]  (305887) : All workers exited. Exiting... (0)
Feb 28 05:14:39 np0005634017 systemd[1]: libpod-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6.scope: Deactivated successfully.
Feb 28 05:14:39 np0005634017 podman[308582]: 2026-02-28 10:14:39.632706798 +0000 UTC m=+0.052217199 container died 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 05:14:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6-userdata-shm.mount: Deactivated successfully.
Feb 28 05:14:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7cfd78d30250744f4812bed5eda336963389aeb75643f7f79dbbc06f1fb2c979-merged.mount: Deactivated successfully.
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.673 243456 INFO nova.virt.libvirt.driver [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance destroyed successfully.#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.674 243456 DEBUG nova.objects.instance [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'resources' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:39 np0005634017 podman[308582]: 2026-02-28 10:14:39.682602051 +0000 UTC m=+0.102112372 container cleanup 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0)
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.693 243456 DEBUG nova.virt.libvirt.vif [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-191562024',display_name='tempest-ServerRescueNegativeTestJSON-server-191562024',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-191562024',id=74,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-55ob4e3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:32Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=40f8f3fa-1f1c-440e-a640-5a223b1ca9b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.693 243456 DEBUG nova.network.os_vif_util [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.696 243456 DEBUG nova.network.os_vif_util [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.697 243456 DEBUG os_vif [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.702 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c42747-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 systemd[1]: libpod-conmon-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6.scope: Deactivated successfully.
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.710 243456 INFO os_vif [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49')#033[00m
Feb 28 05:14:39 np0005634017 podman[308621]: 2026-02-28 10:14:39.749727989 +0000 UTC m=+0.047825406 container remove 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[13a3eefa-6166-4d00-8596-49b59916fe08]: (4, ('Sat Feb 28 10:14:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 (05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6)\n05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6\nSat Feb 28 10:14:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 (05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6)\n05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.761 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b909289c-fb28-4f2c-902d-6a59753e8cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.763 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 kernel: tap621843b6-20: left promiscuous mode
Feb 28 05:14:39 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.777 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[56370350-6a33-4377-9493-b67b1a566243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac798feb-2880-44be-bc76-91947c91ae65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.790 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb2799e-b9ae-4053-96c6-2a11f5fc0175]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.802 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c00c7ba-ce6f-4e7b-b07e-6d7a544ec32f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514750, 'reachable_time': 21671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308654, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:39 np0005634017 systemd[1]: run-netns-ovnmeta\x2d621843b6\x2d256a\x2d4ce5\x2d83c3\x2d83b888738508.mount: Deactivated successfully.
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.807 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:14:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.807 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[61a63d4e-131f-457f-a8be-22fab07c6863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:39.999 243456 INFO nova.virt.libvirt.driver [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deleting instance files /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_del#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.002 243456 INFO nova.virt.libvirt.driver [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deletion of /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_del complete#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.076 243456 INFO nova.compute.manager [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.077 243456 DEBUG oslo.service.loopingcall [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.077 243456 DEBUG nova.compute.manager [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.077 243456 DEBUG nova.network.neutron [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 423 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 931 KiB/s wr, 302 op/s
Feb 28 05:14:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:40Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 05:14:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:40Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.303 243456 DEBUG nova.compute.manager [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-unplugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.303 243456 DEBUG oslo_concurrency.lockutils [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG oslo_concurrency.lockutils [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG oslo_concurrency.lockutils [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG nova.compute.manager [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] No waiting events found dispatching network-vif-unplugged-26c42747-4919-4440-9b73-cf3516525108 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:40 np0005634017 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG nova.compute.manager [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-unplugged-26c42747-4919-4440-9b73-cf3516525108 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023727217339721053 of space, bias 1.0, pg target 0.7118165201916316 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493456532932348 of space, bias 1.0, pg target 0.7480369598797043 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.871523603312771e-07 of space, bias 4.0, pg target 0.0009445828323975324 quantized to 16 (current 16)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:14:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:14:41 np0005634017 nova_compute[243452]: 2026-02-28 10:14:41.356 243456 DEBUG nova.network.neutron [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:41 np0005634017 nova_compute[243452]: 2026-02-28 10:14:41.382 243456 INFO nova.compute.manager [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 1.30 seconds to deallocate network for instance.#033[00m
Feb 28 05:14:41 np0005634017 nova_compute[243452]: 2026-02-28 10:14:41.416 243456 DEBUG nova.compute.manager [req-6b23a7bb-4ecb-4e65-b051-78c751131130 req-97ced286-2112-4d58-a77c-9312e5b4bad0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-deleted-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:41 np0005634017 nova_compute[243452]: 2026-02-28 10:14:41.436 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:41 np0005634017 nova_compute[243452]: 2026-02-28 10:14:41.436 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:41 np0005634017 nova_compute[243452]: 2026-02-28 10:14:41.520 243456 DEBUG oslo_concurrency.processutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089390432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.117 243456 DEBUG oslo_concurrency.processutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.125 243456 DEBUG nova.compute.provider_tree [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.147 243456 DEBUG nova.scheduler.client.report [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:14:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 373 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 274 op/s
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.180 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.209 243456 INFO nova.scheduler.client.report [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Deleted allocations for instance 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8#033[00m
Feb 28 05:14:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:42Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:90:03 10.100.0.11
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.294 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:42Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:90:03 10.100.0.11
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.382 243456 DEBUG nova.compute.manager [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.383 243456 DEBUG oslo_concurrency.lockutils [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.384 243456 DEBUG oslo_concurrency.lockutils [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.384 243456 DEBUG oslo_concurrency.lockutils [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.385 243456 DEBUG nova.compute.manager [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] No waiting events found dispatching network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:42 np0005634017 nova_compute[243452]: 2026-02-28 10:14:42.385 243456 WARNING nova.compute.manager [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received unexpected event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:14:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:43 np0005634017 nova_compute[243452]: 2026-02-28 10:14:43.624 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:14:43 np0005634017 nova_compute[243452]: 2026-02-28 10:14:43.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 386 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 283 op/s
Feb 28 05:14:44 np0005634017 nova_compute[243452]: 2026-02-28 10:14:44.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:14:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959705387' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:14:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:14:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959705387' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:14:45 np0005634017 kernel: tap0226697b-95 (unregistering): left promiscuous mode
Feb 28 05:14:45 np0005634017 NetworkManager[49805]: <info>  [1772273685.9790] device (tap0226697b-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:45Z|00670|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=0)
Feb 28 05:14:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:45Z|00671|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down in Southbound
Feb 28 05:14:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:45Z|00672|binding|INFO|Removing iface tap0226697b-95 ovn-installed in OVS
Feb 28 05:14:45 np0005634017 nova_compute[243452]: 2026-02-28 10:14:45.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.989 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.991 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:14:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.993 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebde475-68e0-46b0-84cf-4dcd08cc7016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.994 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:46 np0005634017 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Feb 28 05:14:46 np0005634017 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004c.scope: Consumed 12.479s CPU time.
Feb 28 05:14:46 np0005634017 systemd-machined[209480]: Machine qemu-86-instance-0000004c terminated.
Feb 28 05:14:46 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : haproxy version is 2.8.14-c23fe91
Feb 28 05:14:46 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : path to executable is /usr/sbin/haproxy
Feb 28 05:14:46 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [WARNING]  (308001) : Exiting Master process...
Feb 28 05:14:46 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [WARNING]  (308001) : Exiting Master process...
Feb 28 05:14:46 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [ALERT]    (308001) : Current worker (308005) exited with code 143 (Terminated)
Feb 28 05:14:46 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [WARNING]  (308001) : All workers exited. Exiting... (0)
Feb 28 05:14:46 np0005634017 systemd[1]: libpod-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4.scope: Deactivated successfully.
Feb 28 05:14:46 np0005634017 podman[308702]: 2026-02-28 10:14:46.126024696 +0000 UTC m=+0.044560604 container died 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:14:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4-userdata-shm.mount: Deactivated successfully.
Feb 28 05:14:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-750aff55c3900492540f27eedaa3917fbcf26f3c02e0a8efc1a0b07485dba889-merged.mount: Deactivated successfully.
Feb 28 05:14:46 np0005634017 podman[308702]: 2026-02-28 10:14:46.167460391 +0000 UTC m=+0.085996299 container cleanup 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 05:14:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 376 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 326 op/s
Feb 28 05:14:46 np0005634017 systemd[1]: libpod-conmon-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4.scope: Deactivated successfully.
Feb 28 05:14:46 np0005634017 podman[308730]: 2026-02-28 10:14:46.235281908 +0000 UTC m=+0.053741342 container remove 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c6c114-b361-4bb3-90cc-ef1c8d729dd5]: (4, ('Sat Feb 28 10:14:46 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4)\n80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4\nSat Feb 28 10:14:46 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4)\n80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.244 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc20fcfa-fc96-4d57-8858-f19f94a5b3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.245 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:46 np0005634017 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa83d7eb-8d04-4765-93b8-c8e854d0f294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[446bc177-cb65-4df5-bed0-d41aaba0b306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.286 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10049df3-937e-4f3f-a488-b3042401b2f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b7e053-ee1c-4791-8c35-45c040119c20]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517321, 'reachable_time': 37761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308756, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.304 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:14:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.304 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[15c33980-a58f-40fd-bdca-9f4b36cf57b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:46 np0005634017 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.342 243456 DEBUG nova.compute.manager [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.342 243456 DEBUG oslo_concurrency.lockutils [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.343 243456 DEBUG oslo_concurrency.lockutils [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.343 243456 DEBUG oslo_concurrency.lockutils [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.343 243456 DEBUG nova.compute.manager [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.344 243456 WARNING nova.compute.manager [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state rebuilding.#033[00m
Feb 28 05:14:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:46Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:71:d6 10.100.0.6
Feb 28 05:14:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:46Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:71:d6 10.100.0.6
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.638 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.646 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance destroyed successfully.#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.653 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance destroyed successfully.#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.654 243456 DEBUG nova.virt.libvirt.vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-m
ember'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:32Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.654 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.655 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.655 243456 DEBUG os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0226697b-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.666 243456 INFO os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.951 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting instance files /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del#033[00m
Feb 28 05:14:46 np0005634017 nova_compute[243452]: 2026-02-28 10:14:46.952 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deletion of /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del complete#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.147 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.148 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating image(s)#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.178 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.203 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.230 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.234 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.309 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.310 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.311 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.311 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.339 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.344 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:14:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.617 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.678 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.743664699 +0000 UTC m=+0.046294152 container create 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.765 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.766 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Ensure instance console log exists: /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.766 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.767 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.767 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.769 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start _get_guest_xml network_info=[{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.773 243456 WARNING nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.780 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.781 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:14:47 np0005634017 systemd[1]: Started libpod-conmon-37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99.scope.
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.784 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.784 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.785 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.785 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.785 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.788 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:47 np0005634017 nova_compute[243452]: 2026-02-28 10:14:47.811 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.722847104 +0000 UTC m=+0.025476607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:14:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.840737329 +0000 UTC m=+0.143366882 container init 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.84859286 +0000 UTC m=+0.151222353 container start 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.852738706 +0000 UTC m=+0.155368199 container attach 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 05:14:47 np0005634017 heuristic_goodall[309102]: 167 167
Feb 28 05:14:47 np0005634017 systemd[1]: libpod-37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99.scope: Deactivated successfully.
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.858262232 +0000 UTC m=+0.160891685 container died 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-11989493354dfaeae9eb2e78ac84a83a1984133f3db8dc99d237b4052f988411-merged.mount: Deactivated successfully.
Feb 28 05:14:47 np0005634017 podman[309057]: 2026-02-28 10:14:47.905765967 +0000 UTC m=+0.208395440 container remove 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:14:47 np0005634017 systemd[1]: libpod-conmon-37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99.scope: Deactivated successfully.
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.106562903 +0000 UTC m=+0.052215080 container create 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:14:48 np0005634017 systemd[1]: Started libpod-conmon-6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32.scope.
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.082408614 +0000 UTC m=+0.028060780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:14:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 371 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 6.4 MiB/s wr, 293 op/s
Feb 28 05:14:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.223944884 +0000 UTC m=+0.169597060 container init 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.232881195 +0000 UTC m=+0.178533331 container start 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.237381312 +0000 UTC m=+0.183033498 container attach 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:14:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556272107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.439 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.466 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.469 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.497 243456 DEBUG nova.compute.manager [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.498 243456 DEBUG oslo_concurrency.lockutils [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.498 243456 DEBUG oslo_concurrency.lockutils [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.498 243456 DEBUG oslo_concurrency.lockutils [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.499 243456 DEBUG nova.compute.manager [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.499 243456 WARNING nova.compute.manager [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Feb 28 05:14:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:48 np0005634017 gallant_wilbur[309162]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:14:48 np0005634017 gallant_wilbur[309162]: --> All data devices are unavailable
Feb 28 05:14:48 np0005634017 systemd[1]: libpod-6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32.scope: Deactivated successfully.
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.704498427 +0000 UTC m=+0.650150603 container died 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:14:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13-merged.mount: Deactivated successfully.
Feb 28 05:14:48 np0005634017 podman[309145]: 2026-02-28 10:14:48.75048291 +0000 UTC m=+0.696135066 container remove 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:48 np0005634017 nova_compute[243452]: 2026-02-28 10:14:48.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:48 np0005634017 systemd[1]: libpod-conmon-6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32.scope: Deactivated successfully.
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.037 243456 INFO nova.compute.manager [None req-acfc8abe-19d5-4309-aa92-5a4058c4c85f 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Get console output#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.045 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:14:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:14:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046892496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.072 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.074 243456 DEBUG nova.virt.libvirt.vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest
-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:47Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.074 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.075 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.078 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <uuid>ba33446e-fcd5-454c-bc8c-79a367002d57</uuid>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <name>instance-0000004c</name>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-877654664</nova:name>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:14:47</nova:creationTime>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <nova:port uuid="0226697b-95b2-4303-aa60-b98eb0bb4cd9">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <entry name="serial">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <entry name="uuid">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:70:1d:4f"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <target dev="tap0226697b-95"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log" append="off"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:14:49 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:14:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:14:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:14:49 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.078 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Preparing to wait for external event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.079 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.079 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.079 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.080 243456 DEBUG nova.virt.libvirt.vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest
-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:47Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.080 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.080 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.081 243456 DEBUG os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.082 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.082 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.086 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0226697b-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0226697b-95, col_values=(('external_ids', {'iface-id': '0226697b-95b2-4303-aa60-b98eb0bb4cd9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:1d:4f', 'vm-uuid': 'ba33446e-fcd5-454c-bc8c-79a367002d57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:49 np0005634017 NetworkManager[49805]: <info>  [1772273689.0910] manager: (tap0226697b-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.100 243456 INFO os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:70:1d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Using config drive#033[00m
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.193799895 +0000 UTC m=+0.051141549 container create 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.207 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.234 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'ec2_ids' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:14:49 np0005634017 systemd[1]: Started libpod-conmon-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope.
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.175875631 +0000 UTC m=+0.033217365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:14:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.275 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'keypairs' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.287376906 +0000 UTC m=+0.144718650 container init 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.293966751 +0000 UTC m=+0.151308405 container start 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:14:49 np0005634017 trusting_mendel[309336]: 167 167
Feb 28 05:14:49 np0005634017 systemd[1]: libpod-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope: Deactivated successfully.
Feb 28 05:14:49 np0005634017 conmon[309336]: conmon 349059f9ebc88309cf7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope/container/memory.events
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.299876108 +0000 UTC m=+0.157217862 container attach 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.301235726 +0000 UTC m=+0.158577410 container died 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:14:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4834dd39f65511192abfcbcaba9e8868b77c1e87c3aa52baf0094eb645e766fa-merged.mount: Deactivated successfully.
Feb 28 05:14:49 np0005634017 podman[309302]: 2026-02-28 10:14:49.343077172 +0000 UTC m=+0.200418826 container remove 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:14:49 np0005634017 systemd[1]: libpod-conmon-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope: Deactivated successfully.
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.416 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.418 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.419 243456 INFO nova.compute.manager [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Rebooting instance
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.438 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.439 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:14:49 np0005634017 nova_compute[243452]: 2026-02-28 10:14:49.440 243456 DEBUG nova.network.neutron [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:14:49 np0005634017 podman[309360]: 2026-02-28 10:14:49.498940415 +0000 UTC m=+0.047271040 container create b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:49 np0005634017 systemd[1]: Started libpod-conmon-b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61.scope.
Feb 28 05:14:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:49 np0005634017 podman[309360]: 2026-02-28 10:14:49.479264242 +0000 UTC m=+0.027594887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:14:49 np0005634017 podman[309360]: 2026-02-28 10:14:49.592510516 +0000 UTC m=+0.140841171 container init b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:14:49 np0005634017 podman[309360]: 2026-02-28 10:14:49.60724978 +0000 UTC m=+0.155580405 container start b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:14:49 np0005634017 podman[309360]: 2026-02-28 10:14:49.611130839 +0000 UTC m=+0.159461484 container attach b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]: {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:    "0": [
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:        {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "devices": [
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "/dev/loop3"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            ],
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_name": "ceph_lv0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_size": "21470642176",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "name": "ceph_lv0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "tags": {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cluster_name": "ceph",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.crush_device_class": "",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.encrypted": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.objectstore": "bluestore",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osd_id": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.type": "block",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.vdo": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.with_tpm": "0"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            },
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "type": "block",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "vg_name": "ceph_vg0"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:        }
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:    ],
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:    "1": [
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:        {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "devices": [
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "/dev/loop4"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            ],
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_name": "ceph_lv1",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_size": "21470642176",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "name": "ceph_lv1",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "tags": {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cluster_name": "ceph",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.crush_device_class": "",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.encrypted": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.objectstore": "bluestore",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osd_id": "1",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.type": "block",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.vdo": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.with_tpm": "0"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            },
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "type": "block",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "vg_name": "ceph_vg1"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:        }
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:    ],
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:    "2": [
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:        {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "devices": [
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "/dev/loop5"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            ],
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_name": "ceph_lv2",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_size": "21470642176",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "name": "ceph_lv2",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "tags": {
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.cluster_name": "ceph",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.crush_device_class": "",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.encrypted": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.objectstore": "bluestore",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osd_id": "2",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.type": "block",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.vdo": "0",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:                "ceph.with_tpm": "0"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            },
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "type": "block",
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:            "vg_name": "ceph_vg2"
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:        }
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]:    ]
Feb 28 05:14:49 np0005634017 romantic_margulis[309376]: }
Feb 28 05:14:49 np0005634017 systemd[1]: libpod-b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61.scope: Deactivated successfully.
Feb 28 05:14:49 np0005634017 podman[309385]: 2026-02-28 10:14:49.933653078 +0000 UTC m=+0.031864527 container died b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:14:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4-merged.mount: Deactivated successfully.
Feb 28 05:14:49 np0005634017 podman[309385]: 2026-02-28 10:14:49.989299263 +0000 UTC m=+0.087510672 container remove b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:14:49 np0005634017 systemd[1]: libpod-conmon-b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61.scope: Deactivated successfully.
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.002 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating config drive at /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.009 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5krtzcb1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00673|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00674|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.077 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.160 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5krtzcb1" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 364 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 906 KiB/s rd, 7.5 MiB/s wr, 286 op/s
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.200 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.205 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.377 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.378 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting local config drive /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config because it was imported into RBD.
Feb 28 05:14:50 np0005634017 kernel: tap0226697b-95: entered promiscuous mode
Feb 28 05:14:50 np0005634017 NetworkManager[49805]: <info>  [1772273690.4401] manager: (tap0226697b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00675|binding|INFO|Claiming lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 for this chassis.
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00676|binding|INFO|0226697b-95b2-4303-aa60-b98eb0bb4cd9: Claiming fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.451 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.452 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008#033[00m
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00677|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 ovn-installed in OVS
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00678|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 up in Southbound
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.466 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7d8d5f-7b35-4d1b-9e75-1accfb3ae8e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.466 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.468 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6e024bbb-6c45-4d9d-8dcb-85b2a91e8bf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[367e365b-dfbf-484c-bc8f-7a803d7c081d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 systemd-machined[209480]: New machine qemu-89-instance-0000004c.
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.485 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[44767f32-81b3-42c7-8e55-c5d1536c2fc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 systemd-udevd[309519]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:50 np0005634017 systemd[1]: Started Virtual Machine qemu-89-instance-0000004c.
Feb 28 05:14:50 np0005634017 NetworkManager[49805]: <info>  [1772273690.5057] device (tap0226697b-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:50 np0005634017 NetworkManager[49805]: <info>  [1772273690.5064] device (tap0226697b-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85423711-3bdf-4e80-a1c2-645a7c5930c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.537 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[92b894a3-fef8-4797-bdd6-57d9be38b28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[007598aa-78a9-40e9-b37e-75ee975583bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 NetworkManager[49805]: <info>  [1772273690.5433] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.560882755 +0000 UTC m=+0.071521672 container create 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.586 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9a3c51-c577-4ab1-91d7-aeb7406127d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.590 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8dbb4dfb-2732-4ddf-bc64-7101d4c324c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 systemd[1]: Started libpod-conmon-7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7.scope.
Feb 28 05:14:50 np0005634017 NetworkManager[49805]: <info>  [1772273690.6140] device (tap77a5b13a-e0): carrier: link connected
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.619 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c6be2018-43a8-4b8a-b2db-073f5ebfb3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.526481528 +0000 UTC m=+0.037120475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.639 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a98ac6dc-f7e6-43f6-a2fa-749f250c4749]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519790, 'reachable_time': 36879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309566, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4348b0e8-b6dc-42d4-8898-cd4ad96ebced]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519790, 'tstamp': 519790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309568, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.658644054 +0000 UTC m=+0.169283011 container init 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.669328244 +0000 UTC m=+0.179967151 container start 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.672802132 +0000 UTC m=+0.183441159 container attach 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:14:50 np0005634017 fervent_kirch[309564]: 167 167
Feb 28 05:14:50 np0005634017 systemd[1]: libpod-7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7.scope: Deactivated successfully.
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.677101463 +0000 UTC m=+0.187740380 container died 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a43a2ea-e2db-4b66-8dfc-479f4ab6fb6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519790, 'reachable_time': 36879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309570, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-cd30f041bbfcd0e9b012b045b00af521ab95ced666631c3bae0e2479b628c3ec-merged.mount: Deactivated successfully.
Feb 28 05:14:50 np0005634017 podman[309517]: 2026-02-28 10:14:50.716739027 +0000 UTC m=+0.227377934 container remove 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.726 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e50e3baf-c6da-442d-8f1a-db1c12ece1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 systemd[1]: libpod-conmon-7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7.scope: Deactivated successfully.
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.749 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273675.747812, dc2dbab8-312e-4130-8141-d848beeb6bec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.749 243456 INFO nova.compute.manager [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93f31cbe-5d20-430a-9882-3e2644d8b0b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.790 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.790 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.790 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:50 np0005634017 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 05:14:50 np0005634017 NetworkManager[49805]: <info>  [1772273690.7931] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.794 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.792 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:50Z|00679|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.797 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.799 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[830bac77-fa11-4616-b603-cd0be972da96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.799 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.800 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.802 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.805 243456 DEBUG nova.compute.manager [None req-49c65651-53bc-4764-b26c-6cc8f9f5dda0 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.875 243456 DEBUG nova.compute.manager [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG oslo_concurrency.lockutils [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG oslo_concurrency.lockutils [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG oslo_concurrency.lockutils [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG nova.compute.manager [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Processing event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:14:50 np0005634017 podman[309639]: 2026-02-28 10:14:50.895427942 +0000 UTC m=+0.049034840 container create 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.918 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.919 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ba33446e-fcd5-454c-bc8c-79a367002d57 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.920 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273690.9190905, ba33446e-fcd5-454c-bc8c-79a367002d57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.920 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.925 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.930 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance spawned successfully.#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.930 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:14:50 np0005634017 systemd[1]: Started libpod-conmon-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope.
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.958 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:50 np0005634017 podman[309639]: 2026-02-28 10:14:50.873548797 +0000 UTC m=+0.027155675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.971 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.972 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.972 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.973 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.973 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.974 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:14:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:50 np0005634017 nova_compute[243452]: 2026-02-28 10:14:50.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:50 np0005634017 podman[309639]: 2026-02-28 10:14:50.997777139 +0000 UTC m=+0.151384047 container init 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:14:51 np0005634017 podman[309639]: 2026-02-28 10:14:51.004630361 +0000 UTC m=+0.158237249 container start 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:14:51 np0005634017 podman[309639]: 2026-02-28 10:14:51.008986634 +0000 UTC m=+0.162593512 container attach 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.034 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.035 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273690.9196408, ba33446e-fcd5-454c-bc8c-79a367002d57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.035 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.063 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.067 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273690.9231927, ba33446e-fcd5-454c-bc8c-79a367002d57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.068 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.072 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.086 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.089 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.119 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.144 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.144 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.144 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:14:51 np0005634017 podman[309681]: 2026-02-28 10:14:51.190283112 +0000 UTC m=+0.055995026 container create e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.207 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:51 np0005634017 systemd[1]: Started libpod-conmon-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927.scope.
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.251 243456 DEBUG nova.network.neutron [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:14:51 np0005634017 podman[309681]: 2026-02-28 10:14:51.161514803 +0000 UTC m=+0.027226707 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.271 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:14:51 np0005634017 nova_compute[243452]: 2026-02-28 10:14:51.272 243456 DEBUG nova.compute.manager [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8037661ea88197897d9b5ce5e2095358c49702921dd5e2e5c0833d172654169/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:51 np0005634017 podman[309681]: 2026-02-28 10:14:51.285341415 +0000 UTC m=+0.151053339 container init e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 05:14:51 np0005634017 podman[309681]: 2026-02-28 10:14:51.292888027 +0000 UTC m=+0.158599931 container start e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:14:51 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : New worker (309712) forked
Feb 28 05:14:51 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : Loading success.
Feb 28 05:14:51 np0005634017 lvm[309782]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:14:51 np0005634017 lvm[309782]: VG ceph_vg0 finished
Feb 28 05:14:51 np0005634017 lvm[309784]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:14:51 np0005634017 lvm[309784]: VG ceph_vg1 finished
Feb 28 05:14:51 np0005634017 lvm[309785]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:14:51 np0005634017 lvm[309785]: VG ceph_vg2 finished
Feb 28 05:14:51 np0005634017 suspicious_satoshi[309656]: {}
Feb 28 05:14:51 np0005634017 systemd[1]: libpod-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope: Deactivated successfully.
Feb 28 05:14:51 np0005634017 systemd[1]: libpod-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope: Consumed 1.185s CPU time.
Feb 28 05:14:51 np0005634017 podman[309639]: 2026-02-28 10:14:51.811402666 +0000 UTC m=+0.965009544 container died 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:14:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130-merged.mount: Deactivated successfully.
Feb 28 05:14:51 np0005634017 podman[309639]: 2026-02-28 10:14:51.852819261 +0000 UTC m=+1.006426139 container remove 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:14:51 np0005634017 systemd[1]: libpod-conmon-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope: Deactivated successfully.
Feb 28 05:14:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:14:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:14:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:14:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:14:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 358 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 7.3 MiB/s wr, 268 op/s
Feb 28 05:14:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:14:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:14:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:53 np0005634017 kernel: tap6b5acb8c-5d (unregistering): left promiscuous mode
Feb 28 05:14:53 np0005634017 NetworkManager[49805]: <info>  [1772273693.6354] device (tap6b5acb8c-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.641 243456 DEBUG nova.compute.manager [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG oslo_concurrency.lockutils [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG oslo_concurrency.lockutils [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG oslo_concurrency.lockutils [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG nova.compute.manager [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 WARNING nova.compute.manager [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:14:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:53Z|00680|binding|INFO|Releasing lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe from this chassis (sb_readonly=0)
Feb 28 05:14:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:53Z|00681|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe down in Southbound
Feb 28 05:14:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:53Z|00682|binding|INFO|Removing iface tap6b5acb8c-5d ovn-installed in OVS
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.665 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.667 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.668 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba46ae2b-b0c0-4105-941e-c7f32904f76d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.670 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace which is not needed anymore#033[00m
Feb 28 05:14:53 np0005634017 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Feb 28 05:14:53 np0005634017 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004d.scope: Consumed 13.378s CPU time.
Feb 28 05:14:53 np0005634017 systemd-machined[209480]: Machine qemu-87-instance-0000004d terminated.
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : haproxy version is 2.8.14-c23fe91
Feb 28 05:14:53 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : path to executable is /usr/sbin/haproxy
Feb 28 05:14:53 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [ALERT]    (308231) : Current worker (308233) exited with code 143 (Terminated)
Feb 28 05:14:53 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [WARNING]  (308231) : All workers exited. Exiting... (0)
Feb 28 05:14:53 np0005634017 systemd[1]: libpod-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9.scope: Deactivated successfully.
Feb 28 05:14:53 np0005634017 podman[309846]: 2026-02-28 10:14:53.810631212 +0000 UTC m=+0.050677746 container died 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:14:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9-userdata-shm.mount: Deactivated successfully.
Feb 28 05:14:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ea0b749c5a528cfce51b141fd23aad936756a87d85288db6d59ade46c812befa-merged.mount: Deactivated successfully.
Feb 28 05:14:53 np0005634017 podman[309846]: 2026-02-28 10:14:53.845105091 +0000 UTC m=+0.085151665 container cleanup 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:53 np0005634017 systemd[1]: libpod-conmon-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9.scope: Deactivated successfully.
Feb 28 05:14:53 np0005634017 NetworkManager[49805]: <info>  [1772273693.8674] manager: (tap6b5acb8c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Feb 28 05:14:53 np0005634017 kernel: tap6b5acb8c-5d: entered promiscuous mode
Feb 28 05:14:53 np0005634017 kernel: tap6b5acb8c-5d (unregistering): left promiscuous mode
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.871 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:53Z|00683|binding|INFO|Claiming lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe for this chassis.
Feb 28 05:14:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:53Z|00684|binding|INFO|6b5acb8c-5d09-42b0-9c1d-b51be18712fe: Claiming fa:16:3e:88:90:03 10.100.0.11
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.886 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:53Z|00685|binding|INFO|Releasing lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe from this chassis (sb_readonly=0)
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.912 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:53 np0005634017 podman[309875]: 2026-02-28 10:14:53.928310361 +0000 UTC m=+0.054710030 container remove 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06d8e051-8a28-45b4-8303-7992688d1a0b]: (4, ('Sat Feb 28 10:14:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9)\n4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9\nSat Feb 28 10:14:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9)\n4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0498fd7b-7f8d-4268-8c60-9fa82d840bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.937 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 kernel: tap269fae56-40: left promiscuous mode
Feb 28 05:14:53 np0005634017 nova_compute[243452]: 2026-02-28 10:14:53.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.951 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08c0bbd4-78bf-489f-8d23-fa1dcb07f816]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd1b429-f19f-45b9-bcc5-009705470af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.970 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c867bab-8729-4181-9255-3ba0db075cf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.988 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6aad80e0-b49e-49da-a27d-2f1b5c9d3dcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517446, 'reachable_time': 22305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309898, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 systemd[1]: run-netns-ovnmeta\x2d269fae56\x2d42c3\x2d478e\x2d88d5\x2d36164c0a6ae4.mount: Deactivated successfully.
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.993 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.993 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c19c5da7-fbf8-433b-ac1e-f283cf51bad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.994 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.996 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9455ac56-8d50-4e33-a75f-a7fb642d284a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.997 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis#033[00m
Feb 28 05:14:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.999 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.999 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6144ff33-0e9e-4caf-933a-3ab6583b5763]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 358 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 6.8 MiB/s wr, 233 op/s
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.308 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.309 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.310 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.310 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.310 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.311 243456 INFO nova.compute.manager [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Terminating instance#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.312 243456 DEBUG nova.compute.manager [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:14:54 np0005634017 kernel: tap0226697b-95 (unregistering): left promiscuous mode
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.3525] device (tap0226697b-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00686|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=0)
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00687|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down in Southbound
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00688|binding|INFO|Removing iface tap0226697b-95 ovn-installed in OVS
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.372 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.374 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.376 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.377 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a58ea6d-0c05-457d-80ad-2189d6625caa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.378 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.400 243456 INFO nova.virt.libvirt.driver [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance shutdown successfully.#033[00m
Feb 28 05:14:54 np0005634017 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Feb 28 05:14:54 np0005634017 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Consumed 3.783s CPU time.
Feb 28 05:14:54 np0005634017 systemd-machined[209480]: Machine qemu-89-instance-0000004c terminated.
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.4542] manager: (tap6b5acb8c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Feb 28 05:14:54 np0005634017 systemd-udevd[309826]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:14:54 np0005634017 kernel: tap6b5acb8c-5d: entered promiscuous mode
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00689|binding|INFO|Claiming lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe for this chassis.
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00690|binding|INFO|6b5acb8c-5d09-42b0-9c1d-b51be18712fe: Claiming fa:16:3e:88:90:03 10.100.0.11
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.472 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '5', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.4759] device (tap6b5acb8c-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00691|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe ovn-installed in OVS
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.4775] device (tap6b5acb8c-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00692|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe up in Southbound
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.480 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 systemd-machined[209480]: New machine qemu-90-instance-0000004d.
Feb 28 05:14:54 np0005634017 systemd[1]: Started Virtual Machine qemu-90-instance-0000004d.
Feb 28 05:14:54 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : haproxy version is 2.8.14-c23fe91
Feb 28 05:14:54 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : path to executable is /usr/sbin/haproxy
Feb 28 05:14:54 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [WARNING]  (309710) : Exiting Master process...
Feb 28 05:14:54 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [ALERT]    (309710) : Current worker (309712) exited with code 143 (Terminated)
Feb 28 05:14:54 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [WARNING]  (309710) : All workers exited. Exiting... (0)
Feb 28 05:14:54 np0005634017 systemd[1]: libpod-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927.scope: Deactivated successfully.
Feb 28 05:14:54 np0005634017 podman[309931]: 2026-02-28 10:14:54.524926736 +0000 UTC m=+0.057490677 container died e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:54 np0005634017 kernel: tap0226697b-95: entered promiscuous mode
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.5376] manager: (tap0226697b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00693|binding|INFO|Claiming lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 for this chassis.
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00694|binding|INFO|0226697b-95b2-4303-aa60-b98eb0bb4cd9: Claiming fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 05:14:54 np0005634017 kernel: tap0226697b-95 (unregistering): left promiscuous mode
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00695|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 ovn-installed in OVS
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00696|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 up in Southbound
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.555 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00697|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=1)
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00698|if_status|INFO|Dropped 1 log messages in last 782 seconds (most recently, 782 seconds ago) due to excessive rate
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00699|if_status|INFO|Not setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down as sb is readonly
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00700|binding|INFO|Removing iface tap0226697b-95 ovn-installed in OVS
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00701|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=0)
Feb 28 05:14:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:54Z|00702|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down in Southbound
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.570 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:14:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927-userdata-shm.mount: Deactivated successfully.
Feb 28 05:14:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a8037661ea88197897d9b5ce5e2095358c49702921dd5e2e5c0833d172654169-merged.mount: Deactivated successfully.
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.587 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance destroyed successfully.#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.588 243456 DEBUG nova.objects.instance [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:14:54 np0005634017 podman[309931]: 2026-02-28 10:14:54.593672298 +0000 UTC m=+0.126236249 container cleanup e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.603 243456 DEBUG nova.virt.libvirt.vif [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:51Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.604 243456 DEBUG nova.network.os_vif_util [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:14:54 np0005634017 systemd[1]: libpod-conmon-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927.scope: Deactivated successfully.
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.605 243456 DEBUG nova.network.os_vif_util [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.606 243456 DEBUG os_vif [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.608 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0226697b-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.614 243456 INFO os_vif [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.671 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273679.6692448, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.671 243456 INFO nova.compute.manager [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:14:54 np0005634017 podman[309980]: 2026-02-28 10:14:54.677795424 +0000 UTC m=+0.054141993 container remove e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.683 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b73eb1cc-f3db-4b2b-9865-c2f617bdd9b8]: (4, ('Sat Feb 28 10:14:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927)\ne3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927\nSat Feb 28 10:14:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927)\ne3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.686 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7cc4e6-fbab-41f6-a2fa-f0d57e405400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.687 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.693 243456 DEBUG nova.compute.manager [None req-9661c424-9811-4e9a-9512-4707e4c8023d - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.703 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b03a1e-ec21-4a6e-a616-aa55fb3d9ace]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.720 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[191571a3-b253-43e6-b454-645019824929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.722 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84c242c5-2dd6-4f8f-b9df-863260db2575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.738 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[82e2ced1-4547-4b00-b202-b0c1354cd3d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519781, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310013, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.740 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.741 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[092e7a0c-e395-4a63-887b-40335d9e04b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.742 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269fae56-42c3-478e-88d5-36164c0a6ae4#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e336ef5a-d0dd-443e-b8d2-914a9a73e565]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.757 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap269fae56-41 in ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.759 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap269fae56-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d2b8dd-5644-4c6f-92e6-f685b81bcf69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.761 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9efa259c-0523-4fb4-a007-3d1dc2a692c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.770 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3f75d81d-17d3-4e53-8bbd-91c6f07bf6b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.795 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f652b458-cad5-43c3-b2ef-5c09c2d4a041]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.819 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9ecf16-41ef-47ac-8bb1-241d5c329cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.826 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a339077-f513-4a3b-b5b0-85e48421f4c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.8275] manager: (tap269fae56-40): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Feb 28 05:14:54 np0005634017 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.857 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dd5562-5d41-4cd1-9b45-3138e8abc2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.861 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[356e5f46-878d-49da-9a16-d20d60524d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 NetworkManager[49805]: <info>  [1772273694.8831] device (tap269fae56-40): carrier: link connected
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.885 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f01b52f7-636c-4b50-8cc4-be59f1134451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.902 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6973a86a-6c7f-4f18-b4de-2fdfb0ad6e51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520216, 'reachable_time': 21260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310075, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.916 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22db6c6a-de7b-494b-8ff3-32ff3efec31e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:282c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520216, 'tstamp': 520216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310079, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.929 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edb57b6a-d3ee-4275-a346-0d7e43b3de2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520216, 'reachable_time': 21260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310081, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.938 243456 INFO nova.virt.libvirt.driver [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting instance files /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.938 243456 INFO nova.virt.libvirt.driver [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deletion of /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del complete#033[00m
Feb 28 05:14:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.960 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a64f82-86d6-4262-bf5c-6c7c34833919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.984 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 32fe69ba-ea8d-411e-8917-de872b62b8b0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.984 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273694.9833775, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.985 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.993 243456 INFO nova.virt.libvirt.driver [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance running successfully.#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.994 243456 INFO nova.virt.libvirt.driver [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance soft rebooted successfully.#033[00m
Feb 28 05:14:54 np0005634017 nova_compute[243452]: 2026-02-28 10:14:54.994 243456 DEBUG nova.compute.manager [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.003 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebbf6c5-ffe3-457c-b7b3-e8b972805eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.004 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.004 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.005 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269fae56-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:55 np0005634017 kernel: tap269fae56-40: entered promiscuous mode
Feb 28 05:14:55 np0005634017 NetworkManager[49805]: <info>  [1772273695.0070] manager: (tap269fae56-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.006 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.008 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269fae56-40, col_values=(('external_ids', {'iface-id': '7bc082a7-4576-4494-b633-962a40b4d816'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:14:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:14:55Z|00703|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.010 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff608a-8afe-43d9-bb3d-6fd04a989283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.012 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.013 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'env', 'PROCESS_TAG=haproxy-269fae56-42c3-478e-88d5-36164c0a6ae4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/269fae56-42c3-478e-88d5-36164c0a6ae4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.024 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.029 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.045 243456 INFO nova.compute.manager [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.045 243456 DEBUG oslo.service.loopingcall [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.046 243456 DEBUG nova.compute.manager [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.046 243456 DEBUG nova.network.neutron [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.057 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.058 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273694.9852946, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.058 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Started (Lifecycle Event)#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.069 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.077 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:14:55 np0005634017 podman[310114]: 2026-02-28 10:14:55.354970445 +0000 UTC m=+0.056533991 container create 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:14:55 np0005634017 systemd[1]: Started libpod-conmon-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350.scope.
Feb 28 05:14:55 np0005634017 podman[310114]: 2026-02-28 10:14:55.323048157 +0000 UTC m=+0.024611783 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:14:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:14:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207a55a22fbbac4fb5440ec404e2752fc098d104a50be06532a7ca6329c7d281/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:14:55 np0005634017 podman[310114]: 2026-02-28 10:14:55.443245347 +0000 UTC m=+0.144808913 container init 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:14:55 np0005634017 podman[310114]: 2026-02-28 10:14:55.449730069 +0000 UTC m=+0.151293635 container start 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:14:55 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : New worker (310135) forked
Feb 28 05:14:55 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : Loading success.
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.520 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.523 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.524 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5b4e05-b666-4644-84f5-0254b3e3155c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.525 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.528 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:14:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77a53553-5cc9-4864-aa59-4840f86c0894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.823 243456 DEBUG nova.network.neutron [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.853 243456 INFO nova.compute.manager [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 0.81 seconds to deallocate network for instance.
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:55 np0005634017 nova_compute[243452]: 2026-02-28 10:14:55.907 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.007 243456 DEBUG oslo_concurrency.processutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 344 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Feb 28 05:14:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003900474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.592 243456 DEBUG oslo_concurrency.processutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.602 243456 DEBUG nova.compute.provider_tree [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.622 243456 DEBUG nova.scheduler.client.report [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.650 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.686 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.687 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.687 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.688 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.688 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.689 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.689 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.690 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.690 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.690 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.691 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.691 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.692 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.692 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.693 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.693 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.694 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.695 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.695 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.696 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.696 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.697 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.697 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.698 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.698 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.699 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.699 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.700 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.700 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.700 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.701 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.701 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.701 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.702 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.702 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.703 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.703 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.704 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.704 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.705 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.705 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.705 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.706 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.706 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.706 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.707 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.707 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.708 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.710 243456 INFO nova.scheduler.client.report [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Deleted allocations for instance ba33446e-fcd5-454c-bc8c-79a367002d57
Feb 28 05:14:56 np0005634017 nova_compute[243452]: 2026-02-28 10:14:56.785 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.481 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.482 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.500 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.562 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.562 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.574 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.574 243456 INFO nova.compute.claims [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.695 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:14:57.854 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.857 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.858 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.875 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:14:57 np0005634017 nova_compute[243452]: 2026-02-28 10:14:57.933 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 336 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 198 op/s
Feb 28 05:14:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746732956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.276 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.283 243456 DEBUG nova.compute.provider_tree [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.306 243456 DEBUG nova.scheduler.client.report [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.330 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.331 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.335 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.345 243456 INFO nova.compute.claims [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.412 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.413 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.443 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.464 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:14:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.554 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.588 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.590 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.590 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating image(s)
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.614 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.643 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.671 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.676 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.757 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.758 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.758 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.759 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.794 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.798 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:58 np0005634017 nova_compute[243452]: 2026-02-28 10:14:58.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:14:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:14:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480607201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.110 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.141 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.178 243456 DEBUG nova.compute.provider_tree [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.184 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.259 243456 DEBUG nova.policy [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14b2d28379164786ad68563acb83a50a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd70835696bf4e12a062516e9de5527d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.263 243456 DEBUG nova.scheduler.client.report [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.267 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.267 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.267 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 WARNING nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-deleted-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 WARNING nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.275 243456 DEBUG nova.objects.instance [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.290 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.291 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Ensure instance console log exists: /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.291 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.291 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.292 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.293 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.294 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.336 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.337 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.352 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.368 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.440 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.441 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.441 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Creating image(s)
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.460 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.480 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.502 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.508 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.532 243456 DEBUG nova.policy [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6b5724da2e648fd85fd8cb293525967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.564 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.565 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.565 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.566 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.586 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.590 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a28c23bd-34cb-4189-9cca-778178eb41b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.891 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a28c23bd-34cb-4189-9cca-778178eb41b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:14:59 np0005634017 nova_compute[243452]: 2026-02-28 10:14:59.982 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.091 243456 DEBUG nova.objects.instance [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid a28c23bd-34cb-4189-9cca-778178eb41b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.107 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.107 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Ensure instance console log exists: /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.108 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.108 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.108 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.179 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Successfully created port: d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 330 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:15:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:15:00 np0005634017 nova_compute[243452]: 2026-02-28 10:15:00.703 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Successfully created port: f6d838dc-126d-40e0-bd84-54c611b21b22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.393 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Successfully updated port: d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.409 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.410 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquired lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.410 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.493 243456 DEBUG nova.compute.manager [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-changed-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.494 243456 DEBUG nova.compute.manager [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Refreshing instance network info cache due to event network-changed-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.494 243456 DEBUG oslo_concurrency.lockutils [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.576 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.584 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Successfully updated port: f6d838dc-126d-40e0-bd84-54c611b21b22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.602 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.603 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquired lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:01 np0005634017 nova_compute[243452]: 2026-02-28 10:15:01.603 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:15:02 np0005634017 nova_compute[243452]: 2026-02-28 10:15:02.186 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:15:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 346 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 213 op/s
Feb 28 05:15:02 np0005634017 nova_compute[243452]: 2026-02-28 10:15:02.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.511 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updating instance_info_cache with network_info: [{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.558 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Releasing lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.558 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance network_info: |[{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.559 243456 DEBUG oslo_concurrency.lockutils [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.559 243456 DEBUG nova.network.neutron [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Refreshing network info cache for port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.562 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start _get_guest_xml network_info=[{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.567 243456 WARNING nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.575 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.576 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.585 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.586 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.587 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.587 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.590 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.592 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.643 243456 DEBUG nova.compute.manager [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-changed-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.644 243456 DEBUG nova.compute.manager [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Refreshing instance network info cache due to event network-changed-f6d838dc-126d-40e0-bd84-54c611b21b22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.644 243456 DEBUG oslo_concurrency.lockutils [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:03 np0005634017 nova_compute[243452]: 2026-02-28 10:15:03.789 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837647096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 391 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 226 op/s
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.196 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.220 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.229 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.283 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updating instance_info_cache with network_info: [{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.320 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Releasing lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.321 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance network_info: |[{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.321 243456 DEBUG oslo_concurrency.lockutils [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.322 243456 DEBUG nova.network.neutron [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Refreshing network info cache for port f6d838dc-126d-40e0-bd84-54c611b21b22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.325 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start _get_guest_xml network_info=[{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.331 243456 WARNING nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.336 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.336 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.341 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.341 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.342 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.342 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.342 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.348 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:04.729 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:04.730 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:15:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018996537' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.802 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.804 243456 DEBUG nova.virt.libvirt.vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:58Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.805 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.806 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.807 243456 DEBUG nova.objects.instance [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.827 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <uuid>ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</uuid>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <name>instance-0000004f</name>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:name>tempest-tempest.common.compute-instance-542187541</nova:name>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:15:03</nova:creationTime>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <nova:port uuid="d7f6883b-88ea-45f6-a85b-7fe7dd5cf814">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <entry name="serial">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <entry name="uuid">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:87:04:95"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <target dev="tapd7f6883b-88"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log" append="off"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:15:04 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:15:04 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:15:04 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:15:04 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.828 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Preparing to wait for external event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.828 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.828 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.829 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.829 243456 DEBUG nova.virt.libvirt.vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-S
erverActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:58Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.830 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.831 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.831 243456 DEBUG os_vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.832 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.833 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.835 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7f6883b-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.836 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7f6883b-88, col_values=(('external_ids', {'iface-id': 'd7f6883b-88ea-45f6-a85b-7fe7dd5cf814', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:04:95', 'vm-uuid': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:04 np0005634017 NetworkManager[49805]: <info>  [1772273704.8386] manager: (tapd7f6883b-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.849 243456 INFO os_vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')#033[00m
Feb 28 05:15:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879990967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.908 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.908 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.909 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:87:04:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.910 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Using config drive#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.933 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.939 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.960 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:04 np0005634017 nova_compute[243452]: 2026-02-28 10:15:04.963 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829889935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.586 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.589 243456 DEBUG nova.virt.libvirt.vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-228209281',display_name='tempest-ServerDiskConfigTestJSON-server-228209281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-228209281',id=80,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-s88x0ti1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:59Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=a28c23bd-34cb-4189-9cca-778178eb41b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.589 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.590 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.592 243456 DEBUG nova.objects.instance [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid a28c23bd-34cb-4189-9cca-778178eb41b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.609 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <uuid>a28c23bd-34cb-4189-9cca-778178eb41b1</uuid>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <name>instance-00000050</name>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-228209281</nova:name>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:15:04</nova:creationTime>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <nova:port uuid="f6d838dc-126d-40e0-bd84-54c611b21b22">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <entry name="serial">a28c23bd-34cb-4189-9cca-778178eb41b1</entry>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <entry name="uuid">a28c23bd-34cb-4189-9cca-778178eb41b1</entry>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a28c23bd-34cb-4189-9cca-778178eb41b1_disk">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:25:80:96"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <target dev="tapf6d838dc-12"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/console.log" append="off"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:15:05 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:15:05 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:15:05 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:15:05 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.610 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Preparing to wait for external event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.610 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.611 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.611 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.612 243456 DEBUG nova.virt.libvirt.vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-228209281',display_name='tempest-ServerDiskConfigTestJSON-server-228209281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-228209281',id=80,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-s88x0ti1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-
ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:59Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=a28c23bd-34cb-4189-9cca-778178eb41b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.612 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.613 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.613 243456 DEBUG os_vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.615 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.616 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.627 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6d838dc-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.628 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6d838dc-12, col_values=(('external_ids', {'iface-id': 'f6d838dc-126d-40e0-bd84-54c611b21b22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:80:96', 'vm-uuid': 'a28c23bd-34cb-4189-9cca-778178eb41b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:05 np0005634017 NetworkManager[49805]: <info>  [1772273705.6323] manager: (tapf6d838dc-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.641 243456 INFO os_vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12')#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.683 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating config drive at /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.690 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2cv0e20j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.760 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.762 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.763 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:25:80:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.765 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Using config drive#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.790 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.840 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2cv0e20j" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.869 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.874 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.904 243456 DEBUG nova.network.neutron [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updated VIF entry in instance network info cache for port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.904 243456 DEBUG nova.network.neutron [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updating instance_info_cache with network_info: [{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:05 np0005634017 nova_compute[243452]: 2026-02-28 10:15:05.924 243456 DEBUG oslo_concurrency.lockutils [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.040 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.040 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting local config drive /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config because it was imported into RBD.#033[00m
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.0881] manager: (tapd7f6883b-88): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Feb 28 05:15:06 np0005634017 kernel: tapd7f6883b-88: entered promiscuous mode
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00704|binding|INFO|Claiming lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for this chassis.
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00705|binding|INFO|d7f6883b-88ea-45f6-a85b-7fe7dd5cf814: Claiming fa:16:3e:87:04:95 10.100.0.9
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00706|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 ovn-installed in OVS
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00707|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 up in Southbound
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.109 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.116 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.119 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923#033[00m
Feb 28 05:15:06 np0005634017 systemd-udevd[310764]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.141 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04b675b9-4867-4237-991b-6d929c890730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 systemd-machined[209480]: New machine qemu-91-instance-0000004f.
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.1485] device (tapd7f6883b-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.1492] device (tapd7f6883b-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:15:06 np0005634017 systemd[1]: Started Virtual Machine qemu-91-instance-0000004f.
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.159 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.176 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60468ba3-6cb2-405a-a8cd-6b435e7fa9df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.181 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[26e2891e-48a6-4ec3-a8fe-b4f6c69c6a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.206 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c26bc0ec-ee01-4147-89aa-7dbef205064f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e31bdaa-cf04-4933-8828-b78918c2f33c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310777, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.238 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b97f28d8-b31c-4a73-86f5-991eadcc805f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310779, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310779, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.241 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.245 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Creating config drive at /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.250 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6xzm38yg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.251 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.292 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.295 243456 DEBUG nova.network.neutron [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updated VIF entry in instance network info cache for port f6d838dc-126d-40e0-bd84-54c611b21b22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.296 243456 DEBUG nova.network.neutron [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updating instance_info_cache with network_info: [{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.320 243456 DEBUG oslo_concurrency.lockutils [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.402 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6xzm38yg" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.440 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.445 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.494 243456 DEBUG nova.compute.manager [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.495 243456 DEBUG oslo_concurrency.lockutils [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.495 243456 DEBUG oslo_concurrency.lockutils [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.495 243456 DEBUG oslo_concurrency.lockutils [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.496 243456 DEBUG nova.compute.manager [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Processing event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.626 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.628 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deleting local config drive /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config because it was imported into RBD.#033[00m
Feb 28 05:15:06 np0005634017 kernel: tapf6d838dc-12: entered promiscuous mode
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.6755] manager: (tapf6d838dc-12): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Feb 28 05:15:06 np0005634017 systemd-udevd[310767]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00708|binding|INFO|Claiming lport f6d838dc-126d-40e0-bd84-54c611b21b22 for this chassis.
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00709|binding|INFO|f6d838dc-126d-40e0-bd84-54c611b21b22: Claiming fa:16:3e:25:80:96 10.100.0.5
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00710|binding|INFO|Setting lport f6d838dc-126d-40e0-bd84-54c611b21b22 ovn-installed in OVS
Feb 28 05:15:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:06Z|00711|binding|INFO|Setting lport f6d838dc-126d-40e0-bd84-54c611b21b22 up in Southbound
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.690 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:80:96 10.100.0.5'], port_security=['fa:16:3e:25:80:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a28c23bd-34cb-4189-9cca-778178eb41b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6d838dc-126d-40e0-bd84-54c611b21b22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.692 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6d838dc-126d-40e0-bd84-54c611b21b22 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.695 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008#033[00m
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.6977] device (tapf6d838dc-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.6986] device (tapf6d838dc-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:15:06 np0005634017 nova_compute[243452]: 2026-02-28 10:15:06.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[040f8b72-99b9-4966-a04a-2f5d54fed6df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.708 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.710 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b038a428-5dc9-4f16-b3b1-7880f3f09f89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.711 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7586227d-6cb9-47b0-a4a6-dbf70b89c478]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 systemd-machined[209480]: New machine qemu-92-instance-00000050.
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.727 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[881803f0-bcf1-4db0-b38e-ea3530318192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 systemd[1]: Started Virtual Machine qemu-92-instance-00000050.
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.744 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1841cf-f31a-4739-9a30-9f2ef77dfd19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.777 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[145379cc-141a-4b0f-816f-defe6e0e6188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.7846] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.783 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc1013a-df95-45ff-b6d2-9179058899ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.818 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[19252e7c-0950-49a0-b47f-c2c8dbfcef73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.822 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f775d1a0-c7a6-435b-8ba1-b956b8b1206d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 NetworkManager[49805]: <info>  [1772273706.8428] device (tap77a5b13a-e0): carrier: link connected
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.848 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ab581e-6acb-4772-9c60-4f7fc48895f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.867 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9eae520-2413-4ec7-862b-9219853dd3d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521412, 'reachable_time': 38514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310864, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8d114a-0e23-45b3-9fc6-9682518115b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521412, 'tstamp': 521412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310865, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.906 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65dae70a-62ce-4e9a-b81d-88856ee775d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521412, 'reachable_time': 38514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310866, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:06 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.943 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f02790de-4417-485a-964d-16e9c0018dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb050cad-7608-4897-a4ac-d53fdcc94727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.023 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.023 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.024 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:07 np0005634017 NetworkManager[49805]: <info>  [1772273707.0277] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Feb 28 05:15:07 np0005634017 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:07Z|00712|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.048 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a879ea5-8c52-40ec-9566-3deb96d00b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.051 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:15:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.054 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.124 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.125 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.1233804, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.125 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Started (Lifecycle Event)#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.135 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.140 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance spawned successfully.#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.141 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.147 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.151 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.160 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.161 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.161 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.162 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.163 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.163 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.169 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.170 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.129277, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.170 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.192 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.205 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.1351213, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.206 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.222 243456 INFO nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.222 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.231 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.234 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.259 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.298 243456 INFO nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Took 9.76 seconds to build instance.#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.316 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:07 np0005634017 podman[310940]: 2026-02-28 10:15:07.454577503 +0000 UTC m=+0.064641069 container create 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:15:07 np0005634017 systemd[1]: Started libpod-conmon-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047.scope.
Feb 28 05:15:07 np0005634017 podman[310940]: 2026-02-28 10:15:07.415080272 +0000 UTC m=+0.025143848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:15:07 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:15:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08db0d9b3cbf48242c2a9835ca420d7ec1169862bc5ffbff25f96cc7a4c202d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:15:07 np0005634017 podman[310940]: 2026-02-28 10:15:07.578158348 +0000 UTC m=+0.188222224 container init 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:15:07 np0005634017 podman[310954]: 2026-02-28 10:15:07.582042417 +0000 UTC m=+0.084037754 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:15:07 np0005634017 podman[310940]: 2026-02-28 10:15:07.588854558 +0000 UTC m=+0.198918134 container start 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:15:07 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : New worker (311003) forked
Feb 28 05:15:07 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : Loading success.
Feb 28 05:15:07 np0005634017 podman[310953]: 2026-02-28 10:15:07.617878204 +0000 UTC m=+0.116572278 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 05:15:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:07Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:90:03 10.100.0.11
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.865 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.8645816, a28c23bd-34cb-4189-9cca-778178eb41b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.866 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Started (Lifecycle Event)#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.895 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.899 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.865, a28c23bd-34cb-4189-9cca-778178eb41b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.900 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.923 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.927 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:07 np0005634017 nova_compute[243452]: 2026-02-28 10:15:07.953 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Feb 28 05:15:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:08 np0005634017 nova_compute[243452]: 2026-02-28 10:15:08.792 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.266 243456 DEBUG nova.compute.manager [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG oslo_concurrency.lockutils [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG oslo_concurrency.lockutils [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG oslo_concurrency.lockutils [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG nova.compute.manager [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.268 243456 WARNING nova.compute.manager [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.489 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.489 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.511 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.579 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273694.5728126, ba33446e-fcd5-454c-bc8c-79a367002d57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.579 243456 INFO nova.compute.manager [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.602 243456 DEBUG nova.compute.manager [None req-6aa6c024-43ae-4fb8-97a5-2d81873f54b7 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.604 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.604 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.612 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.613 243456 INFO nova.compute.claims [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:15:09 np0005634017 nova_compute[243452]: 2026-02-28 10:15:09.781 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.026 243456 DEBUG oslo_concurrency.lockutils [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.027 243456 DEBUG oslo_concurrency.lockutils [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.028 243456 DEBUG nova.compute.manager [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.034 243456 DEBUG nova.compute.manager [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.035 243456 DEBUG nova.objects.instance [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'flavor' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.072 243456 DEBUG nova.virt.libvirt.driver [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:15:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Feb 28 05:15:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:15:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139855722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.360 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.367 243456 DEBUG nova.compute.provider_tree [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.380 243456 DEBUG nova.scheduler.client.report [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.403 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.405 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.458 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.459 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.480 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.503 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.593 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.594 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.595 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating image(s)#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.621 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.654 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.692 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.702 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:10.732 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.807 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.808 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.834 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:10 np0005634017 nova_compute[243452]: 2026-02-28 10:15:10.840 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.187 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.281 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] resizing rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.331 243456 DEBUG nova.policy [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ef51521ffc947cbbce8323ec2b71753', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.398 243456 DEBUG nova.objects.instance [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'migration_context' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.418 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.419 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Ensure instance console log exists: /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.419 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.421 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.423 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.448 243456 DEBUG nova.compute.manager [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.448 243456 DEBUG oslo_concurrency.lockutils [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.449 243456 DEBUG oslo_concurrency.lockutils [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.449 243456 DEBUG oslo_concurrency.lockutils [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.450 243456 DEBUG nova.compute.manager [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Processing event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.451 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.455 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.460 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273711.4603863, a28c23bd-34cb-4189-9cca-778178eb41b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.461 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.468 243456 INFO nova.virt.libvirt.driver [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance spawned successfully.#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.469 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.490 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.498 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.502 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.503 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.503 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.504 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.505 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.505 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.538 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.566 243456 INFO nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 12.13 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.567 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.621 243456 INFO nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 13.70 seconds to build instance.#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.636 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:11 np0005634017 nova_compute[243452]: 2026-02-28 10:15:11.848 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Successfully created port: 52f49649-6181-4c24-95b7-fc7227858c70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:15:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 405 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.234 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Successfully updated port: 52f49649-6181-4c24-95b7-fc7227858c70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.265 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.266 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.266 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.419 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.511 243456 DEBUG nova.compute.manager [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-changed-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.512 243456 DEBUG nova.compute.manager [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing instance network info cache due to event network-changed-52f49649-6181-4c24-95b7-fc7227858c70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.512 243456 DEBUG oslo_concurrency.lockutils [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.573 243456 DEBUG nova.compute.manager [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.573 243456 DEBUG oslo_concurrency.lockutils [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 DEBUG oslo_concurrency.lockutils [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 DEBUG oslo_concurrency.lockutils [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 DEBUG nova.compute.manager [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] No waiting events found dispatching network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 WARNING nova.compute.manager [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received unexpected event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:15:13 np0005634017 nova_compute[243452]: 2026-02-28 10:15:13.794 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 411 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Feb 28 05:15:14 np0005634017 nova_compute[243452]: 2026-02-28 10:15:14.443 243456 INFO nova.compute.manager [None req-eef97643-ce1c-4ec0-8954-c239e2561d18 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Get console output#033[00m
Feb 28 05:15:14 np0005634017 nova_compute[243452]: 2026-02-28 10:15:14.452 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.422 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.453 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.455 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance network_info: |[{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.456 243456 DEBUG oslo_concurrency.lockutils [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.457 243456 DEBUG nova.network.neutron [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.459 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start _get_guest_xml network_info=[{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.464 243456 WARNING nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.470 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.470 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.477 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.478 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.478 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.479 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.479 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.481 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.481 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.481 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.482 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.482 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.485 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.877 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.879 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.879 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.880 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.881 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.885 243456 INFO nova.compute.manager [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Terminating instance#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.888 243456 DEBUG nova.compute.manager [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:15:15 np0005634017 kernel: tap6b5acb8c-5d (unregistering): left promiscuous mode
Feb 28 05:15:15 np0005634017 NetworkManager[49805]: <info>  [1772273715.9378] device (tap6b5acb8c-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.935 243456 DEBUG nova.compute.manager [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.941 243456 DEBUG nova.compute.manager [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing instance network info cache due to event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.942 243456 DEBUG oslo_concurrency.lockutils [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.942 243456 DEBUG oslo_concurrency.lockutils [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.943 243456 DEBUG nova.network.neutron [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:15Z|00713|binding|INFO|Releasing lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe from this chassis (sb_readonly=0)
Feb 28 05:15:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:15Z|00714|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe down in Southbound
Feb 28 05:15:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:15Z|00715|binding|INFO|Removing iface tap6b5acb8c-5d ovn-installed in OVS
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.954 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.961 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '6', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.962 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis#033[00m
Feb 28 05:15:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:15:15 np0005634017 nova_compute[243452]: 2026-02-28 10:15:15.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[495f758c-ce98-4f25-84e7-afd2ea5e68ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.966 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace which is not needed anymore#033[00m
Feb 28 05:15:16 np0005634017 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Feb 28 05:15:16 np0005634017 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004d.scope: Consumed 12.106s CPU time.
Feb 28 05:15:16 np0005634017 systemd-machined[209480]: Machine qemu-90-instance-0000004d terminated.
Feb 28 05:15:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973807842' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.080 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.115 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.124 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:16 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : haproxy version is 2.8.14-c23fe91
Feb 28 05:15:16 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : path to executable is /usr/sbin/haproxy
Feb 28 05:15:16 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [WARNING]  (310133) : Exiting Master process...
Feb 28 05:15:16 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [WARNING]  (310133) : Exiting Master process...
Feb 28 05:15:16 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [ALERT]    (310133) : Current worker (310135) exited with code 143 (Terminated)
Feb 28 05:15:16 np0005634017 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [WARNING]  (310133) : All workers exited. Exiting... (0)
Feb 28 05:15:16 np0005634017 systemd[1]: libpod-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350.scope: Deactivated successfully.
Feb 28 05:15:16 np0005634017 podman[311288]: 2026-02-28 10:15:16.1368248 +0000 UTC m=+0.052220459 container died 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:15:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350-userdata-shm.mount: Deactivated successfully.
Feb 28 05:15:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-207a55a22fbbac4fb5440ec404e2752fc098d104a50be06532a7ca6329c7d281-merged.mount: Deactivated successfully.
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.182 243456 INFO nova.virt.libvirt.driver [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance destroyed successfully.#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.183 243456 DEBUG nova.objects.instance [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid 32fe69ba-ea8d-411e-8917-de872b62b8b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:16 np0005634017 podman[311288]: 2026-02-28 10:15:16.186468116 +0000 UTC m=+0.101863775 container cleanup 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:15:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 453 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Feb 28 05:15:16 np0005634017 systemd[1]: libpod-conmon-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350.scope: Deactivated successfully.
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.199 243456 DEBUG nova.virt.libvirt.vif [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1765724638',display_name='tempest-TestNetworkAdvancedServerOps-server-1765724638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1765724638',id=77,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuYc1DeElc1rkkDaA1+Wg1yBEjd9NzIhoaxm5Bt8KGJiKrnwEv07p/i1kPPlomnS4Xw2edPyOwDq78Zz+5s4I0hLVriZjH75jhUt93INSMBcBhlqyJu7ug5cVivkPNiww==',key_name='tempest-TestNetworkAdvancedServerOps-1506787166',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-0cvc6m44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:55Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=32fe69ba-ea8d-411e-8917-de872b62b8b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.200 243456 DEBUG nova.network.os_vif_util [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.200 243456 DEBUG nova.network.os_vif_util [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.201 243456 DEBUG os_vif [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.204 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b5acb8c-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.216 243456 INFO os_vif [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d')#033[00m
Feb 28 05:15:16 np0005634017 podman[311347]: 2026-02-28 10:15:16.262409761 +0000 UTC m=+0.050971014 container remove 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8202075b-ebe7-4957-af13-7cfed3b7795d]: (4, ('Sat Feb 28 10:15:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350)\n35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350\nSat Feb 28 10:15:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350)\n35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.270 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b993289-d48e-48da-a2dd-223775f4d321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.271 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:16 np0005634017 kernel: tap269fae56-40: left promiscuous mode
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.286 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53b53594-6b2f-4234-8b05-80cf5a3a2437]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.295 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9d8c7c-f1e8-4cd2-92d8-3ad70e6ef022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[645dd627-d912-4831-8847-198168c1eed2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.314 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d6c7d2-a44d-4421-b5c5-de730e167f35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520210, 'reachable_time': 28185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311380, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 systemd[1]: run-netns-ovnmeta\x2d269fae56\x2d42c3\x2d478e\x2d88d5\x2d36164c0a6ae4.mount: Deactivated successfully.
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.318 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:15:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.318 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[63b881b0-36ac-492d-b22a-c04452d37c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.593 243456 DEBUG nova.compute.manager [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.594 243456 DEBUG oslo_concurrency.lockutils [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.594 243456 DEBUG oslo_concurrency.lockutils [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.595 243456 DEBUG oslo_concurrency.lockutils [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.595 243456 DEBUG nova.compute.manager [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.595 243456 DEBUG nova.compute.manager [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.664 243456 INFO nova.virt.libvirt.driver [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deleting instance files /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0_del#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.666 243456 INFO nova.virt.libvirt.driver [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deletion of /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0_del complete#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.707 243456 INFO nova.compute.manager [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.707 243456 DEBUG oslo.service.loopingcall [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.708 243456 DEBUG nova.compute.manager [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.708 243456 DEBUG nova.network.neutron [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:15:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/992169226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.749 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.751 243456 DEBUG nova.virt.libvirt.vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:10Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.752 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.753 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.754 243456 DEBUG nova.objects.instance [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.768 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <uuid>4db5bcd7-8b41-4850-8c88-89ad757c8558</uuid>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <name>instance-00000051</name>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersNegativeTestJSON-server-76744621</nova:name>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:15:15</nova:creationTime>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:user uuid="7ef51521ffc947cbbce8323ec2b71753">tempest-ServersNegativeTestJSON-621636341-project-member</nova:user>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:project uuid="c0c4bc44c37f4a4f83c83b6105be3190">tempest-ServersNegativeTestJSON-621636341</nova:project>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <nova:port uuid="52f49649-6181-4c24-95b7-fc7227858c70">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <entry name="serial">4db5bcd7-8b41-4850-8c88-89ad757c8558</entry>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <entry name="uuid">4db5bcd7-8b41-4850-8c88-89ad757c8558</entry>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:22:e7:39"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <target dev="tap52f49649-61"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/console.log" append="off"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:15:16 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:15:16 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:15:16 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:15:16 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.775 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Preparing to wait for external event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.776 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.776 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.777 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.778 243456 DEBUG nova.virt.libvirt.vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:10Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.778 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.779 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.780 243456 DEBUG os_vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.781 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.781 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.785 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f49649-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.785 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f49649-61, col_values=(('external_ids', {'iface-id': '52f49649-6181-4c24-95b7-fc7227858c70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:e7:39', 'vm-uuid': '4db5bcd7-8b41-4850-8c88-89ad757c8558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:15:16 np0005634017 NetworkManager[49805]: <info>  [1772273716.7878] manager: (tap52f49649-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.793 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.794 243456 INFO os_vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61')
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.841 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.842 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.843 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No VIF found with MAC fa:16:3e:22:e7:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.844 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Using config drive
Feb 28 05:15:16 np0005634017 nova_compute[243452]: 2026-02-28 10:15:16.871 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:15:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 425 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 265 op/s
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.267 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating config drive at /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.271 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzgkqovsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.309 243456 DEBUG nova.network.neutron [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updated VIF entry in instance network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.310 243456 DEBUG nova.network.neutron [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.342 243456 DEBUG oslo_concurrency.lockutils [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.418 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzgkqovsi" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.448 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.453 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.504 243456 DEBUG nova.network.neutron [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.521 243456 INFO nova.compute.manager [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 1.81 seconds to deallocate network for instance.#033[00m
Feb 28 05:15:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.568 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.569 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.583 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.585 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deleting local config drive /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config because it was imported into RBD.#033[00m
Feb 28 05:15:18 np0005634017 kernel: tap52f49649-61: entered promiscuous mode
Feb 28 05:15:18 np0005634017 NetworkManager[49805]: <info>  [1772273718.6428] manager: (tap52f49649-61): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Feb 28 05:15:18 np0005634017 systemd-udevd[311266]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:15:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:18Z|00716|binding|INFO|Claiming lport 52f49649-6181-4c24-95b7-fc7227858c70 for this chassis.
Feb 28 05:15:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:18Z|00717|binding|INFO|52f49649-6181-4c24-95b7-fc7227858c70: Claiming fa:16:3e:22:e7:39 10.100.0.9
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 NetworkManager[49805]: <info>  [1772273718.6558] device (tap52f49649-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:15:18 np0005634017 NetworkManager[49805]: <info>  [1772273718.6564] device (tap52f49649-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.657 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.658 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 bound to our chassis#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.660 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505#033[00m
Feb 28 05:15:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:18Z|00718|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 ovn-installed in OVS
Feb 28 05:15:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:18Z|00719|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 up in Southbound
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.668 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2abb287b-5aec-42b8-8c42-d07e7c64ec86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.671 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce4b855a-c1 in ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.672 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce4b855a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e95ea-f9cd-4e59-a07b-a5234422eef3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[371b6b89-d7a9-4dc0-ad3c-f4f118220084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.684 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e1fea5-b7b9-4569-970e-bcad45c1cb17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.688 243456 DEBUG oslo_concurrency.processutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:18 np0005634017 systemd-machined[209480]: New machine qemu-93-instance-00000051.
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.698 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18f0de95-b5b6-48e3-9fb6-4d8de04bbe28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 systemd[1]: Started Virtual Machine qemu-93-instance-00000051.
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.723 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72cd8157-7943-46f7-ad0d-3967eb2c8062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 NetworkManager[49805]: <info>  [1772273718.7333] manager: (tapce4b855a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.733 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0691ea80-2970-4b4b-8dcd-8a0cd8c3fde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.771 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7946e6d8-9abc-4421-b6f5-477560575e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.775 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5197006d-f9e9-411a-9c1c-4dfc5fca0fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.795 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 NetworkManager[49805]: <info>  [1772273718.7986] device (tapce4b855a-c0): carrier: link connected
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.802 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[998c1d5f-e277-4367-8f8b-64549c485355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.820 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1366d2-3ac6-4b2b-a26f-7403514d04a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311518, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.838 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cafad3a7-ccc0-489a-a89c-cc9a7bd35cac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:cf33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522608, 'tstamp': 522608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311528, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.855 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[240075ff-ee06-4afe-9ea7-8f32da1bdf5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311529, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dd13b1-6f04-4a5c-813e-314242be87af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14182573-a69e-41d9-8c38-0a4432a7adfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.958 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 NetworkManager[49805]: <info>  [1772273718.9608] manager: (tapce4b855a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Feb 28 05:15:18 np0005634017 kernel: tapce4b855a-c0: entered promiscuous mode
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.965 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.967 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:18Z|00720|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.968 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ceddfde-3fb8-4ef1-930e-7980436ee2cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.970 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:15:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.971 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'env', 'PROCESS_TAG=haproxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:15:18 np0005634017 nova_compute[243452]: 2026-02-28 10:15:18.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:19Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:04:95 10.100.0.9
Feb 28 05:15:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:19Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:04:95 10.100.0.9
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.100 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273719.0997443, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.102 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Started (Lifecycle Event)#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.128 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.133 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273719.0999036, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.154 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.158 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.182 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.217 243456 DEBUG nova.network.neutron [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updated VIF entry in instance network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.218 243456 DEBUG nova.network.neutron [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:15:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257134379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.240 243456 DEBUG oslo_concurrency.lockutils [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.255 243456 DEBUG oslo_concurrency.processutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.262 243456 DEBUG nova.compute.provider_tree [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.280 243456 DEBUG nova.scheduler.client.report [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.302 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.335 243456 INFO nova.scheduler.client.report [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance 32fe69ba-ea8d-411e-8917-de872b62b8b0#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.393 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:19 np0005634017 podman[311606]: 2026-02-28 10:15:19.313595154 +0000 UTC m=+0.021721592 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:15:19 np0005634017 podman[311606]: 2026-02-28 10:15:19.489958483 +0000 UTC m=+0.198084901 container create f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.531 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.531 243456 DEBUG oslo_concurrency.lockutils [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.532 243456 DEBUG oslo_concurrency.lockutils [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.532 243456 DEBUG oslo_concurrency.lockutils [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.532 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 WARNING nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-deleted-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 INFO nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Neutron deleted interface 6b5acb8c-5d09-42b0-9c1d-b51be18712fe; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 DEBUG nova.network.neutron [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 28 05:15:19 np0005634017 nova_compute[243452]: 2026-02-28 10:15:19.536 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Detach interface failed, port_id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe, reason: Instance 32fe69ba-ea8d-411e-8917-de872b62b8b0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:15:19 np0005634017 systemd[1]: Started libpod-conmon-f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e.scope.
Feb 28 05:15:19 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:15:19 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2721b40a4fc85af083307e4f102b5144e734f21a07b53ee098a9eebd71c6ce49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:15:19 np0005634017 podman[311606]: 2026-02-28 10:15:19.709692 +0000 UTC m=+0.417818448 container init f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:15:19 np0005634017 podman[311606]: 2026-02-28 10:15:19.719173337 +0000 UTC m=+0.427299755 container start f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:15:19 np0005634017 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [NOTICE]   (311626) : New worker (311628) forked
Feb 28 05:15:19 np0005634017 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [NOTICE]   (311626) : Loading success.
Feb 28 05:15:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 403 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.6 MiB/s wr, 277 op/s
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.200 243456 DEBUG nova.virt.libvirt.driver [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.340 243456 DEBUG nova.compute.manager [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.341 243456 DEBUG oslo_concurrency.lockutils [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.341 243456 DEBUG oslo_concurrency.lockutils [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.342 243456 DEBUG oslo_concurrency.lockutils [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.342 243456 DEBUG nova.compute.manager [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Processing event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.344 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.348 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273720.347408, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.350 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.353 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.358 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance spawned successfully.#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.359 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.390 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.396 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.396 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.397 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.398 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.399 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.399 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.437 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.494 243456 INFO nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Took 9.90 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.495 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.604 243456 INFO nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Took 11.03 seconds to build instance.#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.628 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.629 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.630 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.630 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.631 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.634 243456 INFO nova.compute.manager [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Terminating instance#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.636 243456 DEBUG nova.compute.manager [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.639 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:20 np0005634017 kernel: tapf6d838dc-12 (unregistering): left promiscuous mode
Feb 28 05:15:20 np0005634017 NetworkManager[49805]: <info>  [1772273720.6847] device (tapf6d838dc-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:15:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:20Z|00721|binding|INFO|Releasing lport f6d838dc-126d-40e0-bd84-54c611b21b22 from this chassis (sb_readonly=0)
Feb 28 05:15:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:20Z|00722|binding|INFO|Setting lport f6d838dc-126d-40e0-bd84-54c611b21b22 down in Southbound
Feb 28 05:15:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:20Z|00723|binding|INFO|Removing iface tapf6d838dc-12 ovn-installed in OVS
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.708 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:80:96 10.100.0.5'], port_security=['fa:16:3e:25:80:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a28c23bd-34cb-4189-9cca-778178eb41b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6d838dc-126d-40e0-bd84-54c611b21b22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.710 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6d838dc-126d-40e0-bd84-54c611b21b22 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis#033[00m
Feb 28 05:15:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.711 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:15:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[455af9a3-217f-4c1f-8035-35b936cc5b89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.713 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore#033[00m
Feb 28 05:15:20 np0005634017 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000050.scope: Deactivated successfully.
Feb 28 05:15:20 np0005634017 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000050.scope: Consumed 10.215s CPU time.
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:20 np0005634017 systemd-machined[209480]: Machine qemu-92-instance-00000050 terminated.
Feb 28 05:15:20 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : haproxy version is 2.8.14-c23fe91
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:20 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : path to executable is /usr/sbin/haproxy
Feb 28 05:15:20 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [WARNING]  (310996) : Exiting Master process...
Feb 28 05:15:20 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [WARNING]  (310996) : Exiting Master process...
Feb 28 05:15:20 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [ALERT]    (310996) : Current worker (311003) exited with code 143 (Terminated)
Feb 28 05:15:20 np0005634017 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [WARNING]  (310996) : All workers exited. Exiting... (0)
Feb 28 05:15:20 np0005634017 systemd[1]: libpod-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047.scope: Deactivated successfully.
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:20 np0005634017 podman[311658]: 2026-02-28 10:15:20.879264587 +0000 UTC m=+0.051252052 container died 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.884 243456 INFO nova.virt.libvirt.driver [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance destroyed successfully.#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.885 243456 DEBUG nova.objects.instance [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid a28c23bd-34cb-4189-9cca-778178eb41b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.899 243456 DEBUG nova.virt.libvirt.vif [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-228209281',display_name='tempest-ServerDiskConfigTestJSON-server-228209281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-228209281',id=80,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-s88x0ti1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:16Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=a28c23bd-34cb-4189-9cca-778178eb41b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.899 243456 DEBUG nova.network.os_vif_util [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.900 243456 DEBUG nova.network.os_vif_util [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.901 243456 DEBUG os_vif [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.902 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.902 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6d838dc-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047-userdata-shm.mount: Deactivated successfully.
Feb 28 05:15:20 np0005634017 nova_compute[243452]: 2026-02-28 10:15:20.916 243456 INFO os_vif [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12')#033[00m
Feb 28 05:15:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-08db0d9b3cbf48242c2a9835ca420d7ec1169862bc5ffbff25f96cc7a4c202d2-merged.mount: Deactivated successfully.
Feb 28 05:15:20 np0005634017 podman[311658]: 2026-02-28 10:15:20.935138868 +0000 UTC m=+0.107126303 container cleanup 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:15:20 np0005634017 systemd[1]: libpod-conmon-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047.scope: Deactivated successfully.
Feb 28 05:15:21 np0005634017 podman[311706]: 2026-02-28 10:15:21.014277363 +0000 UTC m=+0.061886531 container remove 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[937c142f-fadc-4817-a20c-f529b788f47c]: (4, ('Sat Feb 28 10:15:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047)\n804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047\nSat Feb 28 10:15:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047)\n804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c45ef643-9b25-4003-8765-534773be9888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.029 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:21 np0005634017 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68fd9856-2fc2-4df3-8857-140664c587bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f294453-4f18-47b1-86fb-7c618ff775cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.072 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[327e5778-b4ce-4181-bc8e-a1808a50523d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.091 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1978ffe7-60f0-4c9e-824e-db4bca082f8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521405, 'reachable_time': 36571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311724, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.096 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:15:21 np0005634017 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 05:15:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.096 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[92251bc2-060f-42cf-bd64-8073c66d3598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.254 243456 INFO nova.virt.libvirt.driver [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deleting instance files /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1_del#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.256 243456 INFO nova.virt.libvirt.driver [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deletion of /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1_del complete#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.355 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.355 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.356 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.356 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.356 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.394 243456 INFO nova.compute.manager [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.395 243456 DEBUG oslo.service.loopingcall [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.395 243456 DEBUG nova.compute.manager [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.395 243456 DEBUG nova.network.neutron [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:15:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:15:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429912051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:15:21 np0005634017 nova_compute[243452]: 2026-02-28 10:15:21.917 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.024 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.024 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.029 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.030 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.037 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.037 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:15:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 387 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 289 op/s
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.251 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.253 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3344MB free_disk=59.85517138708383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.362 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b883c1a1-cf01-434d-8258-24ca193a2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a28c23bd-34cb-4189-9cca-778178eb41b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 4db5bcd7-8b41-4850-8c88-89ad757c8558 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.364 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:15:22 np0005634017 kernel: tapd7f6883b-88 (unregistering): left promiscuous mode
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.429 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.429 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.430 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.430 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.430 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 WARNING nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-unplugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] No waiting events found dispatching network-vif-unplugged-f6d838dc-126d-40e0-bd84-54c611b21b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-unplugged-f6d838dc-126d-40e0-bd84-54c611b21b22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] No waiting events found dispatching network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.434 243456 WARNING nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received unexpected event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:15:22 np0005634017 NetworkManager[49805]: <info>  [1772273722.4463] device (tapd7f6883b-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:22Z|00724|binding|INFO|Releasing lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 from this chassis (sb_readonly=0)
Feb 28 05:15:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:22Z|00725|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 down in Southbound
Feb 28 05:15:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:22Z|00726|binding|INFO|Removing iface tapd7f6883b-88 ovn-installed in OVS
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.459 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.461 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 unbound from our chassis#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.462 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.471 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.485 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9da69ecf-188f-412d-9770-7bd72a38596c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:22 np0005634017 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 28 05:15:22 np0005634017 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004f.scope: Consumed 12.900s CPU time.
Feb 28 05:15:22 np0005634017 systemd-machined[209480]: Machine qemu-91-instance-0000004f terminated.
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.519 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8e46a11e-3908-462c-8ea9-0f484296f822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.524 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c558c585-503e-451f-bb45-339617666e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.552 243456 DEBUG nova.network.neutron [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.552 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[69f0612f-1237-4a1f-a5dc-072200568fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.570 243456 INFO nova.compute.manager [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 1.17 seconds to deallocate network for instance.#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.568 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74ea6c56-53ac-4cc3-8800-4c53dafdb1cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311761, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.593 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfe8fc7-4041-4748-9673-94654251a8fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311762, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311762, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.595 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.601 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.602 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.602 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.603 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:22 np0005634017 nova_compute[243452]: 2026-02-28 10:15:22.639 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:15:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349934823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.054 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.060 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.077 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.102 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.103 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.103 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.215 243456 INFO nova.virt.libvirt.driver [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.222 243456 DEBUG oslo_concurrency.processutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.268 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.271 243456 DEBUG nova.objects.instance [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'numa_topology' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.288 243456 DEBUG nova.compute.manager [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.319 243456 DEBUG nova.compute.manager [req-196243a5-6032-412f-b463-d43b7392cd61 req-a901539d-6de5-4be9-a6b3-0e77057f9431 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-deleted-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.345 243456 DEBUG oslo_concurrency.lockutils [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:15:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3234714658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.805 243456 DEBUG oslo_concurrency.processutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.810 243456 DEBUG nova.compute.provider_tree [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.823 243456 DEBUG nova.scheduler.client.report [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.844 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.877 243456 INFO nova.scheduler.client.report [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Deleted allocations for instance a28c23bd-34cb-4189-9cca-778178eb41b1#033[00m
Feb 28 05:15:23 np0005634017 nova_compute[243452]: 2026-02-28 10:15:23.934 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:24 np0005634017 nova_compute[243452]: 2026-02-28 10:15:24.099 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:24 np0005634017 nova_compute[243452]: 2026-02-28 10:15:24.133 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:24 np0005634017 nova_compute[243452]: 2026-02-28 10:15:24.134 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:15:24 np0005634017 nova_compute[243452]: 2026-02-28 10:15:24.155 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:15:24 np0005634017 nova_compute[243452]: 2026-02-28 10:15:24.155 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 378 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 318 op/s
Feb 28 05:15:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:24Z|00727|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 05:15:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:24Z|00728|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 05:15:24 np0005634017 nova_compute[243452]: 2026-02-28 10:15:24.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.494 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.494 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 WARNING nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.497 243456 WARNING nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.583 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.583 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.609 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.701 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.701 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.707 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.708 243456 INFO nova.compute.claims [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.861 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:25 np0005634017 nova_compute[243452]: 2026-02-28 10:15:25.966 243456 INFO nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Rebuilding instance#033[00m
Feb 28 05:15:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 358 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 313 op/s
Feb 28 05:15:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:15:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704125383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.386 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.391 243456 DEBUG nova.compute.provider_tree [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.436 243456 DEBUG nova.scheduler.client.report [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.459 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.460 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.512 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.513 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.521 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'trusted_certs' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.534 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.541 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.573 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.634 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_requests' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.646 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.665 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'resources' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.686 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.689 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.689 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Creating image(s)#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.721 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.757 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.785 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.788 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.815 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.821 243456 DEBUG nova.policy [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ef51521ffc947cbbce8323ec2b71753', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.841 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.845 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance already shutdown.#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.850 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.856 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.857 243456 DEBUG nova.virt.libvirt.vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:25Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.858 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.859 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.859 243456 DEBUG os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.862 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7f6883b-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.870 243456 INFO os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.891 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.893 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.893 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.894 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.914 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:26 np0005634017 nova_compute[243452]: 2026-02-28 10:15:26.917 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c2763dc4-f643-48bd-964a-d4ab75938d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.243 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c2763dc4-f643-48bd-964a-d4ab75938d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.316 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] resizing rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.420 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting instance files /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_del#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.422 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deletion of /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_del complete#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.433 243456 DEBUG nova.objects.instance [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'migration_context' on Instance uuid c2763dc4-f643-48bd-964a-d4ab75938d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.464 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.464 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Ensure instance console log exists: /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.465 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.465 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.465 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.693 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.694 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating image(s)#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.718 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.749 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.782 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.787 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.855 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.857 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.857 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.858 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.883 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:27 np0005634017 nova_compute[243452]: 2026-02-28 10:15:27.887 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 342 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 268 op/s
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.212 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.256 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Successfully created port: 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.310 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.415 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.415 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Ensure instance console log exists: /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.416 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.416 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.417 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.420 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start _get_guest_xml network_info=[{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.425 243456 WARNING nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.432 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.432 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.436 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.436 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.437 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.437 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.438 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.438 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.440 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.440 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.440 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.441 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.441 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'vcpu_model' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.458 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:28 np0005634017 nova_compute[243452]: 2026-02-28 10:15:28.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594062940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.010 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.036 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.041 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:15:29
Feb 28 05:15:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:15:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:15:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta']
Feb 28 05:15:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.449 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Successfully updated port: 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.467 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.467 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.468 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:15:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231944948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.560 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.562 243456 DEBUG nova.virt.libvirt.vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:27Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.562 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.563 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.566 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <uuid>ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</uuid>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <name>instance-0000004f</name>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:name>tempest-tempest.common.compute-instance-542187541</nova:name>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:15:28</nova:creationTime>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <nova:port uuid="d7f6883b-88ea-45f6-a85b-7fe7dd5cf814">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <entry name="serial">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <entry name="uuid">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:87:04:95"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <target dev="tapd7f6883b-88"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log" append="off"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:15:29 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:15:29 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:15:29 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:15:29 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.567 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Preparing to wait for external event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.567 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.567 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.568 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.568 243456 DEBUG nova.virt.libvirt.vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:27Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.569 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.570 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.570 243456 DEBUG os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.571 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.571 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.572 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.575 243456 DEBUG nova.compute.manager [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received event network-changed-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.575 243456 DEBUG nova.compute.manager [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Refreshing instance network info cache due to event network-changed-590fac49-f2a2-48ec-ad9f-7bd17a63fe37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.575 243456 DEBUG oslo_concurrency.lockutils [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.577 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7f6883b-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.577 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7f6883b-88, col_values=(('external_ids', {'iface-id': 'd7f6883b-88ea-45f6-a85b-7fe7dd5cf814', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:04:95', 'vm-uuid': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:29 np0005634017 NetworkManager[49805]: <info>  [1772273729.5798] manager: (tapd7f6883b-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.585 243456 INFO os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.643 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.644 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.644 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:87:04:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.644 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Using config drive#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.667 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.678 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.686 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'ec2_ids' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:29 np0005634017 nova_compute[243452]: 2026-02-28 10:15:29.723 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'keypairs' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 344 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.320 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating config drive at /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.328 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa6zhv94q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.470 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa6zhv94q" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:30Z|00729|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 05:15:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:30Z|00730|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.523 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.526 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:15:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.674 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.674 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting local config drive /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config because it was imported into RBD.#033[00m
Feb 28 05:15:30 np0005634017 kernel: tapd7f6883b-88: entered promiscuous mode
Feb 28 05:15:30 np0005634017 NetworkManager[49805]: <info>  [1772273730.7269] manager: (tapd7f6883b-88): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:30Z|00731|binding|INFO|Claiming lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for this chassis.
Feb 28 05:15:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:30Z|00732|binding|INFO|d7f6883b-88ea-45f6-a85b-7fe7dd5cf814: Claiming fa:16:3e:87:04:95 10.100.0.9
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.735 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:30Z|00733|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 ovn-installed in OVS
Feb 28 05:15:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:30Z|00734|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 up in Southbound
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.739 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923#033[00m
Feb 28 05:15:30 np0005634017 systemd-udevd[312328]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb604fea-685f-41c2-aec6-0c9f37f1adf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:30 np0005634017 NetworkManager[49805]: <info>  [1772273730.7613] device (tapd7f6883b-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:15:30 np0005634017 NetworkManager[49805]: <info>  [1772273730.7623] device (tapd7f6883b-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:15:30 np0005634017 systemd-machined[209480]: New machine qemu-94-instance-0000004f.
Feb 28 05:15:30 np0005634017 systemd[1]: Started Virtual Machine qemu-94-instance-0000004f.
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.790 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1e699e-23b6-46be-92e0-31680415c7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.793 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd418621-a36a-40d7-a786-6d1e60201bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.819 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cddb7f1a-be1a-4ea3-93d9-3f1d35d73169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.832 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ad77b4-2632-4b8b-8792-225b1ba5e8b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312341, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1faebe70-0890-44cc-866c-52d903d53171]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312342, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312342, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.850 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:30 np0005634017 nova_compute[243452]: 2026-02-28 10:15:30.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.854 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.127 243456 DEBUG nova.compute.manager [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.127 243456 DEBUG oslo_concurrency.lockutils [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.128 243456 DEBUG oslo_concurrency.lockutils [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.128 243456 DEBUG oslo_concurrency.lockutils [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.128 243456 DEBUG nova.compute.manager [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Processing event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.139 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updating instance_info_cache with network_info: [{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.164 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.164 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance network_info: |[{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.164 243456 DEBUG oslo_concurrency.lockutils [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.165 243456 DEBUG nova.network.neutron [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Refreshing network info cache for port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.168 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start _get_guest_xml network_info=[{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.171 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273716.1233923, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.171 243456 INFO nova.compute.manager [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.175 243456 WARNING nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.181 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.181 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.184 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.184 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.187 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.189 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.239 243456 DEBUG nova.compute.manager [None req-cb2f8817-b3a6-4fdd-bbb8-b0dc5d2400f6 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.491 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.492 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273731.4905484, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.493 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Started (Lifecycle Event)#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.498 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.504 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.509 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance spawned successfully.#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.510 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.520 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.526 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.543 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.544 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.545 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.546 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.547 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.548 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.554 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.555 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273731.4907484, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.556 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.590 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.596 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273731.5035603, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.596 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.619 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.621 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.628 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.666 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.689 243456 INFO nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] bringing vm to original state: 'stopped'#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.766 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.767 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.768 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.779 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 28 05:15:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257579247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.820 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:31 np0005634017 kernel: tapd7f6883b-88 (unregistering): left promiscuous mode
Feb 28 05:15:31 np0005634017 NetworkManager[49805]: <info>  [1772273731.8245] device (tapd7f6883b-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:15:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:31Z|00735|binding|INFO|Releasing lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 from this chassis (sb_readonly=0)
Feb 28 05:15:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:31Z|00736|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 down in Southbound
Feb 28 05:15:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:31Z|00737|binding|INFO|Removing iface tapd7f6883b-88 ovn-installed in OVS
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.840 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.841 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 unbound from our chassis#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.842 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89f220c5-5f44-44d9-9941-64f59ddbedac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:31 np0005634017 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.866 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:31 np0005634017 systemd-machined[209480]: Machine qemu-94-instance-0000004f terminated.
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.875 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.883 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d3785daf-8de8-4fd0-8a11-edadad16141b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.888 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d415187d-5181-478c-839f-e33b6aafd917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.918 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5f977b8f-53bf-44ad-b2f7-fa842a201d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.935 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b88f8c-05ef-4358-b1c7-c12cd5ef974d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312436, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.946 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[35620329-d81d-4a28-ae3c-080604e71af0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312437, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312437, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.948 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.954 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.954 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:31 np0005634017 nova_compute[243452]: 2026-02-28 10:15:31.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.023 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.024 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.079 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.120 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.120 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.121 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.188 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 353 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.4 MiB/s wr, 249 op/s
Feb 28 05:15:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:15:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336649039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.459 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.461 243456 DEBUG nova.virt.libvirt.vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-35499043',display_name='tempest-ServersNegativeTestJSON-server-35499043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-35499043',id=82,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-vudkfx5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTes
tJSON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:26Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=c2763dc4-f643-48bd-964a-d4ab75938d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.462 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.463 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.465 243456 DEBUG nova.objects.instance [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2763dc4-f643-48bd-964a-d4ab75938d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.495 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <uuid>c2763dc4-f643-48bd-964a-d4ab75938d0a</uuid>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <name>instance-00000052</name>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersNegativeTestJSON-server-35499043</nova:name>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:15:31</nova:creationTime>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:user uuid="7ef51521ffc947cbbce8323ec2b71753">tempest-ServersNegativeTestJSON-621636341-project-member</nova:user>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:project uuid="c0c4bc44c37f4a4f83c83b6105be3190">tempest-ServersNegativeTestJSON-621636341</nova:project>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <nova:port uuid="590fac49-f2a2-48ec-ad9f-7bd17a63fe37">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <entry name="serial">c2763dc4-f643-48bd-964a-d4ab75938d0a</entry>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <entry name="uuid">c2763dc4-f643-48bd-964a-d4ab75938d0a</entry>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c2763dc4-f643-48bd-964a-d4ab75938d0a_disk">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:34:9c:8f"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <target dev="tap590fac49-f2"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/console.log" append="off"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:15:32 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:15:32 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:15:32 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:15:32 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.497 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Preparing to wait for external event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.498 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.498 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.498 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.499 243456 DEBUG nova.virt.libvirt.vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-35499043',display_name='tempest-ServersNegativeTestJSON-server-35499043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-35499043',id=82,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-vudkfx5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersN
egativeTestJSON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:26Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=c2763dc4-f643-48bd-964a-d4ab75938d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.499 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.500 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.500 243456 DEBUG os_vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.501 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.501 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.507 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap590fac49-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.509 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap590fac49-f2, col_values=(('external_ids', {'iface-id': '590fac49-f2a2-48ec-ad9f-7bd17a63fe37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:9c:8f', 'vm-uuid': 'c2763dc4-f643-48bd-964a-d4ab75938d0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.511 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:32 np0005634017 NetworkManager[49805]: <info>  [1772273732.5126] manager: (tap590fac49-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.524 243456 INFO os_vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2')#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.595 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.596 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.596 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No VIF found with MAC fa:16:3e:34:9c:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.597 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Using config drive#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.632 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.980 243456 DEBUG nova.network.neutron [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updated VIF entry in instance network info cache for port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.981 243456 DEBUG nova.network.neutron [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updating instance_info_cache with network_info: [{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:15:32 np0005634017 nova_compute[243452]: 2026-02-28 10:15:32.996 243456 DEBUG oslo_concurrency.lockutils [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.284 243456 DEBUG nova.compute.manager [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.284 243456 DEBUG oslo_concurrency.lockutils [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.284 243456 DEBUG oslo_concurrency.lockutils [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.285 243456 DEBUG oslo_concurrency.lockutils [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.285 243456 DEBUG nova.compute.manager [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.285 243456 WARNING nova.compute.manager [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.415 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Creating config drive at /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.422 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprpft0hl7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.560 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprpft0hl7" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.597 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.602 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:15:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:33Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:e7:39 10.100.0.9
Feb 28 05:15:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:33Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:e7:39 10.100.0.9
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.759 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.760 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Deleting local config drive /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config because it was imported into RBD.#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:33 np0005634017 kernel: tap590fac49-f2: entered promiscuous mode
Feb 28 05:15:33 np0005634017 NetworkManager[49805]: <info>  [1772273733.8084] manager: (tap590fac49-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:33Z|00738|binding|INFO|Claiming lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 for this chassis.
Feb 28 05:15:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:33Z|00739|binding|INFO|590fac49-f2a2-48ec-ad9f-7bd17a63fe37: Claiming fa:16:3e:34:9c:8f 10.100.0.11
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.823 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:9c:8f 10.100.0.11'], port_security=['fa:16:3e:34:9c:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2763dc4-f643-48bd-964a-d4ab75938d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=590fac49-f2a2-48ec-ad9f-7bd17a63fe37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 bound to our chassis#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.825 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505#033[00m
Feb 28 05:15:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:33Z|00740|binding|INFO|Setting lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 ovn-installed in OVS
Feb 28 05:15:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:15:33Z|00741|binding|INFO|Setting lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 up in Southbound
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3821c6-622d-4377-bb19-937b3f39268f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:33 np0005634017 systemd-machined[209480]: New machine qemu-95-instance-00000052.
Feb 28 05:15:33 np0005634017 systemd[1]: Started Virtual Machine qemu-95-instance-00000052.
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.863 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c86d8ac3-6c8b-4ff8-8fa9-9a07abf7eff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.866 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[db7bb5b7-f42a-4bf2-8a70-72b183d1f4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:33 np0005634017 systemd-udevd[312550]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:15:33 np0005634017 NetworkManager[49805]: <info>  [1772273733.8829] device (tap590fac49-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:15:33 np0005634017 NetworkManager[49805]: <info>  [1772273733.8834] device (tap590fac49-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.897 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53f177f6-f4e6-405d-b540-33d526a14e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.914 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[488f4d62-bfed-435f-b494-6adb85661e1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312557, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.928 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c03b5f-6ccf-4299-8904-f66d197faab7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce4b855a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522620, 'tstamp': 522620}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312559, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce4b855a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522623, 'tstamp': 522623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312559, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.932 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:33 np0005634017 nova_compute[243452]: 2026-02-28 10:15:33.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.939 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.939 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.940 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.941 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:15:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 376 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.375 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273734.374168, c2763dc4-f643-48bd-964a-d4ab75938d0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.377 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] VM Started (Lifecycle Event)#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.398 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.405 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273734.3743386, c2763dc4-f643-48bd-964a-d4ab75938d0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.405 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.422 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.426 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:15:34 np0005634017 nova_compute[243452]: 2026-02-28 10:15:34.445 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.455 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.456 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.457 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.457 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.458 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.460 243456 INFO nova.compute.manager [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Terminating instance#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.462 243456 DEBUG nova.compute.manager [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.472 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.473 243456 DEBUG nova.objects.instance [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'resources' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.492 243456 DEBUG nova.virt.libvirt.vif [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:32Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.493 243456 DEBUG nova.network.os_vif_util [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.494 243456 DEBUG nova.network.os_vif_util [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.495 243456 DEBUG os_vif [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.498 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7f6883b-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.506 243456 INFO os_vif [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.540 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.541 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.542 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.543 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.544 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.544 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.545 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.545 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.546 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.546 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.546 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.547 243456 WARNING nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state deleting.#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.547 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.547 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.548 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.548 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.549 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Processing event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.549 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:15:35 np0005634017 nova_compute[243452]: 2026-02-28 10:15:35.554 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273735.554152, c2763dc4-f643-48bd-964a-d4ab75938d0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:18:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:18:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2949975947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:18:56 np0005634017 nova_compute[243452]: 2026-02-28 10:18:56.981 243456 DEBUG oslo_concurrency.processutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:18:56 np0005634017 nova_compute[243452]: 2026-02-28 10:18:56.991 243456 DEBUG nova.compute.provider_tree [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.009 243456 DEBUG nova.scheduler.client.report [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.042 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.076 243456 INFO nova.scheduler.client.report [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Deleted allocations for instance 33627cb1-9db9-4b71-81a5-071a52daaba2#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.147 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.189 243456 DEBUG nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.189 243456 DEBUG oslo_concurrency.lockutils [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.190 243456 DEBUG oslo_concurrency.lockutils [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.190 243456 DEBUG oslo_concurrency.lockutils [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.191 243456 DEBUG nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] No waiting events found dispatching network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.191 243456 WARNING nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received unexpected event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:18:57 np0005634017 nova_compute[243452]: 2026-02-28 10:18:57.192 243456 DEBUG nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-deleted-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:18:57 np0005634017 rsyslogd[1017]: imjournal: 10210 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 05:18:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:18:57.857 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:18:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:18:57.858 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:18:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:18:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:18:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 335 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Feb 28 05:18:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:18:58 np0005634017 nova_compute[243452]: 2026-02-28 10:18:58.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:18:59 np0005634017 nova_compute[243452]: 2026-02-28 10:18:59.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:18:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:18:59Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:05:21 10.100.0.14
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 314 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 255 op/s
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:19:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:19:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:00Z|00925|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 05:19:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:00Z|00926|binding|INFO|Releasing lport e6986f00-b070-4e36-95ae-3683483bf103 from this chassis (sb_readonly=0)
Feb 28 05:19:00 np0005634017 nova_compute[243452]: 2026-02-28 10:19:00.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 314 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 232 op/s
Feb 28 05:19:02 np0005634017 nova_compute[243452]: 2026-02-28 10:19:02.374 243456 INFO nova.compute.manager [None req-673e44f5-1768-438a-8ca1-302ac37ee9a3 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Get console output
Feb 28 05:19:02 np0005634017 nova_compute[243452]: 2026-02-28 10:19:02.382 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 05:19:02 np0005634017 nova_compute[243452]: 2026-02-28 10:19:02.900 243456 DEBUG nova.objects.instance [None req-4cf04ff2-8823-4afa-bf8c-381cd12ad8f4 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.091 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273943.0915222, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Paused (Lifecycle Event)
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.213 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.224 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.251 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 05:19:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:03 np0005634017 kernel: tap6912d1ef-96 (unregistering): left promiscuous mode
Feb 28 05:19:03 np0005634017 NetworkManager[49805]: <info>  [1772273943.7204] device (tap6912d1ef-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:19:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:03Z|00927|binding|INFO|Releasing lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 from this chassis (sb_readonly=0)
Feb 28 05:19:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:03Z|00928|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 down in Southbound
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:03Z|00929|binding|INFO|Removing iface tap6912d1ef-96 ovn-installed in OVS
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.746 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:19:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.748 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a unbound from our chassis
Feb 28 05:19:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.750 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99091813-133c-46b0-a8d3-eeb21884f48a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 05:19:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[da6a5874-1b13-4672-ba86-ddf1ba9ba5a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.751 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace which is not needed anymore
Feb 28 05:19:03 np0005634017 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Feb 28 05:19:03 np0005634017 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Consumed 12.105s CPU time.
Feb 28 05:19:03 np0005634017 systemd-machined[209480]: Machine qemu-113-instance-0000005d terminated.
Feb 28 05:19:03 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [NOTICE]   (323843) : haproxy version is 2.8.14-c23fe91
Feb 28 05:19:03 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [NOTICE]   (323843) : path to executable is /usr/sbin/haproxy
Feb 28 05:19:03 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [WARNING]  (323843) : Exiting Master process...
Feb 28 05:19:03 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [ALERT]    (323843) : Current worker (323845) exited with code 143 (Terminated)
Feb 28 05:19:03 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [WARNING]  (323843) : All workers exited. Exiting... (0)
Feb 28 05:19:03 np0005634017 systemd[1]: libpod-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597.scope: Deactivated successfully.
Feb 28 05:19:03 np0005634017 podman[324547]: 2026-02-28 10:19:03.878301895 +0000 UTC m=+0.045037415 container died 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:03 np0005634017 nova_compute[243452]: 2026-02-28 10:19:03.905 243456 DEBUG nova.compute.manager [None req-4cf04ff2-8823-4afa-bf8c-381cd12ad8f4 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:19:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597-userdata-shm.mount: Deactivated successfully.
Feb 28 05:19:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-999094b081a957cbe1d5a0b39ac314b0f3f277acb666b4d99b76c83d7d5cdc56-merged.mount: Deactivated successfully.
Feb 28 05:19:03 np0005634017 podman[324547]: 2026-02-28 10:19:03.929859275 +0000 UTC m=+0.096594825 container cleanup 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:03 np0005634017 systemd[1]: libpod-conmon-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597.scope: Deactivated successfully.
Feb 28 05:19:03 np0005634017 podman[324585]: 2026-02-28 10:19:03.996296418 +0000 UTC m=+0.047610128 container remove 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.001 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[708752ff-492f-4dfd-a7f3-6fa34df953a0]: (4, ('Sat Feb 28 10:19:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597)\n09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597\nSat Feb 28 10:19:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597)\n09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.004 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42fd5da8-adee-4cdc-97bc-daafc4690cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.005 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:04 np0005634017 kernel: tap99091813-10: left promiscuous mode
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.026 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3dbbf6-35e5-45b1-b3aa-d84ac2f4cfbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.040 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b281331-35a7-4ec4-92cc-1c687c8de965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9e4505-e18f-474a-9be7-16a8c0917de0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.056 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e5bc84-a0cf-46c2-bf0a-88ccf69ed542]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543026, 'reachable_time': 17803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324604, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.059 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 05:19:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.059 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[408be78f-f05a-436f-add2-36467841b6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:04 np0005634017 systemd[1]: run-netns-ovnmeta\x2d99091813\x2d133c\x2d46b0\x2da8d3\x2deeb21884f48a.mount: Deactivated successfully.
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.187 243456 DEBUG nova.compute.manager [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG oslo_concurrency.lockutils [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG oslo_concurrency.lockutils [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG oslo_concurrency.lockutils [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG nova.compute.manager [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.189 243456 WARNING nova.compute.manager [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state suspended and task_state None.
Feb 28 05:19:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 314 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Feb 28 05:19:04 np0005634017 nova_compute[243452]: 2026-02-28 10:19:04.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 314 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.435 243456 DEBUG nova.compute.manager [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.436 243456 DEBUG oslo_concurrency.lockutils [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.436 243456 DEBUG oslo_concurrency.lockutils [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.436 243456 DEBUG oslo_concurrency.lockutils [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.437 243456 DEBUG nova.compute.manager [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.437 243456 WARNING nova.compute.manager [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state suspended and task_state None.
Feb 28 05:19:06 np0005634017 nova_compute[243452]: 2026-02-28 10:19:06.561 243456 INFO nova.compute.manager [None req-a0c728fe-9098-4d74-a8ae-3809fc1b5edf 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Get console output
Feb 28 05:19:07 np0005634017 nova_compute[243452]: 2026-02-28 10:19:07.097 243456 INFO nova.compute.manager [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Resuming
Feb 28 05:19:07 np0005634017 nova_compute[243452]: 2026-02-28 10:19:07.098 243456 DEBUG nova.objects.instance [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:07 np0005634017 nova_compute[243452]: 2026-02-28 10:19:07.197 243456 DEBUG oslo_concurrency.lockutils [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:19:07 np0005634017 nova_compute[243452]: 2026-02-28 10:19:07.198 243456 DEBUG oslo_concurrency.lockutils [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:19:07 np0005634017 nova_compute[243452]: 2026-02-28 10:19:07.199 243456 DEBUG nova.network.neutron [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:19:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 851 KiB/s wr, 83 op/s
Feb 28 05:19:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:08 np0005634017 nova_compute[243452]: 2026-02-28 10:19:08.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:09 np0005634017 nova_compute[243452]: 2026-02-28 10:19:09.809 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273934.8085463, 33627cb1-9db9-4b71-81a5-071a52daaba2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:19:09 np0005634017 nova_compute[243452]: 2026-02-28 10:19:09.810 243456 INFO nova.compute.manager [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] VM Stopped (Lifecycle Event)
Feb 28 05:19:09 np0005634017 nova_compute[243452]: 2026-02-28 10:19:09.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:09 np0005634017 nova_compute[243452]: 2026-02-28 10:19:09.861 243456 DEBUG nova.compute.manager [None req-dc743a2d-d3b7-4f0d-b74b-712860c53c26 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:19:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 102 KiB/s wr, 57 op/s
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.483 243456 DEBUG nova.network.neutron [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.584 243456 DEBUG oslo_concurrency.lockutils [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.594 243456 DEBUG nova.virt.libvirt.vif [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132178903',display_name='tempest-TestNetworkAdvancedServerOps-server-1132178903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132178903',id=93,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAaE+RUFgVeYlRX30KMGxnwEz+7reAn227i7sOjhZIZoASG2YdNYnV+Hsj1MiXJ39Nt5WWX427Y5pNZnlCuOzU5m4otlC0FTaPuRwmOXLsqunpWkEbv7T/dSbfrRT0dp7Q==',key_name='tempest-TestNetworkAdvancedServerOps-156508609',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:18:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-g07k9xeh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:03Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e8c0bffa-2672-4f45-8646-3a41b8e780a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.595 243456 DEBUG nova.network.os_vif_util [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.598 243456 DEBUG nova.network.os_vif_util [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.598 243456 DEBUG os_vif [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.600 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.601 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.607 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6912d1ef-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.607 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6912d1ef-96, col_values=(('external_ids', {'iface-id': '6912d1ef-9679-45b5-ae80-a91f63ecce55', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a0:a7', 'vm-uuid': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.609 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.609 243456 INFO os_vif [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96')#033[00m
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.637 243456 DEBUG nova.objects.instance [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'numa_topology' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:10 np0005634017 NetworkManager[49805]: <info>  [1772273950.8360] manager: (tap6912d1ef-96): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Feb 28 05:19:10 np0005634017 kernel: tap6912d1ef-96: entered promiscuous mode
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:10Z|00930|binding|INFO|Claiming lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 for this chassis.
Feb 28 05:19:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:10Z|00931|binding|INFO|6912d1ef-9679-45b5-ae80-a91f63ecce55: Claiming fa:16:3e:b7:a0:a7 10.100.0.8
Feb 28 05:19:10 np0005634017 podman[324701]: 2026-02-28 10:19:10.854197356 +0000 UTC m=+0.089058370 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:19:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:10Z|00932|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 ovn-installed in OVS
Feb 28 05:19:10 np0005634017 nova_compute[243452]: 2026-02-28 10:19:10.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:10 np0005634017 systemd-udevd[324732]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.870 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:10Z|00933|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 up in Southbound
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.871 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a bound to our chassis#033[00m
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.872 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99091813-133c-46b0-a8d3-eeb21884f48a#033[00m
Feb 28 05:19:10 np0005634017 NetworkManager[49805]: <info>  [1772273950.8843] device (tap6912d1ef-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:19:10 np0005634017 NetworkManager[49805]: <info>  [1772273950.8849] device (tap6912d1ef-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5da22c9-b030-4720-8dbf-cfc51e1bd3fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.887 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99091813-11 in ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.888 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99091813-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.888 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5158b230-5d27-42da-a7d9-e7f41ed10a4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 systemd-machined[209480]: New machine qemu-116-instance-0000005d.
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b29ae0-76de-4239-8734-b9d8f018690c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.903 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[881870f9-64ac-4370-a800-ee7474fcfa99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 systemd[1]: Started Virtual Machine qemu-116-instance-0000005d.
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.929 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b455c3-4813-4289-a3a1-1355c69d19fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.960 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b4ea8d-9cb3-4f92-b9ca-b0c61eb88e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 NetworkManager[49805]: <info>  [1772273950.9712] manager: (tap99091813-10): new Veth device (/org/freedesktop/NetworkManager/Devices/406)
Feb 28 05:19:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.970 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f739bdd-fb39-4e4a-b8de-30c99a74748b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:10 np0005634017 podman[324701]: 2026-02-28 10:19:10.975182714 +0000 UTC m=+0.210043768 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.007 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c361f775-a99f-4f7f-a6a1-a67880b252f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.011 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f2ec4b-9613-49e2-8b1c-7e95e0ede06d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 NetworkManager[49805]: <info>  [1772273951.0368] device (tap99091813-10): carrier: link connected
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.043 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f601c71a-d3d6-44c5-85ff-5fb2141c8c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91aca993-e7d6-43bb-a0df-731826cc5aaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99091813-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:8b:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545832, 'reachable_time': 42666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324794, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.073 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5db8e84-23a7-4fa8-8e64-338531f86de9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:8b20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545832, 'tstamp': 545832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324797, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.091 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d653f63b-9dc9-49ab-bd41-99148ac1f87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99091813-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:8b:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545832, 'reachable_time': 42666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324803, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.127 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30c43f7f-e10b-448f-ad33-416f317d5b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.185 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b71a56d-8188-44a6-b318-64286843236b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.187 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99091813-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:11 np0005634017 NetworkManager[49805]: <info>  [1772273951.1921] manager: (tap99091813-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Feb 28 05:19:11 np0005634017 kernel: tap99091813-10: entered promiscuous mode
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.196 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99091813-10, col_values=(('external_ids', {'iface-id': 'e6986f00-b070-4e36-95ae-3683483bf103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:11Z|00934|binding|INFO|Releasing lport e6986f00-b070-4e36-95ae-3683483bf103 from this chassis (sb_readonly=0)
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.206 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f247434f-b3c0-4053-9537-97b7c322bfac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.208 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:19:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.209 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'env', 'PROCESS_TAG=haproxy-99091813-133c-46b0-a8d3-eeb21884f48a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99091813-133c-46b0-a8d3-eeb21884f48a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.403 243456 DEBUG nova.compute.manager [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG oslo_concurrency.lockutils [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG oslo_concurrency.lockutils [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG oslo_concurrency.lockutils [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG nova.compute.manager [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 WARNING nova.compute.manager [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state suspended and task_state resuming.#033[00m
Feb 28 05:19:11 np0005634017 podman[324928]: 2026-02-28 10:19:11.561011332 +0000 UTC m=+0.056566473 container create 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:19:11 np0005634017 systemd[1]: Started libpod-conmon-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f.scope.
Feb 28 05:19:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:11 np0005634017 podman[324928]: 2026-02-28 10:19:11.530361998 +0000 UTC m=+0.025917179 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:19:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/020af8fd3586118fc9aa7bfe4f2379c7e7600d6ebd5072313956ddeff5e61ed4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:11 np0005634017 podman[324928]: 2026-02-28 10:19:11.641878707 +0000 UTC m=+0.137433848 container init 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:19:11 np0005634017 podman[324928]: 2026-02-28 10:19:11.649174475 +0000 UTC m=+0.144729616 container start 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:19:11 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : New worker (325011) forked
Feb 28 05:19:11 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : Loading success.
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.770 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for e8c0bffa-2672-4f45-8646-3a41b8e780a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.770 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273951.7696404, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.771 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Started (Lifecycle Event)#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.796 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.802 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.803 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.804 243456 DEBUG nova.compute.manager [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.804 243456 DEBUG nova.objects.instance [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.807 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:19:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:19:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.844 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.845 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273951.7727218, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.845 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.847 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.851 243456 INFO nova.virt.libvirt.driver [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance running successfully.#033[00m
Feb 28 05:19:11 np0005634017 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.855 243456 DEBUG nova.virt.libvirt.guest [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.855 243456 DEBUG nova.compute.manager [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.899 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.903 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.949 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.981 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.982 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.988 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:19:11 np0005634017 nova_compute[243452]: 2026-02-28 10:19:11.988 243456 INFO nova.compute.claims [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.128 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 25 KiB/s wr, 34 op/s
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:19:12 np0005634017 podman[325165]: 2026-02-28 10:19:12.686521651 +0000 UTC m=+0.051703635 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:19:12 np0005634017 podman[325164]: 2026-02-28 10:19:12.711615486 +0000 UTC m=+0.079426345 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915906322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.735 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.741 243456 DEBUG nova.compute.provider_tree [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.760 243456 DEBUG nova.scheduler.client.report [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.781 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.782 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.824 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.824 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:12 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.848 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.869 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:19:12 np0005634017 podman[325247]: 2026-02-28 10:19:12.946877552 +0000 UTC m=+0.038841798 container create c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:12 np0005634017 systemd[1]: Started libpod-conmon-c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85.scope.
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.985 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.987 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:19:12 np0005634017 nova_compute[243452]: 2026-02-28 10:19:12.987 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating image(s)
Feb 28 05:19:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.013 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:13 np0005634017 podman[325247]: 2026-02-28 10:19:13.022190178 +0000 UTC m=+0.114154464 container init c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:19:13 np0005634017 podman[325247]: 2026-02-28 10:19:12.929090155 +0000 UTC m=+0.021054421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:19:13 np0005634017 podman[325247]: 2026-02-28 10:19:13.033853411 +0000 UTC m=+0.125817687 container start c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:19:13 np0005634017 podman[325247]: 2026-02-28 10:19:13.037839064 +0000 UTC m=+0.129803390 container attach c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:19:13 np0005634017 hungry_easley[325263]: 167 167
Feb 28 05:19:13 np0005634017 systemd[1]: libpod-c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85.scope: Deactivated successfully.
Feb 28 05:19:13 np0005634017 podman[325247]: 2026-02-28 10:19:13.044271348 +0000 UTC m=+0.136235614 container died c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.043 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-94f5a82e747f61196b2d14291f7e3015f76b11995ed97980aea8152825c1503e-merged.mount: Deactivated successfully.
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.071 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.078 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:19:13 np0005634017 podman[325247]: 2026-02-28 10:19:13.083581788 +0000 UTC m=+0.175546034 container remove c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:19:13 np0005634017 systemd[1]: libpod-conmon-c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85.scope: Deactivated successfully.
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.157 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.158 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.159 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.159 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.185 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.191 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:19:13 np0005634017 podman[325358]: 2026-02-28 10:19:13.235410776 +0000 UTC m=+0.046129906 container create 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:19:13 np0005634017 systemd[1]: Started libpod-conmon-0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489.scope.
Feb 28 05:19:13 np0005634017 podman[325358]: 2026-02-28 10:19:13.213899293 +0000 UTC m=+0.024618213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:19:13 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:13 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.355 243456 DEBUG nova.policy [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b9006c7543a244aa948b78020335223a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6952e00efd364e1491714983e2425e93', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:19:13 np0005634017 podman[325358]: 2026-02-28 10:19:13.404642739 +0000 UTC m=+0.215361699 container init 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:19:13 np0005634017 podman[325358]: 2026-02-28 10:19:13.411649349 +0000 UTC m=+0.222368279 container start 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:19:13 np0005634017 podman[325358]: 2026-02-28 10:19:13.430050603 +0000 UTC m=+0.240769533 container attach 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.510 243456 DEBUG nova.compute.manager [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.511 243456 DEBUG oslo_concurrency.lockutils [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.511 243456 DEBUG oslo_concurrency.lockutils [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.511 243456 DEBUG oslo_concurrency.lockutils [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.512 243456 DEBUG nova.compute.manager [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.512 243456 WARNING nova.compute.manager [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state active and task_state None.
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.592 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:19:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.689 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] resizing rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.848 243456 DEBUG nova.objects.instance [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.875 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.875 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Ensure instance console log exists: /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:13 np0005634017 serene_satoshi[325398]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:19:13 np0005634017 serene_satoshi[325398]: --> All data devices are unavailable
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.915 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Successfully created port: c058dd2c-3349-4364-8659-31bb8b2509bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:19:13 np0005634017 systemd[1]: libpod-0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489.scope: Deactivated successfully.
Feb 28 05:19:13 np0005634017 podman[325358]: 2026-02-28 10:19:13.921138371 +0000 UTC m=+0.731857311 container died 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.931 243456 INFO nova.compute.manager [None req-7219141d-d05f-47cc-a17c-bfc69603df46 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Get console output
Feb 28 05:19:13 np0005634017 nova_compute[243452]: 2026-02-28 10:19:13.938 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 05:19:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472-merged.mount: Deactivated successfully.
Feb 28 05:19:14 np0005634017 podman[325358]: 2026-02-28 10:19:14.032181796 +0000 UTC m=+0.842900716 container remove 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:19:14 np0005634017 systemd[1]: libpod-conmon-0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489.scope: Deactivated successfully.
Feb 28 05:19:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 26 KiB/s wr, 16 op/s
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.50492138 +0000 UTC m=+0.076352097 container create ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb 28 05:19:14 np0005634017 systemd[1]: Started libpod-conmon-ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0.scope.
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.451142917 +0000 UTC m=+0.022573654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:19:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.622941904 +0000 UTC m=+0.194372621 container init ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.630396197 +0000 UTC m=+0.201826914 container start ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.63366409 +0000 UTC m=+0.205094827 container attach ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:19:14 np0005634017 hardcore_gates[325580]: 167 167
Feb 28 05:19:14 np0005634017 systemd[1]: libpod-ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0.scope: Deactivated successfully.
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.636730237 +0000 UTC m=+0.208160954 container died ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:19:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d43070ebddbe33f5ce735f70ecc427bbe7d395cd4af10614efac6e4a14882420-merged.mount: Deactivated successfully.
Feb 28 05:19:14 np0005634017 podman[325564]: 2026-02-28 10:19:14.669790929 +0000 UTC m=+0.241221696 container remove ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:19:14 np0005634017 systemd[1]: libpod-conmon-ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0.scope: Deactivated successfully.
Feb 28 05:19:14 np0005634017 podman[325605]: 2026-02-28 10:19:14.815612716 +0000 UTC m=+0.040867686 container create 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:14 np0005634017 nova_compute[243452]: 2026-02-28 10:19:14.856 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:14 np0005634017 systemd[1]: Started libpod-conmon-06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a.scope.
Feb 28 05:19:14 np0005634017 podman[325605]: 2026-02-28 10:19:14.799154207 +0000 UTC m=+0.024409187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:19:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:14 np0005634017 podman[325605]: 2026-02-28 10:19:14.927874535 +0000 UTC m=+0.153129545 container init 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:19:14 np0005634017 podman[325605]: 2026-02-28 10:19:14.934447413 +0000 UTC m=+0.159702383 container start 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:19:14 np0005634017 podman[325605]: 2026-02-28 10:19:14.93891794 +0000 UTC m=+0.164172930 container attach 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.134 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.136 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.136 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.136 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.137 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.138 243456 INFO nova.compute.manager [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Terminating instance#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.139 243456 DEBUG nova.compute.manager [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:19:15 np0005634017 kernel: tap6912d1ef-96 (unregistering): left promiscuous mode
Feb 28 05:19:15 np0005634017 NetworkManager[49805]: <info>  [1772273955.1795] device (tap6912d1ef-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:19:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:15Z|00935|binding|INFO|Releasing lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 from this chassis (sb_readonly=0)
Feb 28 05:19:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:15Z|00936|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 down in Southbound
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:15Z|00937|binding|INFO|Removing iface tap6912d1ef-96 ovn-installed in OVS
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 strange_pascal[325621]: {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:    "0": [
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:        {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "devices": [
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "/dev/loop3"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            ],
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_name": "ceph_lv0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_size": "21470642176",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "name": "ceph_lv0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "tags": {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cluster_name": "ceph",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.crush_device_class": "",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.encrypted": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.objectstore": "bluestore",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osd_id": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.type": "block",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.vdo": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.with_tpm": "0"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            },
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "type": "block",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "vg_name": "ceph_vg0"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:        }
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:    ],
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:    "1": [
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:        {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "devices": [
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "/dev/loop4"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            ],
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_name": "ceph_lv1",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_size": "21470642176",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "name": "ceph_lv1",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "tags": {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cluster_name": "ceph",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.crush_device_class": "",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.encrypted": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.objectstore": "bluestore",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osd_id": "1",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.type": "block",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.vdo": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.with_tpm": "0"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            },
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "type": "block",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "vg_name": "ceph_vg1"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:        }
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:    ],
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:    "2": [
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:        {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "devices": [
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "/dev/loop5"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            ],
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_name": "ceph_lv2",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_size": "21470642176",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "name": "ceph_lv2",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "tags": {
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.cluster_name": "ceph",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.crush_device_class": "",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.encrypted": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.objectstore": "bluestore",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osd_id": "2",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.type": "block",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.vdo": "0",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:                "ceph.with_tpm": "0"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            },
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "type": "block",
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:            "vg_name": "ceph_vg2"
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:        }
Feb 28 05:19:15 np0005634017 strange_pascal[325621]:    ]
Feb 28 05:19:15 np0005634017 strange_pascal[325621]: }
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.204 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '6', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.206 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a unbound from our chassis#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.208 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99091813-133c-46b0-a8d3-eeb21884f48a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.211 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de1d243d-a798-4b1a-9b8e-45a0e59e1b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.212 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace which is not needed anymore#033[00m
Feb 28 05:19:15 np0005634017 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.224 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Successfully updated port: c058dd2c-3349-4364-8659-31bb8b2509bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:19:15 np0005634017 systemd-machined[209480]: Machine qemu-116-instance-0000005d terminated.
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.240 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.240 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.241 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:19:15 np0005634017 systemd[1]: libpod-06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a.scope: Deactivated successfully.
Feb 28 05:19:15 np0005634017 podman[325605]: 2026-02-28 10:19:15.245047416 +0000 UTC m=+0.470302396 container died 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:19:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919-merged.mount: Deactivated successfully.
Feb 28 05:19:15 np0005634017 podman[325605]: 2026-02-28 10:19:15.308092853 +0000 UTC m=+0.533347813 container remove 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:19:15 np0005634017 systemd[1]: libpod-conmon-06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a.scope: Deactivated successfully.
Feb 28 05:19:15 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : haproxy version is 2.8.14-c23fe91
Feb 28 05:19:15 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : path to executable is /usr/sbin/haproxy
Feb 28 05:19:15 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [WARNING]  (325004) : Exiting Master process...
Feb 28 05:19:15 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [WARNING]  (325004) : Exiting Master process...
Feb 28 05:19:15 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [ALERT]    (325004) : Current worker (325011) exited with code 143 (Terminated)
Feb 28 05:19:15 np0005634017 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [WARNING]  (325004) : All workers exited. Exiting... (0)
Feb 28 05:19:15 np0005634017 systemd[1]: libpod-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f.scope: Deactivated successfully.
Feb 28 05:19:15 np0005634017 podman[325666]: 2026-02-28 10:19:15.362956346 +0000 UTC m=+0.046470715 container died 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.374 243456 INFO nova.virt.libvirt.driver [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance destroyed successfully.#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.375 243456 DEBUG nova.objects.instance [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f-userdata-shm.mount: Deactivated successfully.
Feb 28 05:19:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-020af8fd3586118fc9aa7bfe4f2379c7e7600d6ebd5072313956ddeff5e61ed4-merged.mount: Deactivated successfully.
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.392 243456 DEBUG nova.virt.libvirt.vif [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132178903',display_name='tempest-TestNetworkAdvancedServerOps-server-1132178903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132178903',id=93,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAaE+RUFgVeYlRX30KMGxnwEz+7reAn227i7sOjhZIZoASG2YdNYnV+Hsj1MiXJ39Nt5WWX427Y5pNZnlCuOzU5m4otlC0FTaPuRwmOXLsqunpWkEbv7T/dSbfrRT0dp7Q==',key_name='tempest-TestNetworkAdvancedServerOps-156508609',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:18:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-g07k9xeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:11Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e8c0bffa-2672-4f45-8646-3a41b8e780a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.393 243456 DEBUG nova.network.os_vif_util [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.394 243456 DEBUG nova.network.os_vif_util [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.394 243456 DEBUG os_vif [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.399 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.400 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6912d1ef-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.405 243456 INFO os_vif [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96')#033[00m
Feb 28 05:19:15 np0005634017 podman[325666]: 2026-02-28 10:19:15.407909948 +0000 UTC m=+0.091424317 container cleanup 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:15 np0005634017 systemd[1]: libpod-conmon-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f.scope: Deactivated successfully.
Feb 28 05:19:15 np0005634017 podman[325734]: 2026-02-28 10:19:15.484381317 +0000 UTC m=+0.050024716 container remove 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.489 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be8db31d-e317-4d4b-a265-760ecb31aabc]: (4, ('Sat Feb 28 10:19:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f)\n2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f\nSat Feb 28 10:19:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f)\n2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.492 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76fed827-00b8-43b6-b570-169b92d29580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.495 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 kernel: tap99091813-10: left promiscuous mode
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.507 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e065786-3306-4f64-ae11-4c1210662957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.519 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bea1e8-35a5-4b0a-a44d-6dcf0cf02bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.521 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58522183-3d51-4a7a-a6d4-b5493e552620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1374249-0f0c-4601-8873-3c62c338dcca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545823, 'reachable_time': 41715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325783, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.545 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:19:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.545 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9079921d-ebd7-4f7e-8984-626cff19ca29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:15 np0005634017 systemd[1]: run-netns-ovnmeta\x2d99091813\x2d133c\x2d46b0\x2da8d3\x2deeb21884f48a.mount: Deactivated successfully.
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.675 243456 INFO nova.virt.libvirt.driver [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deleting instance files /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8_del#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.678 243456 INFO nova.virt.libvirt.driver [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deletion of /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8_del complete#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.739 243456 INFO nova.compute.manager [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.739 243456 DEBUG oslo.service.loopingcall [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.739 243456 DEBUG nova.compute.manager [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:19:15 np0005634017 nova_compute[243452]: 2026-02-28 10:19:15.740 243456 DEBUG nova.network.neutron [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.790666747 +0000 UTC m=+0.050173251 container create 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:19:15 np0005634017 systemd[1]: Started libpod-conmon-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope.
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.76725839 +0000 UTC m=+0.026764894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:19:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.89886514 +0000 UTC m=+0.158371624 container init 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.907091805 +0000 UTC m=+0.166598309 container start 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.911400138 +0000 UTC m=+0.170906622 container attach 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:19:15 np0005634017 confident_robinson[325812]: 167 167
Feb 28 05:19:15 np0005634017 systemd[1]: libpod-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope: Deactivated successfully.
Feb 28 05:19:15 np0005634017 conmon[325812]: conmon 6691f41e3c1e128c7b40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope/container/memory.events
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.916558735 +0000 UTC m=+0.176065239 container died 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:19:15 np0005634017 podman[325796]: 2026-02-28 10:19:15.962136704 +0000 UTC m=+0.221643208 container remove 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:19:15 np0005634017 systemd[1]: libpod-conmon-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope: Deactivated successfully.
Feb 28 05:19:16 np0005634017 podman[325838]: 2026-02-28 10:19:16.163279907 +0000 UTC m=+0.057122499 container create 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:19:16 np0005634017 systemd[1]: Started libpod-conmon-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope.
Feb 28 05:19:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:16 np0005634017 podman[325838]: 2026-02-28 10:19:16.142245817 +0000 UTC m=+0.036088409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:19:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:16 np0005634017 podman[325838]: 2026-02-28 10:19:16.272373946 +0000 UTC m=+0.166216618 container init 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:19:16 np0005634017 podman[325838]: 2026-02-28 10:19:16.278355367 +0000 UTC m=+0.172197969 container start 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:19:16 np0005634017 podman[325838]: 2026-02-28 10:19:16.282973018 +0000 UTC m=+0.176815630 container attach 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:19:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3ffa0670bcd910fc33d82608cba6b5395e6dfb25fb41af8bafd134b6043389a6-merged.mount: Deactivated successfully.
Feb 28 05:19:16 np0005634017 nova_compute[243452]: 2026-02-28 10:19:16.313 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:19:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 313 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Feb 28 05:19:16 np0005634017 nova_compute[243452]: 2026-02-28 10:19:16.371 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:16 np0005634017 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing instance network info cache due to event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:19:16 np0005634017 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:16 np0005634017 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:16 np0005634017 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:19:16 np0005634017 lvm[325930]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:19:16 np0005634017 lvm[325930]: VG ceph_vg0 finished
Feb 28 05:19:16 np0005634017 lvm[325932]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:19:16 np0005634017 lvm[325932]: VG ceph_vg1 finished
Feb 28 05:19:17 np0005634017 lvm[325933]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:19:17 np0005634017 lvm[325933]: VG ceph_vg2 finished
Feb 28 05:19:17 np0005634017 funny_swirles[325855]: {}
Feb 28 05:19:17 np0005634017 systemd[1]: libpod-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope: Deactivated successfully.
Feb 28 05:19:17 np0005634017 systemd[1]: libpod-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope: Consumed 1.250s CPU time.
Feb 28 05:19:17 np0005634017 podman[325838]: 2026-02-28 10:19:17.17170787 +0000 UTC m=+1.065550472 container died 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 05:19:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0-merged.mount: Deactivated successfully.
Feb 28 05:19:17 np0005634017 podman[325838]: 2026-02-28 10:19:17.22610421 +0000 UTC m=+1.119946802 container remove 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:19:17 np0005634017 systemd[1]: libpod-conmon-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope: Deactivated successfully.
Feb 28 05:19:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:19:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:19:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:17.310 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:17.312 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:19:17 np0005634017 nova_compute[243452]: 2026-02-28 10:19:17.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:17 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:17 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.011 243456 DEBUG nova.network.neutron [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.035 243456 INFO nova.compute.manager [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Took 2.30 seconds to deallocate network for instance.#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.084 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.085 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.117 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.140 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.141 243456 DEBUG nova.compute.provider_tree [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.166 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.201 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.221 243456 DEBUG nova.compute.manager [req-0cad93cb-b060-437b-8132-a1288770327f req-0469494b-d714-4be4-a1da-b9445a967275 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-deleted-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.282 243456 DEBUG oslo_concurrency.processutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 309 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.480 243456 DEBUG nova.compute.manager [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.482 243456 DEBUG oslo_concurrency.lockutils [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.483 243456 DEBUG oslo_concurrency.lockutils [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.483 243456 DEBUG oslo_concurrency.lockutils [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.483 243456 DEBUG nova.compute.manager [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.484 243456 WARNING nova.compute.manager [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.485 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updating instance_info_cache with network_info: [{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.503 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.504 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance network_info: |[{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.508 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start _get_guest_xml network_info=[{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.515 243456 WARNING nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.520 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.521 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.525 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.527 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.527 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.528 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.529 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.529 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.529 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.530 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.530 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.530 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.532 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.532 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.532 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.533 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.538 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452138698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.814 243456 DEBUG oslo_concurrency.processutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.820 243456 DEBUG nova.compute.provider_tree [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.839 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.872 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.895 243456 INFO nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance e8c0bffa-2672-4f45-8646-3a41b8e780a8#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:18 np0005634017 nova_compute[243452]: 2026-02-28 10:19:18.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.053 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671386092' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.166 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.195 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.201 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.291 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updated VIF entry in instance network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.292 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.309 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.310 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-changed-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.310 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Refreshing instance network info cache due to event network-changed-c058dd2c-3349-4364-8659-31bb8b2509bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.311 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.311 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.311 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Refreshing network info cache for port c058dd2c-3349-4364-8659-31bb8b2509bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:19:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007217156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.766 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.768 243456 DEBUG nova.virt.libvirt.vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-tempest.common.compute-instance-303074087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:12Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.769 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.771 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.773 243456 DEBUG nova.objects.instance [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.790 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <uuid>6a659835-f144-4e34-87ec-3b37ff81b0d1</uuid>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <name>instance-0000005f</name>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-tempest.common.compute-instance-303074087</nova:name>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:19:18</nova:creationTime>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <nova:port uuid="c058dd2c-3349-4364-8659-31bb8b2509bb">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <entry name="serial">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <entry name="uuid">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:5a:09:29"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <target dev="tapc058dd2c-33"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log" append="off"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:19:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:19:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:19:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:19:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.791 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Preparing to wait for external event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.791 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.792 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.792 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.794 243456 DEBUG nova.virt.libvirt.vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-tempest.common.compute-instance-303074087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:12Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.794 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.796 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.796 243456 DEBUG os_vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.798 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.799 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.805 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc058dd2c-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.805 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc058dd2c-33, col_values=(('external_ids', {'iface-id': 'c058dd2c-3349-4364-8659-31bb8b2509bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:09:29', 'vm-uuid': '6a659835-f144-4e34-87ec-3b37ff81b0d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:19 np0005634017 NetworkManager[49805]: <info>  [1772273959.8096] manager: (tapc058dd2c-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:19 np0005634017 nova_compute[243452]: 2026-02-28 10:19:19.820 243456 INFO os_vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.050 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.050 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.051 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No VIF found with MAC fa:16:3e:5a:09:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.051 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Using config drive#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.080 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 281 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.564 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating config drive at /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.568 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp651650hv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.711 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp651650hv" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.751 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:20 np0005634017 nova_compute[243452]: 2026-02-28 10:19:20.756 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.294 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updated VIF entry in instance network info cache for port c058dd2c-3349-4364-8659-31bb8b2509bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.295 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updating instance_info_cache with network_info: [{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.314 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.323 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.325 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.325 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.503 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.503 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting local config drive /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config because it was imported into RBD.#033[00m
Feb 28 05:19:21 np0005634017 kernel: tapc058dd2c-33: entered promiscuous mode
Feb 28 05:19:21 np0005634017 NetworkManager[49805]: <info>  [1772273961.5541] manager: (tapc058dd2c-33): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Feb 28 05:19:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:21Z|00938|binding|INFO|Claiming lport c058dd2c-3349-4364-8659-31bb8b2509bb for this chassis.
Feb 28 05:19:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:21Z|00939|binding|INFO|c058dd2c-3349-4364-8659-31bb8b2509bb: Claiming fa:16:3e:5a:09:29 10.100.0.13
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:21Z|00940|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb up in Southbound
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.565 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.568 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis#033[00m
Feb 28 05:19:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:21Z|00941|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb ovn-installed in OVS
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.571 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.571 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb#033[00m
Feb 28 05:19:21 np0005634017 systemd-machined[209480]: New machine qemu-117-instance-0000005f.
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.592 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd4bdad-560f-49e2-b884-580eae1e5d10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:21 np0005634017 systemd[1]: Started Virtual Machine qemu-117-instance-0000005f.
Feb 28 05:19:21 np0005634017 systemd-udevd[326132]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:19:21 np0005634017 NetworkManager[49805]: <info>  [1772273961.6142] device (tapc058dd2c-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:19:21 np0005634017 NetworkManager[49805]: <info>  [1772273961.6146] device (tapc058dd2c-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.634 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9fed39-16e9-4c4a-acb2-70eede857271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.637 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[32368a87-16cf-4d9b-8dc4-3d9dc2265fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.667 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf744da-6f10-48e0-a4f6-c7fa2ff3034e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.684 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2ba85d-0606-4682-9900-576ccfd77bec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326144, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.699 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2648d28-6b90-4d8a-80c8-bbe0c2ecdebb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326145, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326145, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.701 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.703 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.705 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.705 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.706 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.707 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:21Z|00942|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 05:19:21 np0005634017 nova_compute[243452]: 2026-02-28 10:19:21.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.034 243456 DEBUG nova.compute.manager [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG oslo_concurrency.lockutils [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG oslo_concurrency.lockutils [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG oslo_concurrency.lockutils [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG nova.compute.manager [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Processing event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.872 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.873 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273962.8731813, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.873 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Started (Lifecycle Event)#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.876 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.879 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance spawned successfully.#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.880 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.900 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.903 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.904 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.904 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.904 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.905 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.905 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.909 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.945 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.945 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273962.873347, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.945 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.970 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.975 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273962.8768837, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.975 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.978 243456 INFO nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 9.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.979 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:22 np0005634017 nova_compute[243452]: 2026-02-28 10:19:22.999 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.000 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.016 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.019 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.035 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.060 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.079 243456 INFO nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 11.12 seconds to build instance.#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.103 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.117 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.118 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.124 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.125 243456 INFO nova.compute.claims [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.288 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.330 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493329366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.891 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.897 243456 DEBUG nova.compute.provider_tree [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.928 243456 DEBUG nova.scheduler.client.report [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.961 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:23 np0005634017 nova_compute[243452]: 2026-02-28 10:19:23.962 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.015 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.015 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.036 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.057 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.158 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.161 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.161 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating image(s)#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.200 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.238 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.270 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.275 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.314 243456 DEBUG nova.compute.manager [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.315 243456 DEBUG oslo_concurrency.lockutils [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.316 243456 DEBUG oslo_concurrency.lockutils [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.316 243456 DEBUG oslo_concurrency.lockutils [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.317 243456 DEBUG nova.compute.manager [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.317 243456 WARNING nova.compute.manager [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state None.#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.357 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.358 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.358 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.359 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.384 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.388 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.419 243456 DEBUG nova.policy [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d03850a765742908401b28b9f983e96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3882eded03594958a2e5d10832a6c3a9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.646 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.727 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] resizing rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.817 243456 DEBUG nova.objects.instance [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.835 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.835 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Ensure instance console log exists: /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.836 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.836 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:24 np0005634017 nova_compute[243452]: 2026-02-28 10:19:24.837 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:25 np0005634017 nova_compute[243452]: 2026-02-28 10:19:25.290 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Successfully created port: 2f9562b0-54ce-4c24-9341-33a674532bf0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 310 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.336 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.337 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.378 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Successfully updated port: 2f9562b0-54ce-4c24-9341-33a674532bf0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.393 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.394 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.395 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.566 243456 DEBUG nova.compute.manager [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-changed-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.566 243456 DEBUG nova.compute.manager [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Refreshing instance network info cache due to event network-changed-2f9562b0-54ce-4c24-9341-33a674532bf0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.567 243456 DEBUG oslo_concurrency.lockutils [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.660 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:19:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504765288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.872 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.936 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.937 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.941 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:19:26 np0005634017 nova_compute[243452]: 2026-02-28 10:19:26.941 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.059 243456 INFO nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Rebuilding instance#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.155 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.156 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3373MB free_disk=59.91017228830606GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.156 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.157 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.237 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 690896df-6307-469c-9685-325a61a62b88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.237 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 6a659835-f144-4e34-87ec-3b37ff81b0d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.237 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ea5efc55-0a5e-435e-9805-9a9726c17eda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.238 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.238 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.328 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614799489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.852 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.856 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.876 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.890 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.912 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.914 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.914 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.977 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:27 np0005634017 nova_compute[243452]: 2026-02-28 10:19:27.991 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.020 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.041 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.062 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.067 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.311 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 327 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.346 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.346 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance network_info: |[{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.346 243456 DEBUG oslo_concurrency.lockutils [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.347 243456 DEBUG nova.network.neutron [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Refreshing network info cache for port 2f9562b0-54ce-4c24-9341-33a674532bf0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.349 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start _get_guest_xml network_info=[{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.488 243456 WARNING nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.493 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.493 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.497 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.497 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.498 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.498 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.498 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.503 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.915 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.916 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.916 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:19:28 np0005634017 nova_compute[243452]: 2026-02-28 10:19:28.935 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:19:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289625373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.077 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:19:29
Feb 28 05:19:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:19:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:19:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Feb 28 05:19:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.106 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.110 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.141 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.143 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.144 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.144 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843012645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.671 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.673 243456 DEBUG nova.virt.libvirt.vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:24Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.674 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.675 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.677 243456 DEBUG nova.objects.instance [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.697 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <uuid>ea5efc55-0a5e-435e-9805-9a9726c17eda</uuid>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <name>instance-00000060</name>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueTestJSON-server-1998552864</nova:name>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:19:28</nova:creationTime>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <nova:port uuid="2f9562b0-54ce-4c24-9341-33a674532bf0">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <entry name="serial">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <entry name="uuid">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:db:07:b7"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <target dev="tap2f9562b0-54"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/console.log" append="off"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:19:29 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:19:29 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:19:29 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:19:29 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.699 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Preparing to wait for external event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.700 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.700 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.701 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.702 243456 DEBUG nova.virt.libvirt.vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueT
estJSON-2101936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:24Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.703 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.705 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.706 243456 DEBUG os_vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.708 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.709 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.713 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f9562b0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f9562b0-54, col_values=(('external_ids', {'iface-id': '2f9562b0-54ce-4c24-9341-33a674532bf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:07:b7', 'vm-uuid': 'ea5efc55-0a5e-435e-9805-9a9726c17eda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:29 np0005634017 NetworkManager[49805]: <info>  [1772273969.7183] manager: (tap2f9562b0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.724 243456 INFO os_vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54')#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.782 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.783 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.783 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:db:07:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.784 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Using config drive#033[00m
Feb 28 05:19:29 np0005634017 nova_compute[243452]: 2026-02-28 10:19:29.815 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.210 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating config drive at /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.218 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpay5t475r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.362 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpay5t475r" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.396 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.402 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.433 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273955.3706656, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.434 243456 INFO nova.compute.manager [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.457 243456 DEBUG nova.compute.manager [None req-1a644ef3-9a80-4edd-a691-d9691dd7f74a - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.529 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.530 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deleting local config drive /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config because it was imported into RBD.#033[00m
Feb 28 05:19:30 np0005634017 kernel: tap2f9562b0-54: entered promiscuous mode
Feb 28 05:19:30 np0005634017 NetworkManager[49805]: <info>  [1772273970.5836] manager: (tap2f9562b0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:30Z|00943|binding|INFO|Claiming lport 2f9562b0-54ce-4c24-9341-33a674532bf0 for this chassis.
Feb 28 05:19:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:30Z|00944|binding|INFO|2f9562b0-54ce-4c24-9341-33a674532bf0: Claiming fa:16:3e:db:07:b7 10.100.0.9
Feb 28 05:19:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.592 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.593 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis#033[00m
Feb 28 05:19:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.594 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:19:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:30Z|00945|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 ovn-installed in OVS
Feb 28 05:19:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:30Z|00946|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 up in Southbound
Feb 28 05:19:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.595 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91b1988a-c2c9-4820-b660-1ac1b19f8852]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.595 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.601 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:30 np0005634017 systemd-udevd[326554]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:19:30 np0005634017 systemd-machined[209480]: New machine qemu-118-instance-00000060.
Feb 28 05:19:30 np0005634017 NetworkManager[49805]: <info>  [1772273970.6278] device (tap2f9562b0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:19:30 np0005634017 NetworkManager[49805]: <info>  [1772273970.6289] device (tap2f9562b0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:19:30 np0005634017 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:19:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.810 243456 DEBUG nova.compute.manager [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.812 243456 DEBUG oslo_concurrency.lockutils [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.812 243456 DEBUG oslo_concurrency.lockutils [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.813 243456 DEBUG oslo_concurrency.lockutils [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.813 243456 DEBUG nova.compute.manager [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Processing event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.819 243456 DEBUG nova.network.neutron [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updated VIF entry in instance network info cache for port 2f9562b0-54ce-4c24-9341-33a674532bf0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.820 243456 DEBUG nova.network.neutron [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:30 np0005634017 nova_compute[243452]: 2026-02-28 10:19:30.840 243456 DEBUG oslo_concurrency.lockutils [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.070 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.071 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273971.0691988, ea5efc55-0a5e-435e-9805-9a9726c17eda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.072 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Started (Lifecycle Event)#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.075 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.079 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance spawned successfully.#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.079 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.094 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.101 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.105 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.105 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.105 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.106 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.106 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.107 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.133 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273971.0695293, ea5efc55-0a5e-435e-9805-9a9726c17eda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.134 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.169 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.172 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273971.0746179, ea5efc55-0a5e-435e-9805-9a9726c17eda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.173 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.199 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.201 243456 INFO nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 7.04 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.202 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.204 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.244 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.265 243456 INFO nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 8.16 seconds to build instance.#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.280 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.379 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.402 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.403 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.404 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.405 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:19:31 np0005634017 nova_compute[243452]: 2026-02-28 10:19:31.800 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.184 243456 INFO nova.compute.manager [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Rescuing#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.185 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.185 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.185 243456 DEBUG nova.network.neutron [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:19:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.899 243456 DEBUG nova.compute.manager [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.900 243456 DEBUG oslo_concurrency.lockutils [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.900 243456 DEBUG oslo_concurrency.lockutils [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.901 243456 DEBUG oslo_concurrency.lockutils [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.901 243456 DEBUG nova.compute.manager [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:32 np0005634017 nova_compute[243452]: 2026-02-28 10:19:32.902 243456 WARNING nova.compute.manager [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state active and task_state rescuing.#033[00m
Feb 28 05:19:33 np0005634017 nova_compute[243452]: 2026-02-28 10:19:33.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:33 np0005634017 nova_compute[243452]: 2026-02-28 10:19:33.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:34 np0005634017 nova_compute[243452]: 2026-02-28 10:19:34.309 243456 DEBUG nova.network.neutron [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Feb 28 05:19:34 np0005634017 nova_compute[243452]: 2026-02-28 10:19:34.333 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:34 np0005634017 nova_compute[243452]: 2026-02-28 10:19:34.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:34 np0005634017 nova_compute[243452]: 2026-02-28 10:19:34.849 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:19:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:35Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:09:29 10.100.0.13
Feb 28 05:19:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:35Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:09:29 10.100.0.13
Feb 28 05:19:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 350 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 199 op/s
Feb 28 05:19:38 np0005634017 nova_compute[243452]: 2026-02-28 10:19:38.193 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:19:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 171 op/s
Feb 28 05:19:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:38 np0005634017 nova_compute[243452]: 2026-02-28 10:19:38.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:39 np0005634017 nova_compute[243452]: 2026-02-28 10:19:39.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:39 np0005634017 nova_compute[243452]: 2026-02-28 10:19:39.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Feb 28 05:19:40 np0005634017 kernel: tapc058dd2c-33 (unregistering): left promiscuous mode
Feb 28 05:19:40 np0005634017 NetworkManager[49805]: <info>  [1772273980.4779] device (tapc058dd2c-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:40Z|00947|binding|INFO|Releasing lport c058dd2c-3349-4364-8659-31bb8b2509bb from this chassis (sb_readonly=0)
Feb 28 05:19:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:40Z|00948|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb down in Southbound
Feb 28 05:19:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:40Z|00949|binding|INFO|Removing iface tapc058dd2c-33 ovn-installed in OVS
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.493 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.494 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.496 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.515 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1672a2de-9976-4617-ab69-6e978f310f03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:40 np0005634017 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Feb 28 05:19:40 np0005634017 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Consumed 13.156s CPU time.
Feb 28 05:19:40 np0005634017 systemd-machined[209480]: Machine qemu-117-instance-0000005f terminated.
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.560 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8aab9deb-58f3-47e8-9eb6-7d83516018cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.565 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6560c7-e342-447d-8e8b-51d74766c103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.598 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7a97ddc9-d17b-41e5-ad23-436b5964d314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.618 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26998d13-7d3e-43f6-97a0-0199e3e54cb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326617, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fff6fde7-0039-497d-911b-2261e32b0693]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326618, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326618, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.634 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.639 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.639 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.640 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.640 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 nova_compute[243452]: 2026-02-28 10:19:40.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018791045311676103 of space, bias 1.0, pg target 0.5637313593502831 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024930647110856744 of space, bias 1.0, pg target 0.7479194133257023 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.713944090525061e-07 of space, bias 4.0, pg target 0.0009256732908630073 quantized to 16 (current 16)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:19:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.207 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.213 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance destroyed successfully.#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.218 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance destroyed successfully.#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.220 243456 DEBUG nova.virt.libvirt.vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-pro
ject-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:26Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.221 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.222 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.223 243456 DEBUG os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.226 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc058dd2c-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.232 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.234 243456 INFO os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.507 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting instance files /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.509 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deletion of /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del complete#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.686 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.687 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating image(s)#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.735 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.776 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.807 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.814 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.907 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.907 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.908 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.908 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.931 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:41 np0005634017 nova_compute[243452]: 2026-02-28 10:19:41.935 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.142 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.212 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] resizing rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.301 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.303 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Ensure instance console log exists: /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.303 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.304 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.304 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.307 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start _get_guest_xml network_info=[{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.311 243456 WARNING nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.320 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.321 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.324 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.325 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.325 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.325 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.326 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.327 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.327 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.327 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.328 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.328 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.328 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.329 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.329 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.329 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.330 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.347 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469063387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.905 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.946 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:42 np0005634017 nova_compute[243452]: 2026-02-28 10:19:42.953 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:43 np0005634017 podman[326858]: 2026-02-28 10:19:43.112816953 +0000 UTC m=+0.054049951 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:19:43 np0005634017 podman[326857]: 2026-02-28 10:19:43.166199105 +0000 UTC m=+0.107360701 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
config_id=ovn_controller, container_name=ovn_controller)
Feb 28 05:19:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470665309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.506 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.510 243456 DEBUG nova.virt.libvirt.vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_nam
e='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:41Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.511 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.512 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.518 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <uuid>6a659835-f144-4e34-87ec-3b37ff81b0d1</uuid>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <name>instance-0000005f</name>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestJSON-server-1668637263</nova:name>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:19:42</nova:creationTime>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <nova:port uuid="c058dd2c-3349-4364-8659-31bb8b2509bb">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <entry name="serial">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <entry name="uuid">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:5a:09:29"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <target dev="tapc058dd2c-33"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log" append="off"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:19:43 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:19:43 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:19:43 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:19:43 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.519 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Preparing to wait for external event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.520 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.520 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.520 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.521 243456 DEBUG nova.virt.libvirt.vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_nam
e='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:41Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.521 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.522 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.522 243456 DEBUG os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.523 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.524 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.526 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc058dd2c-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.526 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc058dd2c-33, col_values=(('external_ids', {'iface-id': 'c058dd2c-3349-4364-8659-31bb8b2509bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:09:29', 'vm-uuid': '6a659835-f144-4e34-87ec-3b37ff81b0d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:43 np0005634017 NetworkManager[49805]: <info>  [1772273983.5294] manager: (tapc058dd2c-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.535 243456 INFO os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.598 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.599 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.599 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No VIF found with MAC fa:16:3e:5a:09:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.600 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Using config drive#033[00m
Feb 28 05:19:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.634 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.665 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.705 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'keypairs' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:43 np0005634017 nova_compute[243452]: 2026-02-28 10:19:43.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.082 243456 DEBUG nova.compute.manager [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.083 243456 DEBUG oslo_concurrency.lockutils [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.084 243456 DEBUG oslo_concurrency.lockutils [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.084 243456 DEBUG oslo_concurrency.lockutils [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.085 243456 DEBUG nova.compute.manager [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No event matching network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb in dict_keys([('network-vif-plugged', 'c058dd2c-3349-4364-8659-31bb8b2509bb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.085 243456 WARNING nova.compute.manager [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state rebuild_spawning.#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.190 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating config drive at /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.197 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp05kbo8jn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.273 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.274 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.293 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:19:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 362 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 173 op/s
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.339 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp05kbo8jn" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.376 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.380 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.464 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.464 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.473 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.473 243456 INFO nova.compute.claims [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.548 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.549 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting local config drive /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config because it was imported into RBD.#033[00m
Feb 28 05:19:44 np0005634017 NetworkManager[49805]: <info>  [1772273984.6063] manager: (tapc058dd2c-33): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Feb 28 05:19:44 np0005634017 kernel: tapc058dd2c-33: entered promiscuous mode
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:44Z|00950|binding|INFO|Claiming lport c058dd2c-3349-4364-8659-31bb8b2509bb for this chassis.
Feb 28 05:19:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:44Z|00951|binding|INFO|c058dd2c-3349-4364-8659-31bb8b2509bb: Claiming fa:16:3e:5a:09:29 10.100.0.13
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.620 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:44Z|00952|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb ovn-installed in OVS
Feb 28 05:19:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:44Z|00953|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb up in Southbound
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.622 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.624 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.643 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b080cb8a-0e4a-4740-9172-b81b00c7f862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:44 np0005634017 systemd-udevd[326995]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:19:44 np0005634017 systemd-machined[209480]: New machine qemu-119-instance-0000005f.
Feb 28 05:19:44 np0005634017 NetworkManager[49805]: <info>  [1772273984.6575] device (tapc058dd2c-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:19:44 np0005634017 NetworkManager[49805]: <info>  [1772273984.6588] device (tapc058dd2c-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:19:44 np0005634017 systemd[1]: Started Virtual Machine qemu-119-instance-0000005f.
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.666 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ed16d963-be18-46a8-a77f-c288066dafbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.670 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[99eb3c29-fe7c-4fa0-8005-82d170685143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.670 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.696 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[73fb9ee8-ba22-4cef-befc-8fdce6173e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.711 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49e0d96d-3ff7-46b4-88f4-60f099557746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327006, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[127f7c9a-12e4-41de-954e-e6281e7680de]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327009, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327009, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.727 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:44 np0005634017 nova_compute[243452]: 2026-02-28 10:19:44.901 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:19:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864028965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.236 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.243 243456 DEBUG nova.compute.provider_tree [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.265 243456 DEBUG nova.scheduler.client.report [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.291 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.292 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.327 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 6a659835-f144-4e34-87ec-3b37ff81b0d1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.328 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273985.3272188, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.328 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Started (Lifecycle Event)#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.335 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.336 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.343 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.347 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273985.3283725, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.347 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.351 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.368 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.372 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.375 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.398 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.459 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.460 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.461 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Creating image(s)#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.482 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:19:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343616085' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:19:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:19:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343616085' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.514 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.540 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.544 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.596 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.597 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.598 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.598 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.621 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.624 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e4349bd8-727a-4533-9edd-b2d54353a617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.845 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e4349bd8-727a-4533-9edd-b2d54353a617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:45 np0005634017 nova_compute[243452]: 2026-02-28 10:19:45.922 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.027 243456 DEBUG nova.objects.instance [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid e4349bd8-727a-4533-9edd-b2d54353a617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.085 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.085 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Ensure instance console log exists: /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.086 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.086 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.086 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.217 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Processing event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 WARNING nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state rebuild_spawning.#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.221 243456 WARNING nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state rebuild_spawning.#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.222 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.225 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273986.2254226, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.227 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.231 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance spawned successfully.#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.231 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.236 243456 DEBUG nova.policy [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:19:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 357 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.349 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.355 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.355 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.356 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.356 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.357 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.357 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.362 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.482 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.498 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.798 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.799 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:46 np0005634017 nova_compute[243452]: 2026-02-28 10:19:46.800 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 05:19:47 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.085 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:47 np0005634017 kernel: tap2f9562b0-54 (unregistering): left promiscuous mode
Feb 28 05:19:47 np0005634017 NetworkManager[49805]: <info>  [1772273987.1574] device (tap2f9562b0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:19:47 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:19:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:47Z|00954|binding|INFO|Releasing lport 2f9562b0-54ce-4c24-9341-33a674532bf0 from this chassis (sb_readonly=0)
Feb 28 05:19:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:47Z|00955|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 down in Southbound
Feb 28 05:19:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:47Z|00956|binding|INFO|Removing iface tap2f9562b0-54 ovn-installed in OVS
Feb 28 05:19:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.189 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:19:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.192 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 05:19:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.194 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 05:19:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91c6ee70-dc44-4bc3-970a-46da387d6fa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:19:47 np0005634017 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Feb 28 05:19:47 np0005634017 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 12.230s CPU time.
Feb 28 05:19:47 np0005634017 systemd-machined[209480]: Machine qemu-118-instance-00000060 terminated.
Feb 28 05:19:47 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.498 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Successfully created port: 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:19:47 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.971 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance shutdown successfully after 13 seconds.
Feb 28 05:19:47 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.977 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance destroyed successfully.
Feb 28 05:19:47 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.978 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'numa_topology' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:47.999 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Attempting rescue
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.000 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.006 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.006 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating image(s)
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.031 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.037 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.076 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.104 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.109 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.198 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.199 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.200 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.200 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.232 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.239 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:19:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 369 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.8 MiB/s wr, 200 op/s
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.382 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.383 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.383 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 WARNING nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state active and task_state rescuing.
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 WARNING nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state active and task_state rescuing.
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.394 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Successfully updated port: 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.412 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.413 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.413 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.474 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.475 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.487 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.488 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start _get_guest_xml network_info=[{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:db:07:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.488 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.508 243456 WARNING nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.513 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.514 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.517 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.517 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.520 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.520 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.538 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.598 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:19:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.879 243456 DEBUG nova.compute.manager [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.879 243456 DEBUG nova.compute.manager [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing instance network info cache due to event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.880 243456 DEBUG oslo_concurrency.lockutils [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:19:48 np0005634017 nova_compute[243452]: 2026-02-28 10:19:48.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045192296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.186 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.187 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.525 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.543 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.544 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance network_info: |[{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.546 243456 DEBUG oslo_concurrency.lockutils [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.546 243456 DEBUG nova.network.neutron [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.552 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start _get_guest_xml network_info=[{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.560 243456 WARNING nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.570 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.571 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.581 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.582 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.583 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.583 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.584 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.585 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.585 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.586 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.586 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.586 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.587 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.588 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.588 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.589 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.594 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249347176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.749 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:49 np0005634017 nova_compute[243452]: 2026-02-28 10:19:49.751 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1889414919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.182 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.218 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.225 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615206555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.307 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.310 243456 DEBUG nova.virt.libvirt.vif [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:31Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:db:07:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.311 243456 DEBUG nova.network.os_vif_util [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:db:07:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.313 243456 DEBUG nova.network.os_vif_util [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.315 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 408 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.5 MiB/s wr, 205 op/s
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.341 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <uuid>ea5efc55-0a5e-435e-9805-9a9726c17eda</uuid>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <name>instance-00000060</name>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueTestJSON-server-1998552864</nova:name>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:19:48</nova:creationTime>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:port uuid="2f9562b0-54ce-4c24-9341-33a674532bf0">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="serial">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="uuid">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="vdb" bus="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:db:07:b7"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="tap2f9562b0-54"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/console.log" append="off"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:19:50 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:19:50 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.353 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.354 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.354 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.355 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.356 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.358 243456 INFO nova.compute.manager [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Terminating instance#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.360 243456 DEBUG nova.compute.manager [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.373 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance destroyed successfully.#033[00m
Feb 28 05:19:50 np0005634017 kernel: tapc058dd2c-33 (unregistering): left promiscuous mode
Feb 28 05:19:50 np0005634017 NetworkManager[49805]: <info>  [1772273990.4146] device (tapc058dd2c-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:19:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:50Z|00957|binding|INFO|Releasing lport c058dd2c-3349-4364-8659-31bb8b2509bb from this chassis (sb_readonly=0)
Feb 28 05:19:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:50Z|00958|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb down in Southbound
Feb 28 05:19:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:50Z|00959|binding|INFO|Removing iface tapc058dd2c-33 ovn-installed in OVS
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.430 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.433 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.434 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.436 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.446 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.447 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.447 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.448 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:db:07:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.448 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Using config drive#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.449 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca69a150-e91e-4851-afd6-8485f246ef30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:50 np0005634017 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Feb 28 05:19:50 np0005634017 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005f.scope: Consumed 4.940s CPU time.
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.465 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c2a57e-10f5-4f53-8da0-29a6177633e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:50 np0005634017 systemd-machined[209480]: Machine qemu-119-instance-0000005f terminated.
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.469 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[57e079bb-5b2d-42e1-9789-e0d404526d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.477 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.485 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d2943871-fe5e-498a-a087-e11bedc6049c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.497 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8956db9c-0d98-4735-9f42-fa46478c9f75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327508, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.502 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.514 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeeb413-cf09-48b2-af17-5eae5e552048]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327509, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327509, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.516 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.525 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.526 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.526 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.527 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.529 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'keypairs' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.603 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance destroyed successfully.#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.603 243456 DEBUG nova.objects.instance [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.616 243456 DEBUG nova.virt.libvirt.vif [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virt
io',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:46Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.616 243456 DEBUG nova.network.os_vif_util [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.617 243456 DEBUG nova.network.os_vif_util [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.618 243456 DEBUG os_vif [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.619 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc058dd2c-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.623 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.627 243456 INFO os_vif [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.656 243456 DEBUG nova.compute.manager [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.657 243456 DEBUG oslo_concurrency.lockutils [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.657 243456 DEBUG oslo_concurrency.lockutils [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.658 243456 DEBUG oslo_concurrency.lockutils [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.658 243456 DEBUG nova.compute.manager [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.658 243456 DEBUG nova.compute.manager [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.757 243456 DEBUG nova.network.neutron [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updated VIF entry in instance network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.758 243456 DEBUG nova.network.neutron [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.778 243456 DEBUG oslo_concurrency.lockutils [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:19:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:19:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2072605686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.809 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.810 243456 DEBUG nova.virt.libvirt.vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=97,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNIQnNUL9XDPDLTHZklFyfDZYSt9AYWyQxG0r/+WdFB7vyQw7J2acteYFQFpxgWWQ/0J0kXyBJr3KJn/hEaVogEETmejopRSrT8PjwOvTjZAhY243jVoswANs6Qv2qSuA==',key_name='tempest-TestSecurityGroupsBasicOps-146961595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7bnrc9wi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:45Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=e4349bd8-727a-4533-9edd-b2d54353a617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.811 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.812 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.813 243456 DEBUG nova.objects.instance [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4349bd8-727a-4533-9edd-b2d54353a617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.825 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <uuid>e4349bd8-727a-4533-9edd-b2d54353a617</uuid>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <name>instance-00000061</name>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454</nova:name>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:19:49</nova:creationTime>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <nova:port uuid="07b4c83e-2fe2-42c9-a758-c50ddf0919fb">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="serial">e4349bd8-727a-4533-9edd-b2d54353a617</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="uuid">e4349bd8-727a-4533-9edd-b2d54353a617</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/e4349bd8-727a-4533-9edd-b2d54353a617_disk">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/e4349bd8-727a-4533-9edd-b2d54353a617_disk.config">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7d:73:58"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <target dev="tap07b4c83e-2f"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/console.log" append="off"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:19:50 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:19:50 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:19:50 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:19:50 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.827 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Preparing to wait for external event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.828 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.829 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.830 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.831 243456 DEBUG nova.virt.libvirt.vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=97,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNIQnNUL9XDPDLTHZklFyfDZYSt9AYWyQxG0r/+WdFB7vyQw7J2acteYFQFpxgWWQ/0J0kXyBJr3KJn/hEaVogEETmejopRSrT8PjwOvTjZAhY243jVoswANs6Qv2qSuA==',key_name='tempest-TestSecurityGroupsBasicOps-146961595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7bnrc9wi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:45Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=e4349bd8-727a-4533-9edd-b2d54353a617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.832 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.833 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.835 243456 DEBUG os_vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.837 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.838 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07b4c83e-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07b4c83e-2f, col_values=(('external_ids', {'iface-id': '07b4c83e-2fe2-42c9-a758-c50ddf0919fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:73:58', 'vm-uuid': 'e4349bd8-727a-4533-9edd-b2d54353a617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 NetworkManager[49805]: <info>  [1772273990.8477] manager: (tap07b4c83e-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.856 243456 INFO nova.virt.libvirt.driver [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting instance files /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.857 243456 INFO nova.virt.libvirt.driver [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deletion of /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del complete#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.863 243456 INFO os_vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f')#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.874 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating config drive at /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.880 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdf309q3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.934 243456 INFO nova.compute.manager [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.935 243456 DEBUG oslo.service.loopingcall [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.936 243456 DEBUG nova.compute.manager [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.936 243456 DEBUG nova.network.neutron [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.968 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.969 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.970 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:7d:73:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:19:50 np0005634017 nova_compute[243452]: 2026-02-28 10:19:50.971 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Using config drive#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.003 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.034 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdf309q3j" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.065 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.070 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.221 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.223 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deleting local config drive /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue because it was imported into RBD.#033[00m
Feb 28 05:19:51 np0005634017 kernel: tap2f9562b0-54: entered promiscuous mode
Feb 28 05:19:51 np0005634017 systemd-udevd[327482]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:19:51 np0005634017 NetworkManager[49805]: <info>  [1772273991.2673] manager: (tap2f9562b0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00960|binding|INFO|Claiming lport 2f9562b0-54ce-4c24-9341-33a674532bf0 for this chassis.
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00961|binding|INFO|2f9562b0-54ce-4c24-9341-33a674532bf0: Claiming fa:16:3e:db:07:b7 10.100.0.9
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00962|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 ovn-installed in OVS
Feb 28 05:19:51 np0005634017 NetworkManager[49805]: <info>  [1772273991.2781] device (tap2f9562b0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:51 np0005634017 NetworkManager[49805]: <info>  [1772273991.2791] device (tap2f9562b0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00963|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 up in Southbound
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.278 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.280 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.280 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48272dae-5b11-4bd3-9ca3-5e38aa6c6a34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:51 np0005634017 systemd-machined[209480]: New machine qemu-120-instance-00000060.
Feb 28 05:19:51 np0005634017 systemd[1]: Started Virtual Machine qemu-120-instance-00000060.
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.561 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Creating config drive at /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.565 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp3nbjb7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.683 243456 DEBUG nova.network.neutron [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.706 243456 INFO nova.compute.manager [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 0.77 seconds to deallocate network for instance.#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.710 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp3nbjb7e" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.736 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.742 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config e4349bd8-727a-4533-9edd-b2d54353a617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.784 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.785 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.785 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ea5efc55-0a5e-435e-9805-9a9726c17eda due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.786 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273991.739541, ea5efc55-0a5e-435e-9805-9a9726c17eda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.786 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.789 243456 DEBUG nova.compute.manager [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.794 243456 DEBUG nova.compute.manager [req-70abde67-282d-449b-8523-0065eeafb132 req-899f704f-fbb4-45ba-aeb7-d974df9cc90f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-deleted-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.808 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.813 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.840 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.841 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273991.7396615, ea5efc55-0a5e-435e-9805-9a9726c17eda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.841 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Started (Lifecycle Event)#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.866 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.870 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.875 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config e4349bd8-727a-4533-9edd-b2d54353a617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.876 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deleting local config drive /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config because it was imported into RBD.#033[00m
Feb 28 05:19:51 np0005634017 kernel: tap07b4c83e-2f: entered promiscuous mode
Feb 28 05:19:51 np0005634017 NetworkManager[49805]: <info>  [1772273991.9143] manager: (tap07b4c83e-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00964|binding|INFO|Claiming lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb for this chassis.
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00965|binding|INFO|07b4c83e-2fe2-42c9-a758-c50ddf0919fb: Claiming fa:16:3e:7d:73:58 10.100.0.9
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00966|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb ovn-installed in OVS
Feb 28 05:19:51 np0005634017 NetworkManager[49805]: <info>  [1772273991.9252] device (tap07b4c83e-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:19:51 np0005634017 NetworkManager[49805]: <info>  [1772273991.9257] device (tap07b4c83e-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:19:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:51Z|00967|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb up in Southbound
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.926 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.931 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.930 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb bound to our chassis#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.931 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df5b5e58-da82-40fd-b4b8-660edea3cecb#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.941 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[722f2d40-8d3a-40ca-ae3c-c22c571f9736]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.943 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf5b5e58-d1 in ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.945 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf5b5e58-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.945 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34480ff1-0a2b-4c16-ac34-fddf3517fc45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.947 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f2c5e1-498f-4683-b587-97965c96a8a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:51 np0005634017 systemd-machined[209480]: New machine qemu-121-instance-00000061.
Feb 28 05:19:51 np0005634017 nova_compute[243452]: 2026-02-28 10:19:51.950 243456 DEBUG oslo_concurrency.processutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.959 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8dac98-8adc-4fca-89a0-d17e7ccd8915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:51 np0005634017 systemd[1]: Started Virtual Machine qemu-121-instance-00000061.
Feb 28 05:19:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.977 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f784c2c3-4131-4883-a0e8-4b6f806e5257]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.006 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[153ccfe0-4628-4de6-b77e-de01e8e4b8bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 NetworkManager[49805]: <info>  [1772273992.0145] manager: (tapdf5b5e58-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/417)
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.012 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5833434-51b3-49df-abda-3d2ca292973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.042 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[90e0f93f-889d-4ec4-b4bf-c05aa4a83070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.046 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[255042b2-5a21-4cbc-99c6-c84b0b170295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 NetworkManager[49805]: <info>  [1772273992.0735] device (tapdf5b5e58-d0): carrier: link connected
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.079 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbb5193-6827-46e7-9014-926131718d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b4b5db-5b9a-4cfa-a8ed-3613092f9adc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf5b5e58-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:ff:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549935, 'reachable_time': 15255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327786, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.123 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c24dc65-5f79-430e-abbb-6725e328e0e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:ff15'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549935, 'tstamp': 549935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327796, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.139 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e58fbf87-bcf8-44a3-a5c9-0d9666c4be0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf5b5e58-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:ff:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549935, 'reachable_time': 15255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327797, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.177 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[691a0f9a-23d5-4228-af60-b0dab22faeb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.250 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[411cc363-2b95-42de-940c-95825e5bee48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.252 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf5b5e58-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.252 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.253 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf5b5e58-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:52 np0005634017 kernel: tapdf5b5e58-d0: entered promiscuous mode
Feb 28 05:19:52 np0005634017 NetworkManager[49805]: <info>  [1772273992.2560] manager: (tapdf5b5e58-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.260 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf5b5e58-d0, col_values=(('external_ids', {'iface-id': 'ec441ae8-7dea-4a06-ba6a-57dcbc67001f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:19:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:19:52Z|00968|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.263 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df5b5e58-da82-40fd-b4b8-660edea3cecb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df5b5e58-da82-40fd-b4b8-660edea3cecb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.264 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e47c5790-31a0-4065-b594-c5dece26d48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.265 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-df5b5e58-da82-40fd-b4b8-660edea3cecb
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/df5b5e58-da82-40fd-b4b8-660edea3cecb.pid.haproxy
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID df5b5e58-da82-40fd-b4b8-660edea3cecb
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:19:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.266 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'env', 'PROCESS_TAG=haproxy-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df5b5e58-da82-40fd-b4b8-660edea3cecb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:19:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 439 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.9 MiB/s wr, 228 op/s
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.255 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:52 np0005634017 podman[327830]: 2026-02-28 10:19:52.614286848 +0000 UTC m=+0.050565303 container create 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:19:52 np0005634017 systemd[1]: Started libpod-conmon-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c.scope.
Feb 28 05:19:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230983670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:19:52 np0005634017 podman[327830]: 2026-02-28 10:19:52.584918501 +0000 UTC m=+0.021196976 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:19:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/835d604bdbc864696e0b1086b090dae849a757f07b6c5ac638b1186ab31417bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:19:52 np0005634017 podman[327830]: 2026-02-28 10:19:52.696641625 +0000 UTC m=+0.132920110 container init 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.696 243456 DEBUG oslo_concurrency.processutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.746s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:52 np0005634017 podman[327830]: 2026-02-28 10:19:52.702886963 +0000 UTC m=+0.139165408 container start 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.706 243456 DEBUG nova.compute.provider_tree [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:52 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : New worker (327854) forked
Feb 28 05:19:52 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : Loading success.
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.726 243456 DEBUG nova.scheduler.client.report [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.762 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.792 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.793 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.793 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.794 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.794 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.794 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.795 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.795 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.795 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.796 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.796 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.796 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state rescued and task_state None.#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.798 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.798 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state rescued and task_state None.#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.798 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Processing event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.801 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.801 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received unexpected event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.803 243456 INFO nova.scheduler.client.report [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Deleted allocations for instance 6a659835-f144-4e34-87ec-3b37ff81b0d1#033[00m
Feb 28 05:19:52 np0005634017 nova_compute[243452]: 2026-02-28 10:19:52.879 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.143 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273993.1424856, e4349bd8-727a-4533-9edd-b2d54353a617 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.143 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Started (Lifecycle Event)#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.145 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.148 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.151 243456 INFO nova.virt.libvirt.driver [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance spawned successfully.#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.151 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.165 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.170 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.174 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.176 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.177 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.177 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.178 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.178 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.186 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.187 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273993.142695, e4349bd8-727a-4533-9edd-b2d54353a617 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.187 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.218 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273993.147844, e4349bd8-727a-4533-9edd-b2d54353a617 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.218 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.242 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.246 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.255 243456 INFO nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 7.80 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.256 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.267 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.327 243456 INFO nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 8.90 seconds to build instance.#033[00m
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.348 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb 28 05:19:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb 28 05:19:53 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb 28 05:19:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:53 np0005634017 nova_compute[243452]: 2026-02-28 10:19:53.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 437 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.0 MiB/s wr, 262 op/s
Feb 28 05:19:55 np0005634017 nova_compute[243452]: 2026-02-28 10:19:55.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.4 MiB/s wr, 327 op/s
Feb 28 05:19:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:57.858 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:19:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:57 np0005634017 nova_compute[243452]: 2026-02-28 10:19:57.928 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:57 np0005634017 nova_compute[243452]: 2026-02-28 10:19:57.929 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:57 np0005634017 nova_compute[243452]: 2026-02-28 10:19:57.948 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.013 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.014 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.020 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.021 243456 INFO nova.compute.claims [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.239 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.302 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.302 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.314 243456 DEBUG oslo_concurrency.lockutils [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.314 243456 DEBUG oslo_concurrency.lockutils [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.315 243456 DEBUG nova.compute.manager [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.316 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.324 243456 DEBUG nova.compute.manager [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.325 243456 DEBUG nova.objects.instance [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.0 MiB/s wr, 310 op/s
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.361 243456 DEBUG nova.virt.libvirt.driver [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.414 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:19:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/228876417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.810 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.815 243456 DEBUG nova.compute.provider_tree [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.835 243456 DEBUG nova.scheduler.client.report [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.882 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.883 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.890 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.899 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.901 243456 INFO nova.compute.claims [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.942 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.942 243456 DEBUG nova.network.neutron [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.964 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:19:58 np0005634017 nova_compute[243452]: 2026-02-28 10:19:58.982 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.071 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.073 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.074 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Creating image(s)#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.110 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.132 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.155 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.159 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "e8bb49cc096e9adbbb6156bde35e9bc9f9bdd7c5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.159 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "e8bb49cc096e9adbbb6156bde35e9bc9f9bdd7c5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.179 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.280 243456 DEBUG nova.network.neutron [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.281 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.437 243456 DEBUG nova.virt.libvirt.imagebackend [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/1583781d-ec8c-4060-a2ad-53d52445d23e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/1583781d-ec8c-4060-a2ad-53d52445d23e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.499 243456 DEBUG nova.virt.libvirt.imagebackend [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/1583781d-ec8c-4060-a2ad-53d52445d23e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.500 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] cloning images/1583781d-ec8c-4060-a2ad-53d52445d23e@snap to None/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.590 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "e8bb49cc096e9adbbb6156bde35e9bc9f9bdd7c5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:19:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239131865' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.729 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.742 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] resizing rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.793 243456 DEBUG nova.compute.provider_tree [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.816 243456 DEBUG nova.objects.instance [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lazy-loading 'migration_context' on Instance uuid 98504b0a-8c47-4488-b870-9fb9ebfa3e59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.821 243456 DEBUG nova.scheduler.client.report [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.839 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.840 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Ensure instance console log exists: /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.841 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.841 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.842 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.843 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='8dbf5d7f5ae126eb3d2e8c6f3c6631aa',container_format='bare',created_at=2026-02-28T10:19:52Z,direct_url=<?>,disk_format='raw',id=1583781d-ec8c-4060-a2ad-53d52445d23e,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1415239692',owner='f743c06bc2ae45fda68427b8418baef8',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-28T10:19:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '1583781d-ec8c-4060-a2ad-53d52445d23e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.848 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.849 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.853 243456 WARNING nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.860 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.861 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.864 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.864 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.865 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.865 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='8dbf5d7f5ae126eb3d2e8c6f3c6631aa',container_format='bare',created_at=2026-02-28T10:19:52Z,direct_url=<?>,disk_format='raw',id=1583781d-ec8c-4060-a2ad-53d52445d23e,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1415239692',owner='f743c06bc2ae45fda68427b8418baef8',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-28T10:19:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.865 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.870 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.905 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.905 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.929 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:19:59 np0005634017 nova_compute[243452]: 2026-02-28 10:19:59.946 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.058 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.059 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.060 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating image(s)#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.082 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.105 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.127 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.132 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG nova.compute.manager [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG nova.compute.manager [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing instance network info cache due to event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG oslo_concurrency.lockutils [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG oslo_concurrency.lockutils [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.199 243456 DEBUG nova.network.neutron [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.209 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.209 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.210 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.210 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.236 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.241 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:20:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.4 MiB/s wr, 271 op/s
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.359 243456 DEBUG nova.policy [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d03850a765742908401b28b9f983e96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3882eded03594958a2e5d10832a6c3a9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:20:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833492996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.489 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.522 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.547 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.552 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:00 np0005634017 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 05:20:00 np0005634017 NetworkManager[49805]: <info>  [1772274000.6143] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:00Z|00969|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 05:20:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:00Z|00970|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 05:20:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:00Z|00971|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.642 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '10', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.643 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.644 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.649 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f171366-8fee-4b8d-9de7-81a416a88499]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.650 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:00 np0005634017 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 05:20:00 np0005634017 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005a.scope: Consumed 15.014s CPU time.
Feb 28 05:20:00 np0005634017 systemd-machined[209480]: Machine qemu-114-instance-0000005a terminated.
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.676 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] resizing rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:20:00 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [NOTICE]   (324276) : haproxy version is 2.8.14-c23fe91
Feb 28 05:20:00 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [NOTICE]   (324276) : path to executable is /usr/sbin/haproxy
Feb 28 05:20:00 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [WARNING]  (324276) : Exiting Master process...
Feb 28 05:20:00 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [ALERT]    (324276) : Current worker (324278) exited with code 143 (Terminated)
Feb 28 05:20:00 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [WARNING]  (324276) : All workers exited. Exiting... (0)
Feb 28 05:20:00 np0005634017 systemd[1]: libpod-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3.scope: Deactivated successfully.
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.791 243456 DEBUG nova.objects.instance [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:00 np0005634017 podman[328375]: 2026-02-28 10:20:00.796970334 +0000 UTC m=+0.050199222 container died fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.807 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.808 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Ensure instance console log exists: /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.808 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.809 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.809 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3-userdata-shm.mount: Deactivated successfully.
Feb 28 05:20:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-503d7322e1f253bb2450dad4bde36cc78f849c077ea3c594879c4d79193de54d-merged.mount: Deactivated successfully.
Feb 28 05:20:00 np0005634017 podman[328375]: 2026-02-28 10:20:00.837264233 +0000 UTC m=+0.090493101 container cleanup fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:20:00 np0005634017 systemd[1]: libpod-conmon-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3.scope: Deactivated successfully.
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:00 np0005634017 podman[328423]: 2026-02-28 10:20:00.913058583 +0000 UTC m=+0.045483348 container remove fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.919 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aac1a205-c2c0-4da6-b7ba-0e7cdb0d8f1b]: (4, ('Sat Feb 28 10:20:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3)\nfe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3\nSat Feb 28 10:20:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3)\nfe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.921 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fc671fa2-6a8d-4be6-ad4b-cc07dab37af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.922 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:00 np0005634017 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:00 np0005634017 nova_compute[243452]: 2026-02-28 10:20:00.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.940 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f82c41a1-c7be-4c0b-9e70-eca8b99a2018]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14c757ba-fb79-4768-872d-6d9e96d9f9bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.957 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba88b635-e15c-4e02-a451-bd614af43523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[875fbe60-d3f9-4c2e-8bc7-9f67ad220b00]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543458, 'reachable_time': 38409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328449, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.974 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:20:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.974 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdba2b7-9ab7-4337-98f1-2b3c79954671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:00 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 05:20:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439465713' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.157 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.159 243456 DEBUG nova.objects.instance [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98504b0a-8c47-4488-b870-9fb9ebfa3e59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.177 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <uuid>98504b0a-8c47-4488-b870-9fb9ebfa3e59</uuid>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <name>instance-00000062</name>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:name>instance-depend-image</nova:name>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:19:59</nova:creationTime>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:user uuid="05780ead76294473ae8c8fc112f7610d">tempest-ImageDependencyTests-2065005612-project-member</nova:user>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <nova:project uuid="f743c06bc2ae45fda68427b8418baef8">tempest-ImageDependencyTests-2065005612</nova:project>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="1583781d-ec8c-4060-a2ad-53d52445d23e"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <entry name="serial">98504b0a-8c47-4488-b870-9fb9ebfa3e59</entry>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <entry name="uuid">98504b0a-8c47-4488-b870-9fb9ebfa3e59</entry>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/console.log" append="off"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:20:01 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:20:01 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:20:01 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:20:01 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.225 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.227 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.228 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Using config drive#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.250 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.435 243456 INFO nova.virt.libvirt.driver [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance shutdown successfully after 3 seconds.#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.443 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.444 243456 DEBUG nova.objects.instance [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.466 243456 DEBUG nova.compute.manager [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.535 243456 DEBUG oslo_concurrency.lockutils [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.567 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Creating config drive at /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.575 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg50prgut execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.726 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg50prgut" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.759 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.764 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:01 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.942 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:01 np0005634017 nova_compute[243452]: 2026-02-28 10:20:01.943 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deleting local config drive /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config because it was imported into RBD.#033[00m
Feb 28 05:20:02 np0005634017 systemd-machined[209480]: New machine qemu-122-instance-00000062.
Feb 28 05:20:02 np0005634017 systemd[1]: Started Virtual Machine qemu-122-instance-00000062.
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.079 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Successfully created port: a98f753e-a6d6-4d97-b307-f08d35a37f1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.277 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.278 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.279 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.279 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.279 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.280 243456 WARNING nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.280 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.280 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.281 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.281 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.281 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.282 243456 WARNING nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:20:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 408 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 793 KiB/s wr, 265 op/s
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.710 243456 DEBUG nova.network.neutron [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updated VIF entry in instance network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.710 243456 DEBUG nova.network.neutron [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.739 243456 DEBUG oslo_concurrency.lockutils [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.814 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274002.8143718, 98504b0a-8c47-4488-b870-9fb9ebfa3e59 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.815 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.819 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.820 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.825 243456 INFO nova.virt.libvirt.driver [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance spawned successfully.#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.826 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.853 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.863 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.872 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.873 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.874 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.875 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.876 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.878 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.888 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.889 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274002.8154237, 98504b0a-8c47-4488-b870-9fb9ebfa3e59 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.890 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.921 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.926 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.953 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.960 243456 INFO nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 3.89 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:20:02 np0005634017 nova_compute[243452]: 2026-02-28 10:20:02.961 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.047 243456 INFO nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 5.06 seconds to build instance.#033[00m
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.051 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Successfully updated port: a98f753e-a6d6-4d97-b307-f08d35a37f1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.070 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.071 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.071 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.074 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:03 np0005634017 nova_compute[243452]: 2026-02-28 10:20:03.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:04 np0005634017 nova_compute[243452]: 2026-02-28 10:20:04.296 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:20:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 422 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 526 KiB/s wr, 246 op/s
Feb 28 05:20:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:04Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:73:58 10.100.0.9
Feb 28 05:20:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:04Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:73:58 10.100.0.9
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.464 243456 DEBUG nova.compute.manager [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-changed-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.464 243456 DEBUG nova.compute.manager [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Refreshing instance network info cache due to event network-changed-a98f753e-a6d6-4d97-b307-f08d35a37f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.464 243456 DEBUG oslo_concurrency.lockutils [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.601 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273990.6005676, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.602 243456 INFO nova.compute.manager [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.622 243456 DEBUG nova.compute.manager [None req-d945d424-f03c-490c-9313-edf621b89489 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:05 np0005634017 nova_compute[243452]: 2026-02-28 10:20:05.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 471 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 335 op/s
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.397 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.421 243456 DEBUG oslo_concurrency.lockutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.422 243456 DEBUG oslo_concurrency.lockutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.422 243456 DEBUG nova.network.neutron [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.423 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'info_cache' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.820 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.845 243456 DEBUG nova.compute.manager [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.846 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.846 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance network_info: |[{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.847 243456 DEBUG oslo_concurrency.lockutils [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.848 243456 DEBUG nova.network.neutron [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Refreshing network info cache for port a98f753e-a6d6-4d97-b307-f08d35a37f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.851 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start _get_guest_xml network_info=[{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.861 243456 WARNING nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.867 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.867 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.873 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.874 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.874 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.874 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.875 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.875 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.877 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.877 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.877 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.878 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.881 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:06 np0005634017 nova_compute[243452]: 2026-02-28 10:20:06.925 243456 INFO nova.compute.manager [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] instance snapshotting#033[00m
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.109 243456 INFO nova.virt.libvirt.driver [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Beginning live snapshot process#033[00m
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.294 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] creating snapshot(61830609917343a3bcc672a9878daa2d) on rbd image(98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:20:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635222755' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.436 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.454 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.458 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb 28 05:20:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb 28 05:20:07 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.551 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] cloning vms/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk@61830609917343a3bcc672a9878daa2d to images/818dfec7-2d43-4696-a4bb-91240db544c9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.662 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] flattening images/818dfec7-2d43-4696-a4bb-91240db544c9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:20:07 np0005634017 nova_compute[243452]: 2026-02-28 10:20:07.796 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] removing snapshot(61830609917343a3bcc672a9878daa2d) on rbd image(98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:20:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085660845' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.029 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.031 243456 DEBUG nova.virt.libvirt.vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-21
01936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:59Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.032 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.032 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.034 243456 DEBUG nova.objects.instance [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.057 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <uuid>080f8608-f57f-4ffa-a966-ae62df8f6f9b</uuid>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <name>instance-00000063</name>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueTestJSON-server-1593211213</nova:name>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:20:06</nova:creationTime>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <nova:port uuid="a98f753e-a6d6-4d97-b307-f08d35a37f1f">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <entry name="serial">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <entry name="uuid">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:71:c9:51"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <target dev="tapa98f753e-a6"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/console.log" append="off"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:20:08 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:20:08 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:20:08 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:20:08 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.059 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Preparing to wait for external event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.059 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.060 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.060 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.060 243456 DEBUG nova.virt.libvirt.vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueT
estJSON-2101936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:59Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.061 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.061 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.062 243456 DEBUG os_vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.063 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.063 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.067 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.068 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa98f753e-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.068 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa98f753e-a6, col_values=(('external_ids', {'iface-id': 'a98f753e-a6d6-4d97-b307-f08d35a37f1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:c9:51', 'vm-uuid': '080f8608-f57f-4ffa-a966-ae62df8f6f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:08 np0005634017 NetworkManager[49805]: <info>  [1772274008.0723] manager: (tapa98f753e-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.080 243456 INFO os_vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6')#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.149 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.150 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.150 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:71:c9:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.151 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Using config drive#033[00m
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.173 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 235 op/s
Feb 28 05:20:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb 28 05:20:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb 28 05:20:08 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.567 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] creating snapshot(snap) on rbd image(818dfec7-2d43-4696-a4bb-91240db544c9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:20:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:08 np0005634017 nova_compute[243452]: 2026-02-28 10:20:08.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.342 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating config drive at /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config#033[00m
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.350 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiou81cvo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.502 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiou81cvo" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.541 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.545 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:09 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.699 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.700 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deleting local config drive /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config because it was imported into RBD.#033[00m
Feb 28 05:20:09 np0005634017 NetworkManager[49805]: <info>  [1772274009.7462] manager: (tapa98f753e-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Feb 28 05:20:09 np0005634017 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 05:20:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:09Z|00972|binding|INFO|Claiming lport a98f753e-a6d6-4d97-b307-f08d35a37f1f for this chassis.
Feb 28 05:20:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:09Z|00973|binding|INFO|a98f753e-a6d6-4d97-b307-f08d35a37f1f: Claiming fa:16:3e:71:c9:51 10.100.0.5
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:09Z|00974|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f ovn-installed in OVS
Feb 28 05:20:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:09Z|00975|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f up in Southbound
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.766 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:09 np0005634017 nova_compute[243452]: 2026-02-28 10:20:09.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.768 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis#033[00m
Feb 28 05:20:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.770 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77bd0630-b31d-45df-af23-fe505a3dc9d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:09 np0005634017 systemd-udevd[328853]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:09 np0005634017 systemd-machined[209480]: New machine qemu-123-instance-00000063.
Feb 28 05:20:09 np0005634017 NetworkManager[49805]: <info>  [1772274009.7894] device (tapa98f753e-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:20:09 np0005634017 NetworkManager[49805]: <info>  [1772274009.7899] device (tapa98f753e-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:20:09 np0005634017 systemd[1]: Started Virtual Machine qemu-123-instance-00000063.
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.159 243456 DEBUG nova.network.neutron [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updated VIF entry in instance network info cache for port a98f753e-a6d6-4d97-b307-f08d35a37f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.160 243456 DEBUG nova.network.neutron [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.164 243456 DEBUG nova.network.neutron [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.183 243456 DEBUG oslo_concurrency.lockutils [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.184 243456 DEBUG oslo_concurrency.lockutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.217 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.218 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.234 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274010.233177, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.234 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.239 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.254 243456 DEBUG nova.virt.libvirt.vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.255 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.256 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.256 243456 DEBUG os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.260 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.262 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.267 243456 INFO os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.268 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274010.233379, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.269 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.277 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start _get_guest_xml network_info=[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.280 243456 WARNING nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.285 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.286 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.286 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.291 243456 DEBUG nova.compute.manager [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.292 243456 DEBUG oslo_concurrency.lockutils [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.292 243456 DEBUG oslo_concurrency.lockutils [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.292 243456 DEBUG oslo_concurrency.lockutils [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.293 243456 DEBUG nova.compute.manager [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Processing event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.294 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.295 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.296 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.297 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.297 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.298 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.298 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.299 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.299 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.299 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.301 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.301 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.303 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.306 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.308 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance spawned successfully.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.308 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.327 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.0 MiB/s wr, 336 op/s
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.375 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.376 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274010.297266, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.376 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.387 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.388 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.389 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.389 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.390 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.390 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.428 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.435 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.459 243456 INFO nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 10.40 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.459 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.460 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.518 243456 INFO nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 12.14 seconds to build instance.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.535 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2539464663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.940 243456 INFO nova.virt.libvirt.driver [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Snapshot image upload complete#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.941 243456 INFO nova.compute.manager [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 4.01 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:20:10 np0005634017 nova_compute[243452]: 2026-02-28 10:20:10.954 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.006 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016280227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.638 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.639 243456 DEBUG nova.virt.libvirt.vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.639 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.640 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.641 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.656 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <uuid>690896df-6307-469c-9685-325a61a62b88</uuid>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <name>instance-0000005a</name>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerActionsTestJSON-server-49030969</nova:name>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:20:10</nova:creationTime>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <nova:port uuid="ed25d1f8-c3a0-43d4-b57e-12b647a48b3c">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <entry name="serial">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <entry name="uuid">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk.config">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:f6:05:21"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <target dev="taped25d1f8-c3"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log" append="off"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <input type="keyboard" bus="usb"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:20:11 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:20:11 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:20:11 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:20:11 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.658 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.658 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.659 243456 DEBUG nova.virt.libvirt.vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.659 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.660 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.660 243456 DEBUG os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.662 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.662 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.664 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 NetworkManager[49805]: <info>  [1772274011.6676] manager: (taped25d1f8-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.673 243456 INFO os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')#033[00m
Feb 28 05:20:11 np0005634017 NetworkManager[49805]: <info>  [1772274011.7284] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Feb 28 05:20:11 np0005634017 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 05:20:11 np0005634017 systemd-udevd[328855]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:11Z|00976|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 05:20:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:11Z|00977|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 05:20:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:11Z|00978|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.742 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '11', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.743 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis#033[00m
Feb 28 05:20:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:11Z|00979|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.747 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb#033[00m
Feb 28 05:20:11 np0005634017 nova_compute[243452]: 2026-02-28 10:20:11.746 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:11 np0005634017 NetworkManager[49805]: <info>  [1772274011.7495] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:20:11 np0005634017 NetworkManager[49805]: <info>  [1772274011.7499] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.761 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2070fc36-9c41-47ad-a452-210dd0865102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.762 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.764 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7262f7f-cb7d-4fba-89b5-455a5aea5fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 systemd-machined[209480]: New machine qemu-124-instance-0000005a.
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a27af1-81ed-49c5-8c8b-0cf9e9dc317f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.774 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[24641404-c9ca-446e-bee6-3ea61756ef32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.783 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac6925b-9539-4559-8491-c8429632062f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 systemd[1]: Started Virtual Machine qemu-124-instance-0000005a.
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.802 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[32cf1cae-9b77-402c-a1e2-416e3a666efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.807 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69c7a063-74c6-4bed-9702-3d7da44fef26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 NetworkManager[49805]: <info>  [1772274011.8104] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.832 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9fee1723-e248-4e4e-a36f-168324bf26dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.835 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[38fdb5eb-cac5-47e9-ba68-3d9e63693fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 NetworkManager[49805]: <info>  [1772274011.8631] device (tap8082b9e7-a0): carrier: link connected
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.865 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60a2e09c-79bc-4fc9-b837-e7a0bd6f62d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.879 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16c4514b-7d8e-4f1f-a112-cffcedcedd68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551914, 'reachable_time': 39192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329014, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7df5f40-fa06-4e5b-bf65-7cd31ede811b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551914, 'tstamp': 551914}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329015, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.901 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c00acc-cbdd-4623-b6b7-984b47fa1bca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551914, 'reachable_time': 39192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329016, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.933 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2911363b-566f-49ae-afe4-c7b16bcd2469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f85df9-ca68-4efe-905a-d8c31cf8d6a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.998 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.998 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.999 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:12 np0005634017 NetworkManager[49805]: <info>  [1772274012.0017] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:12 np0005634017 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.006 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:12Z|00980|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.010 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.010 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1dc0e8-9ae6-417e-8306-eaab8e78d258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.011 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.013 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.013 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:20:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 663 KiB/s rd, 2.3 MiB/s wr, 173 op/s
Feb 28 05:20:12 np0005634017 podman[329066]: 2026-02-28 10:20:12.397854377 +0000 UTC m=+0.056584414 container create 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.425 243456 DEBUG nova.compute.manager [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.426 243456 DEBUG oslo_concurrency.lockutils [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 DEBUG oslo_concurrency.lockutils [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 DEBUG oslo_concurrency.lockutils [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 DEBUG nova.compute.manager [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 WARNING nova.compute.manager [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state None.#033[00m
Feb 28 05:20:12 np0005634017 systemd[1]: Started libpod-conmon-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope.
Feb 28 05:20:12 np0005634017 podman[329066]: 2026-02-28 10:20:12.366933886 +0000 UTC m=+0.025663943 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:20:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec2b40b2212a11bdcbb192052f2bdb44e9ffa51d1597581290bbd158ef73b64/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.485 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.486 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274012.4851067, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.486 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.488 243456 DEBUG nova.compute.manager [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.494 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance rebooted successfully.#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.494 243456 DEBUG nova.compute.manager [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:12 np0005634017 podman[329066]: 2026-02-28 10:20:12.495383377 +0000 UTC m=+0.154113434 container init 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:12 np0005634017 podman[329066]: 2026-02-28 10:20:12.509478659 +0000 UTC m=+0.168208686 container start 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:20:12 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : New worker (329112) forked
Feb 28 05:20:12 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : Loading success.
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.543 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.549 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.572 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274012.485372, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.573 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb 28 05:20:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb 28 05:20:12 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.597 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:12 np0005634017 nova_compute[243452]: 2026-02-28 10:20:12.603 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.030 243456 DEBUG nova.compute.manager [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.031 243456 DEBUG oslo_concurrency.lockutils [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.032 243456 DEBUG oslo_concurrency.lockutils [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.033 243456 DEBUG oslo_concurrency.lockutils [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.033 243456 DEBUG nova.compute.manager [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.034 243456 WARNING nova.compute.manager [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.145 243456 INFO nova.compute.manager [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Rescuing#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.146 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.146 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.147 243456 DEBUG nova.network.neutron [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.215 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.216 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.217 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.218 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.219 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.221 243456 INFO nova.compute.manager [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Terminating instance#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.224 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "refresh_cache-98504b0a-8c47-4488-b870-9fb9ebfa3e59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.225 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquired lock "refresh_cache-98504b0a-8c47-4488-b870-9fb9ebfa3e59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.226 243456 DEBUG nova.network.neutron [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.418 243456 DEBUG nova.network.neutron [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:20:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.703 243456 DEBUG nova.network.neutron [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.722 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Releasing lock "refresh_cache-98504b0a-8c47-4488-b870-9fb9ebfa3e59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.724 243456 DEBUG nova.compute.manager [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:20:13 np0005634017 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Deactivated successfully.
Feb 28 05:20:13 np0005634017 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Consumed 1.179s CPU time.
Feb 28 05:20:13 np0005634017 systemd-machined[209480]: Machine qemu-122-instance-00000062 terminated.
Feb 28 05:20:13 np0005634017 podman[329122]: 2026-02-28 10:20:13.855116592 +0000 UTC m=+0.063844041 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:20:13 np0005634017 podman[329121]: 2026-02-28 10:20:13.880499735 +0000 UTC m=+0.090728797 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.933 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.951 243456 INFO nova.virt.libvirt.driver [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance destroyed successfully.#033[00m
Feb 28 05:20:13 np0005634017 nova_compute[243452]: 2026-02-28 10:20:13.951 243456 DEBUG nova.objects.instance [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lazy-loading 'resources' on Instance uuid 98504b0a-8c47-4488-b870-9fb9ebfa3e59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 156 KiB/s wr, 187 op/s
Feb 28 05:20:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb 28 05:20:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb 28 05:20:14 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.805 243456 INFO nova.virt.libvirt.driver [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deleting instance files /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59_del#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.808 243456 INFO nova.virt.libvirt.driver [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deletion of /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59_del complete#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.815 243456 DEBUG nova.network.neutron [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.852 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.881 243456 INFO nova.compute.manager [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.882 243456 DEBUG oslo.service.loopingcall [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.883 243456 DEBUG nova.compute.manager [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:20:14 np0005634017 nova_compute[243452]: 2026-02-28 10:20:14.883 243456 DEBUG nova.network.neutron [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.112 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.273 243456 DEBUG nova.network.neutron [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.293 243456 DEBUG nova.network.neutron [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.310 243456 INFO nova.compute.manager [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 0.43 seconds to deallocate network for instance.#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.368 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.370 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.410 243456 DEBUG nova.compute.manager [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.411 243456 DEBUG oslo_concurrency.lockutils [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.411 243456 DEBUG oslo_concurrency.lockutils [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.412 243456 DEBUG oslo_concurrency.lockutils [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.412 243456 DEBUG nova.compute.manager [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.412 243456 WARNING nova.compute.manager [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.#033[00m
Feb 28 05:20:15 np0005634017 nova_compute[243452]: 2026-02-28 10:20:15.516 243456 DEBUG oslo_concurrency.processutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692883052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.159 243456 DEBUG oslo_concurrency.processutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.168 243456 DEBUG nova.compute.provider_tree [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.196 243456 DEBUG nova.scheduler.client.report [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.246 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.283 243456 INFO nova.scheduler.client.report [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Deleted allocations for instance 98504b0a-8c47-4488-b870-9fb9ebfa3e59#033[00m
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.347 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 488 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 56 KiB/s wr, 400 op/s
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:16 np0005634017 nova_compute[243452]: 2026-02-28 10:20:16.978 243456 DEBUG nova.objects.instance [None req-9940bc3f-d8c6-4eee-ba93-d095bd7d653c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.007 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274017.0068257, 690896df-6307-469c-9685-325a61a62b88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.007 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.027 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.037 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.059 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:20:17 np0005634017 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 05:20:17 np0005634017 NetworkManager[49805]: <info>  [1772274017.4716] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:17Z|00981|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 05:20:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:17Z|00982|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 05:20:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:17Z|00983|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.494 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '12', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.496 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.499 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.500 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e96454-aa85-4502-bd96-d4d69cb36fcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.500 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore#033[00m
Feb 28 05:20:17 np0005634017 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 05:20:17 np0005634017 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d0000005a.scope: Consumed 5.402s CPU time.
Feb 28 05:20:17 np0005634017 systemd-machined[209480]: Machine qemu-124-instance-0000005a terminated.
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.657 243456 DEBUG nova.compute.manager [None req-9940bc3f-d8c6-4eee-ba93-d095bd7d653c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:17 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : haproxy version is 2.8.14-c23fe91
Feb 28 05:20:17 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : path to executable is /usr/sbin/haproxy
Feb 28 05:20:17 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [ALERT]    (329110) : Current worker (329112) exited with code 143 (Terminated)
Feb 28 05:20:17 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [WARNING]  (329110) : All workers exited. Exiting... (0)
Feb 28 05:20:17 np0005634017 systemd[1]: libpod-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope: Deactivated successfully.
Feb 28 05:20:17 np0005634017 conmon[329105]: conmon 08fad2a274be7c8ad1d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope/container/memory.events
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.681 243456 DEBUG nova.compute.manager [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.681 243456 DEBUG oslo_concurrency.lockutils [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 DEBUG oslo_concurrency.lockutils [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 DEBUG oslo_concurrency.lockutils [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 DEBUG nova.compute.manager [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 WARNING nova.compute.manager [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state suspending.#033[00m
Feb 28 05:20:17 np0005634017 podman[329279]: 2026-02-28 10:20:17.688098401 +0000 UTC m=+0.064930602 container died 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3-userdata-shm.mount: Deactivated successfully.
Feb 28 05:20:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1ec2b40b2212a11bdcbb192052f2bdb44e9ffa51d1597581290bbd158ef73b64-merged.mount: Deactivated successfully.
Feb 28 05:20:17 np0005634017 podman[329279]: 2026-02-28 10:20:17.732911458 +0000 UTC m=+0.109743649 container cleanup 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:20:17 np0005634017 systemd[1]: libpod-conmon-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope: Deactivated successfully.
Feb 28 05:20:17 np0005634017 podman[329318]: 2026-02-28 10:20:17.803284734 +0000 UTC m=+0.048772371 container remove 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[721e9807-7a6f-47ac-99f4-dd3d3e83af83]: (4, ('Sat Feb 28 10:20:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3)\n08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3\nSat Feb 28 10:20:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3)\n08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.811 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e071f7af-93ba-4b76-b275-741f57256678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.812 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:17 np0005634017 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.821 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:17 np0005634017 nova_compute[243452]: 2026-02-28 10:20:17.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.829 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cab3ace5-a4cb-4085-a94b-887f0a8200a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9da2f-59ea-4a85-957a-0e84c52a7e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd86623-6f6e-4c24-aa71-6ddc2c70be37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf490d7-2318-41ee-bb60-cd3e3852428b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551908, 'reachable_time': 42459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329351, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.860 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.860 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9c07dce6-9768-4230-a3e6-bf919ec96b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.861 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:20:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 488 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 48 KiB/s wr, 417 op/s
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.534671861 +0000 UTC m=+0.055940966 container create b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 05:20:18 np0005634017 systemd[1]: Started libpod-conmon-b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef.scope.
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.506739275 +0000 UTC m=+0.028008420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:20:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.622329029 +0000 UTC m=+0.143598104 container init b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.631988875 +0000 UTC m=+0.153257980 container start b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.636476472 +0000 UTC m=+0.157745577 container attach b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:20:18 np0005634017 sweet_cray[329446]: 167 167
Feb 28 05:20:18 np0005634017 systemd[1]: libpod-b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef.scope: Deactivated successfully.
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.637653676 +0000 UTC m=+0.158922781 container died b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb 28 05:20:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb 28 05:20:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-77a3e58e80279d8d9443543f6edd42033f6d8c987c1d59c17a3633c2e5f828f5-merged.mount: Deactivated successfully.
Feb 28 05:20:18 np0005634017 podman[329430]: 2026-02-28 10:20:18.690217674 +0000 UTC m=+0.211486749 container remove b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 05:20:18 np0005634017 systemd[1]: libpod-conmon-b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef.scope: Deactivated successfully.
Feb 28 05:20:18 np0005634017 podman[329470]: 2026-02-28 10:20:18.889089283 +0000 UTC m=+0.059103366 container create 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:20:18 np0005634017 systemd[1]: Started libpod-conmon-7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad.scope.
Feb 28 05:20:18 np0005634017 nova_compute[243452]: 2026-02-28 10:20:18.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:18 np0005634017 podman[329470]: 2026-02-28 10:20:18.867645961 +0000 UTC m=+0.037660094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:20:18 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:18 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:18 np0005634017 podman[329470]: 2026-02-28 10:20:18.987014554 +0000 UTC m=+0.157028657 container init 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:20:18 np0005634017 podman[329470]: 2026-02-28 10:20:18.993718665 +0000 UTC m=+0.163732758 container start 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:18 np0005634017 podman[329470]: 2026-02-28 10:20:18.99704742 +0000 UTC m=+0.167061513 container attach 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:20:19 np0005634017 nova_compute[243452]: 2026-02-28 10:20:19.218 243456 INFO nova.compute.manager [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Resuming#033[00m
Feb 28 05:20:19 np0005634017 nova_compute[243452]: 2026-02-28 10:20:19.220 243456 DEBUG nova.objects.instance [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:19 np0005634017 nova_compute[243452]: 2026-02-28 10:20:19.267 243456 DEBUG oslo_concurrency.lockutils [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:19 np0005634017 nova_compute[243452]: 2026-02-28 10:20:19.268 243456 DEBUG oslo_concurrency.lockutils [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:19 np0005634017 nova_compute[243452]: 2026-02-28 10:20:19.269 243456 DEBUG nova.network.neutron [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:19 np0005634017 gallant_ritchie[329488]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:20:19 np0005634017 gallant_ritchie[329488]: --> All data devices are unavailable
Feb 28 05:20:19 np0005634017 systemd[1]: libpod-7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad.scope: Deactivated successfully.
Feb 28 05:20:19 np0005634017 podman[329470]: 2026-02-28 10:20:19.471208274 +0000 UTC m=+0.641222377 container died 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:20:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a-merged.mount: Deactivated successfully.
Feb 28 05:20:19 np0005634017 podman[329470]: 2026-02-28 10:20:19.552709868 +0000 UTC m=+0.722723961 container remove 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:20:19 np0005634017 systemd[1]: libpod-conmon-7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad.scope: Deactivated successfully.
Feb 28 05:20:19 np0005634017 podman[329584]: 2026-02-28 10:20:19.971425322 +0000 UTC m=+0.039078185 container create 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:20:20 np0005634017 systemd[1]: Started libpod-conmon-0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898.scope.
Feb 28 05:20:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:20 np0005634017 podman[329584]: 2026-02-28 10:20:20.046650936 +0000 UTC m=+0.114303699 container init 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 05:20:20 np0005634017 podman[329584]: 2026-02-28 10:20:19.953769119 +0000 UTC m=+0.021421922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:20:20 np0005634017 podman[329584]: 2026-02-28 10:20:20.054120859 +0000 UTC m=+0.121773612 container start 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:20:20 np0005634017 nostalgic_johnson[329601]: 167 167
Feb 28 05:20:20 np0005634017 systemd[1]: libpod-0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898.scope: Deactivated successfully.
Feb 28 05:20:20 np0005634017 podman[329584]: 2026-02-28 10:20:20.058103303 +0000 UTC m=+0.125756156 container attach 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:20:20 np0005634017 podman[329584]: 2026-02-28 10:20:20.058490944 +0000 UTC m=+0.126143737 container died 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.078 243456 DEBUG nova.compute.manager [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.080 243456 DEBUG oslo_concurrency.lockutils [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 DEBUG oslo_concurrency.lockutils [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 DEBUG oslo_concurrency.lockutils [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 DEBUG nova.compute.manager [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 WARNING nova.compute.manager [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state suspended and task_state resuming.#033[00m
Feb 28 05:20:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5ebd8e1e706f20511262fd899816cf64f5e10436e51ed37b234fa5b1c5a0f425-merged.mount: Deactivated successfully.
Feb 28 05:20:20 np0005634017 podman[329584]: 2026-02-28 10:20:20.173125851 +0000 UTC m=+0.240778634 container remove 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:20:20 np0005634017 systemd[1]: libpod-conmon-0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898.scope: Deactivated successfully.
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 27 KiB/s wr, 321 op/s
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.362865848 +0000 UTC m=+0.050025727 container create 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:20:20 np0005634017 systemd[1]: Started libpod-conmon-5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a.scope.
Feb 28 05:20:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.424766842 +0000 UTC m=+0.111926731 container init 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.332090861 +0000 UTC m=+0.019250790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.438983758 +0000 UTC m=+0.126143657 container start 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.441942222 +0000 UTC m=+0.129102131 container attach 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:20:20 np0005634017 goofy_morse[329646]: {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:    "0": [
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:        {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "devices": [
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "/dev/loop3"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            ],
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_name": "ceph_lv0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_size": "21470642176",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "name": "ceph_lv0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "tags": {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cluster_name": "ceph",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.crush_device_class": "",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.encrypted": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.objectstore": "bluestore",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osd_id": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.type": "block",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.vdo": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.with_tpm": "0"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            },
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "type": "block",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "vg_name": "ceph_vg0"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:        }
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:    ],
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:    "1": [
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:        {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "devices": [
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "/dev/loop4"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            ],
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_name": "ceph_lv1",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_size": "21470642176",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "name": "ceph_lv1",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "tags": {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cluster_name": "ceph",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.crush_device_class": "",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.encrypted": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.objectstore": "bluestore",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osd_id": "1",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.type": "block",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.vdo": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.with_tpm": "0"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            },
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "type": "block",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "vg_name": "ceph_vg1"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:        }
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:    ],
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:    "2": [
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:        {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "devices": [
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "/dev/loop5"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            ],
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_name": "ceph_lv2",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_size": "21470642176",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "name": "ceph_lv2",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "tags": {
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.cluster_name": "ceph",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.crush_device_class": "",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.encrypted": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.objectstore": "bluestore",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osd_id": "2",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.type": "block",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.vdo": "0",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:                "ceph.with_tpm": "0"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            },
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "type": "block",
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:            "vg_name": "ceph_vg2"
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:        }
Feb 28 05:20:20 np0005634017 goofy_morse[329646]:    ]
Feb 28 05:20:20 np0005634017 goofy_morse[329646]: }
Feb 28 05:20:20 np0005634017 systemd[1]: libpod-5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a.scope: Deactivated successfully.
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.736204209 +0000 UTC m=+0.423364178 container died 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:20:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5-merged.mount: Deactivated successfully.
Feb 28 05:20:20 np0005634017 podman[329629]: 2026-02-28 10:20:20.78289023 +0000 UTC m=+0.470050119 container remove 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:20:20 np0005634017 systemd[1]: libpod-conmon-5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a.scope: Deactivated successfully.
Feb 28 05:20:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:20.863 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.969 243456 DEBUG nova.network.neutron [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.992 243456 DEBUG oslo_concurrency.lockutils [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.998 243456 DEBUG nova.virt.libvirt.vif [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.998 243456 DEBUG nova.network.os_vif_util [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:20 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.999 243456 DEBUG nova.network.os_vif_util [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:20.999 243456 DEBUG os_vif [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.001 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.001 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.005 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.006 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.006 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.006 243456 INFO os_vif [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.025 243456 DEBUG nova.objects.instance [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:21 np0005634017 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 05:20:21 np0005634017 NetworkManager[49805]: <info>  [1772274021.0902] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/425)
Feb 28 05:20:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:21Z|00984|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 05:20:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:21Z|00985|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.100 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '13', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.101 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis#033[00m
Feb 28 05:20:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:21Z|00986|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 05:20:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:21Z|00987|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.102 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.113 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd0a80e-0c34-49dc-befa-7abf9c176be7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.114 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.115 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.116 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe4ac6d-6c06-4c9e-bbac-a86c6f199e92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.119 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7ede27-c31f-441a-a437-40f48065754d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 systemd-machined[209480]: New machine qemu-125-instance-0000005a.
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.129 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d93a2bf8-41f8-4b5e-b821-3888095abc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 systemd[1]: Started Virtual Machine qemu-125-instance-0000005a.
Feb 28 05:20:21 np0005634017 systemd-udevd[329731]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:21 np0005634017 NetworkManager[49805]: <info>  [1772274021.1531] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[312f6e5a-e547-4f72-a5ef-00ed18fdb387]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 NetworkManager[49805]: <info>  [1772274021.1535] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.176 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd5e2d7-5c79-4a94-b36b-b3debf791402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.182 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6378773-1daa-4390-8dd3-4bb623d0abcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 NetworkManager[49805]: <info>  [1772274021.1834] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/426)
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.208 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5eb13d-fbb2-46ac-809c-27c8c371a5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.212 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3913047c-1714-4a13-8fb1-bea5a5ac04a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 NetworkManager[49805]: <info>  [1772274021.2343] device (tap8082b9e7-a0): carrier: link connected
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.240 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[abe36f68-1c10-49ac-9276-7ef9c1b4da24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.261 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[404d40e2-c6ef-4902-9578-cede93c5f87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552852, 'reachable_time': 38387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329781, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.274 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61823e39-5e9f-49a0-8d2c-4d0e673f51bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552852, 'tstamp': 552852}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329785, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.278545477 +0000 UTC m=+0.048863733 container create 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.289 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4660225f-a484-49d1-9552-cc7d03c0f280]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552852, 'reachable_time': 38387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329786, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 systemd[1]: Started libpod-conmon-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope.
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.316 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a89945c8-e361-4d43-9ca0-72a984f82817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.253097512 +0000 UTC m=+0.023415818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.35159585 +0000 UTC m=+0.121914126 container init 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.359590987 +0000 UTC m=+0.129909243 container start 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.364705133 +0000 UTC m=+0.135023399 container attach 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bba9bf53-60a5-4647-89ba-6d9d4fd77788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.366 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.366 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.367 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:21 np0005634017 nifty_darwin[329792]: 167 167
Feb 28 05:20:21 np0005634017 systemd[1]: libpod-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope: Deactivated successfully.
Feb 28 05:20:21 np0005634017 NetworkManager[49805]: <info>  [1772274021.3692] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Feb 28 05:20:21 np0005634017 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 05:20:21 np0005634017 conmon[329792]: conmon 0dfb5c10b8cd581f8f67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope/container/memory.events
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.371 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.372016802 +0000 UTC m=+0.142335068 container died 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.371 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:21Z|00988|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.386 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.387 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eeab2a7-b0e6-4507-a703-0236e7d6c431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.388 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:20:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.389 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:20:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b6703a6b67b815133b32ac27cb36260612c110dab160e5f184f53e9478894961-merged.mount: Deactivated successfully.
Feb 28 05:20:21 np0005634017 podman[329770]: 2026-02-28 10:20:21.406276268 +0000 UTC m=+0.176594524 container remove 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:20:21 np0005634017 systemd[1]: libpod-conmon-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope: Deactivated successfully.
Feb 28 05:20:21 np0005634017 podman[329822]: 2026-02-28 10:20:21.554314548 +0000 UTC m=+0.045233181 container create 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:20:21 np0005634017 systemd[1]: Started libpod-conmon-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope.
Feb 28 05:20:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:21 np0005634017 podman[329822]: 2026-02-28 10:20:21.53439288 +0000 UTC m=+0.025311553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:20:21 np0005634017 podman[329822]: 2026-02-28 10:20:21.648799051 +0000 UTC m=+0.139717704 container init 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:20:21 np0005634017 podman[329822]: 2026-02-28 10:20:21.65682965 +0000 UTC m=+0.147748283 container start 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:20:21 np0005634017 podman[329822]: 2026-02-28 10:20:21.668467431 +0000 UTC m=+0.159386054 container attach 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.669 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:21 np0005634017 podman[329864]: 2026-02-28 10:20:21.738298822 +0000 UTC m=+0.053453915 container create 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:20:21 np0005634017 systemd[1]: Started libpod-conmon-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d.scope.
Feb 28 05:20:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7018c52ded7e66f2df60116ae88981dd11943239a5de84d085b48a577280f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:21 np0005634017 podman[329864]: 2026-02-28 10:20:21.798772505 +0000 UTC m=+0.113927628 container init 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:20:21 np0005634017 podman[329864]: 2026-02-28 10:20:21.803828559 +0000 UTC m=+0.118983652 container start 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:21 np0005634017 podman[329864]: 2026-02-28 10:20:21.712141086 +0000 UTC m=+0.027296209 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:20:21 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : New worker (329927) forked
Feb 28 05:20:21 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : Loading success.
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.927 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.928 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274021.9272852, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.928 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.937 243456 DEBUG nova.compute.manager [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.938 243456 DEBUG nova.objects.instance [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.951 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.955 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.958 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance running successfully.#033[00m
Feb 28 05:20:21 np0005634017 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.961 243456 DEBUG nova.virt.libvirt.guest [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.961 243456 DEBUG nova.compute.manager [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.981 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.982 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274021.9313285, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:21 np0005634017 nova_compute[243452]: 2026-02-28 10:20:21.982 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.010 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.013 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:22 np0005634017 lvm[330006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:20:22 np0005634017 lvm[330006]: VG ceph_vg0 finished
Feb 28 05:20:22 np0005634017 lvm[330008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:20:22 np0005634017 lvm[330008]: VG ceph_vg1 finished
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:22 np0005634017 lvm[330009]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:20:22 np0005634017 lvm[330009]: VG ceph_vg2 finished
Feb 28 05:20:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 26 KiB/s wr, 311 op/s
Feb 28 05:20:22 np0005634017 brave_solomon[329839]: {}
Feb 28 05:20:22 np0005634017 systemd[1]: libpod-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope: Deactivated successfully.
Feb 28 05:20:22 np0005634017 systemd[1]: libpod-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope: Consumed 1.040s CPU time.
Feb 28 05:20:22 np0005634017 conmon[329839]: conmon 8fe856f9045fa144a71e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope/container/memory.events
Feb 28 05:20:22 np0005634017 podman[329822]: 2026-02-28 10:20:22.447789804 +0000 UTC m=+0.938708427 container died 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:20:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640-merged.mount: Deactivated successfully.
Feb 28 05:20:22 np0005634017 podman[329822]: 2026-02-28 10:20:22.493186278 +0000 UTC m=+0.984104901 container remove 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:22 np0005634017 systemd[1]: libpod-conmon-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope: Deactivated successfully.
Feb 28 05:20:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:20:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:20:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:20:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.875 243456 DEBUG nova.compute.manager [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.876 243456 DEBUG oslo_concurrency.lockutils [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.877 243456 DEBUG oslo_concurrency.lockutils [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.877 243456 DEBUG oslo_concurrency.lockutils [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.877 243456 DEBUG nova.compute.manager [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:22 np0005634017 nova_compute[243452]: 2026-02-28 10:20:22.878 243456 WARNING nova.compute.manager [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.#033[00m
Feb 28 05:20:23 np0005634017 nova_compute[243452]: 2026-02-28 10:20:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:20:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:20:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb 28 05:20:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb 28 05:20:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb 28 05:20:23 np0005634017 nova_compute[243452]: 2026-02-28 10:20:23.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 504 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 132 op/s
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.363 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.364 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.364 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.365 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.365 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.367 243456 INFO nova.compute.manager [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Terminating instance#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.369 243456 DEBUG nova.compute.manager [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:20:24 np0005634017 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 05:20:24 np0005634017 NetworkManager[49805]: <info>  [1772274024.4093] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:24Z|00989|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 05:20:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:24Z|00990|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 05:20:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:24Z|00991|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.431 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '14', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.433 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.436 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.438 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8be346-da52-47be-85eb-580681dcb045]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.440 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore#033[00m
Feb 28 05:20:24 np0005634017 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 05:20:24 np0005634017 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d0000005a.scope: Consumed 3.159s CPU time.
Feb 28 05:20:24 np0005634017 systemd-machined[209480]: Machine qemu-125-instance-0000005a terminated.
Feb 28 05:20:24 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : haproxy version is 2.8.14-c23fe91
Feb 28 05:20:24 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : path to executable is /usr/sbin/haproxy
Feb 28 05:20:24 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [WARNING]  (329924) : Exiting Master process...
Feb 28 05:20:24 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [ALERT]    (329924) : Current worker (329927) exited with code 143 (Terminated)
Feb 28 05:20:24 np0005634017 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [WARNING]  (329924) : All workers exited. Exiting... (0)
Feb 28 05:20:24 np0005634017 systemd[1]: libpod-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d.scope: Deactivated successfully.
Feb 28 05:20:24 np0005634017 podman[330071]: 2026-02-28 10:20:24.562612531 +0000 UTC m=+0.045027164 container died 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:20:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d-userdata-shm.mount: Deactivated successfully.
Feb 28 05:20:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dcb7018c52ded7e66f2df60116ae88981dd11943239a5de84d085b48a577280f-merged.mount: Deactivated successfully.
Feb 28 05:20:24 np0005634017 podman[330071]: 2026-02-28 10:20:24.607580463 +0000 UTC m=+0.089995096 container cleanup 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.607 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.608 243456 DEBUG nova.objects.instance [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:24 np0005634017 systemd[1]: libpod-conmon-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d.scope: Deactivated successfully.
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.623 243456 DEBUG nova.virt.libvirt.vif [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.624 243456 DEBUG nova.network.os_vif_util [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.625 243456 DEBUG nova.network.os_vif_util [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.625 243456 DEBUG os_vif [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.628 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.633 243456 INFO os_vif [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')#033[00m
Feb 28 05:20:24 np0005634017 podman[330112]: 2026-02-28 10:20:24.66114444 +0000 UTC m=+0.037532331 container remove 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.666 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5ecdc9-f8d5-4bec-8c8a-efc52f064e68]: (4, ('Sat Feb 28 10:20:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d)\n6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d\nSat Feb 28 10:20:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d)\n6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.667 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc41cb5-f8ca-478d-a95d-e1526d5658ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.668 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.682 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c9b5a3-d6bd-4411-bedd-d5b84ba8bb65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.693 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92220f3d-ba65-48f2-ada8-dc133d64b213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.695 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0597b0-0fe8-4403-b8e7-8d946811d44c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.709 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dd4fe2-4571-436b-a313-9082f6f980a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552845, 'reachable_time': 19837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330145, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.711 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:20:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.711 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c534eae7-6d6b-49be-b55c-97031e7fd25b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:24 np0005634017 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.949 243456 INFO nova.virt.libvirt.driver [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Deleting instance files /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88_del#033[00m
Feb 28 05:20:24 np0005634017 nova_compute[243452]: 2026-02-28 10:20:24.950 243456 INFO nova.virt.libvirt.driver [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Deletion of /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88_del complete#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.037 243456 INFO nova.compute.manager [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.038 243456 DEBUG oslo.service.loopingcall [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.038 243456 DEBUG nova.compute.manager [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.039 243456 DEBUG nova.network.neutron [-] [instance: 690896df-6307-469c-9685-325a61a62b88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.169 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.601 243456 DEBUG nova.compute.manager [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.601 243456 DEBUG oslo_concurrency.lockutils [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 DEBUG oslo_concurrency.lockutils [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 DEBUG oslo_concurrency.lockutils [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 DEBUG nova.compute.manager [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:25 np0005634017 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 WARNING nova.compute.manager [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:20:26 np0005634017 nova_compute[243452]: 2026-02-28 10:20:26.317 243456 DEBUG nova.network.neutron [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:26 np0005634017 nova_compute[243452]: 2026-02-28 10:20:26.343 243456 INFO nova.compute.manager [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Took 1.30 seconds to deallocate network for instance.#033[00m
Feb 28 05:20:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 495 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 3.2 MiB/s wr, 131 op/s
Feb 28 05:20:26 np0005634017 nova_compute[243452]: 2026-02-28 10:20:26.404 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:26 np0005634017 nova_compute[243452]: 2026-02-28 10:20:26.405 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:26 np0005634017 nova_compute[243452]: 2026-02-28 10:20:26.513 243456 DEBUG oslo_concurrency.processutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4099121038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.113 243456 DEBUG oslo_concurrency.processutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.120 243456 DEBUG nova.compute.provider_tree [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.140 243456 DEBUG nova.scheduler.client.report [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.168 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.198 243456 INFO nova.scheduler.client.report [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Deleted allocations for instance 690896df-6307-469c-9685-325a61a62b88#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.323 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.348 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:20:27 np0005634017 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 05:20:27 np0005634017 NetworkManager[49805]: <info>  [1772274027.4577] device (tapa98f753e-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:27Z|00992|binding|INFO|Releasing lport a98f753e-a6d6-4d97-b307-f08d35a37f1f from this chassis (sb_readonly=0)
Feb 28 05:20:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:27Z|00993|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f down in Southbound
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:27Z|00994|binding|INFO|Removing iface tapa98f753e-a6 ovn-installed in OVS
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.469 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.470 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis#033[00m
Feb 28 05:20:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.471 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab41529-bc4a-4fb5-badd-c59a82bf51c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:27 np0005634017 nova_compute[243452]: 2026-02-28 10:20:27.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:27 np0005634017 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 28 05:20:27 np0005634017 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000063.scope: Consumed 12.082s CPU time.
Feb 28 05:20:27 np0005634017 systemd-machined[209480]: Machine qemu-123-instance-00000063 terminated.
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.186 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.194 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.195 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.223 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Attempting rescue#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.224 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.230 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.231 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating image(s)#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.268 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.272 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.323 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 461 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.358 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.364 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.398 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.425 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.425 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.426 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.426 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.426 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.468 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.469 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.469 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.470 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.470 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.470 243456 WARNING nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.471 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.471 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.471 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.472 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.472 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.472 243456 WARNING nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.473 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-deleted-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.474 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.475 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.475 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.476 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.502 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.506 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.803 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.805 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.824 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.825 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start _get_guest_xml network_info=[{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:71:c9:51"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.825 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.849 243456 WARNING nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.853 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.856 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.862 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.863 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.863 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.863 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.864 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.864 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.865 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.865 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.865 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.867 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.867 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.890 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.948 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274013.9480052, 98504b0a-8c47-4488-b870-9fb9ebfa3e59 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.949 243456 INFO nova.compute.manager [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] VM Stopped (Lifecycle Event)
Feb 28 05:20:28 np0005634017 nova_compute[243452]: 2026-02-28 10:20:28.969 243456 DEBUG nova.compute.manager [None req-7018c7ea-1feb-4079-bede-ce88763957cb - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:20:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919154869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.009 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.095 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.095 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:20:29
Feb 28 05:20:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:20:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:20:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'volumes']
Feb 28 05:20:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.101 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.102 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.107 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.107 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.108 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.265 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.266 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3246MB free_disk=59.81664128229022GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.267 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.267 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.330 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ea5efc55-0a5e-435e-9805-9a9726c17eda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.330 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance e4349bd8-727a-4533-9edd-b2d54353a617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.331 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 080f8608-f57f-4ffa-a966-ae62df8f6f9b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.332 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.332 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.422 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:20:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/348724625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.503 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.504 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:20:29 np0005634017 nova_compute[243452]: 2026-02-28 10:20:29.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:20:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467550894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.006 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.015 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363430046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.026 243456 DEBUG nova.compute.manager [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.026 243456 DEBUG oslo_concurrency.lockutils [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 DEBUG oslo_concurrency.lockutils [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 DEBUG oslo_concurrency.lockutils [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 DEBUG nova.compute.manager [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 WARNING nova.compute.manager [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state rescuing.
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.035 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.040 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.041 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.070 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.070 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.273 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.274 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.287 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.346 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.346 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.352 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.353 243456 INFO nova.compute.claims [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 454 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.1 MiB/s wr, 120 op/s
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.437918) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030438004, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2010, "num_deletes": 254, "total_data_size": 3119230, "memory_usage": 3170880, "flush_reason": "Manual Compaction"}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030453217, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3051822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33407, "largest_seqno": 35416, "table_properties": {"data_size": 3042750, "index_size": 5566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19558, "raw_average_key_size": 20, "raw_value_size": 3024276, "raw_average_value_size": 3180, "num_data_blocks": 245, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273844, "oldest_key_time": 1772273844, "file_creation_time": 1772274030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 15351 microseconds, and 8814 cpu microseconds.
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.453281) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3051822 bytes OK
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.453310) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.454915) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.454937) EVENT_LOG_v1 {"time_micros": 1772274030454930, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.454968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3110630, prev total WAL file size 3110630, number of live WAL files 2.
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.456353) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2980KB)], [74(8152KB)]
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030456463, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11400386, "oldest_snapshot_seqno": -1}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6122 keys, 9751563 bytes, temperature: kUnknown
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030502134, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9751563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9709047, "index_size": 26086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 154585, "raw_average_key_size": 25, "raw_value_size": 9597865, "raw_average_value_size": 1567, "num_data_blocks": 1053, "num_entries": 6122, "num_filter_entries": 6122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.502466) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9751563 bytes
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.503766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.0 rd, 213.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6645, records dropped: 523 output_compression: NoCompression
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.503793) EVENT_LOG_v1 {"time_micros": 1772274030503779, "job": 42, "event": "compaction_finished", "compaction_time_micros": 45784, "compaction_time_cpu_micros": 24524, "output_level": 6, "num_output_files": 1, "total_output_size": 9751563, "num_input_records": 6645, "num_output_records": 6122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030504411, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030505614, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.456184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.510 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609544957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.558 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.560 243456 DEBUG nova.virt.libvirt.vif [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:20:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:20:10Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:71:c9:51"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.561 243456 DEBUG nova.network.os_vif_util [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:71:c9:51"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.562 243456 DEBUG nova.network.os_vif_util [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.564 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.582 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <uuid>080f8608-f57f-4ffa-a966-ae62df8f6f9b</uuid>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <name>instance-00000063</name>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueTestJSON-server-1593211213</nova:name>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:20:28</nova:creationTime>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <nova:port uuid="a98f753e-a6d6-4d97-b307-f08d35a37f1f">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <entry name="serial">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <entry name="uuid">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <target dev="vdb" bus="virtio"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:71:c9:51"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <target dev="tapa98f753e-a6"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/console.log" append="off"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:20:30 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:20:30 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:20:30 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:20:30 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.599 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.#033[00m
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:20:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.669 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.670 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.671 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.672 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:71:c9:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.673 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Using config drive#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.712 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.740 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:30 np0005634017 nova_compute[243452]: 2026-02-28 10:20:30.781 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'keypairs' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723623436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.117 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.125 243456 DEBUG nova.compute.provider_tree [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.149 243456 DEBUG nova.scheduler.client.report [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.175 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.176 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.239 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.239 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.268 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.288 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.394 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.397 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.398 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Creating image(s)#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.431 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.468 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.507 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.513 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.556 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating config drive at /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.565 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp92z2clak execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.608 243456 DEBUG nova.policy [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd000e26b1aaf4a60bd2c928412e59ca5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1dab7067181d43f1acb702fce4ca882c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.615 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.616 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.617 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.617 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.652 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.656 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a33511-0908-4787-82f4-79505aa9d436_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.712 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp92z2clak" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.759 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.766 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.895 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a33511-0908-4787-82f4-79505aa9d436_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.934 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.935 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deleting local config drive /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue because it was imported into RBD.#033[00m
Feb 28 05:20:31 np0005634017 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 05:20:31 np0005634017 NetworkManager[49805]: <info>  [1772274031.9744] manager: (tapa98f753e-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Feb 28 05:20:31 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:31Z|00995|binding|INFO|Claiming lport a98f753e-a6d6-4d97-b307-f08d35a37f1f for this chassis.
Feb 28 05:20:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:31Z|00996|binding|INFO|a98f753e-a6d6-4d97-b307-f08d35a37f1f: Claiming fa:16:3e:71:c9:51 10.100.0.5
Feb 28 05:20:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.988 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.990 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis#033[00m
Feb 28 05:20:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.990 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.992 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7f213891-9e7a-41f4-954f-92401607dc60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:31Z|00997|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f ovn-installed in OVS
Feb 28 05:20:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:31Z|00998|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f up in Southbound
Feb 28 05:20:32 np0005634017 systemd-machined[209480]: New machine qemu-126-instance-00000063.
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:31.999 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] resizing rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:20:32 np0005634017 systemd[1]: Started Virtual Machine qemu-126-instance-00000063.
Feb 28 05:20:32 np0005634017 systemd-udevd[330631]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:32 np0005634017 NetworkManager[49805]: <info>  [1772274032.0342] device (tapa98f753e-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:20:32 np0005634017 NetworkManager[49805]: <info>  [1772274032.0351] device (tapa98f753e-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.096 243456 DEBUG nova.objects.instance [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lazy-loading 'migration_context' on Instance uuid c4a33511-0908-4787-82f4-79505aa9d436 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.106 243456 DEBUG nova.compute.manager [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.107 243456 DEBUG oslo_concurrency.lockutils [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.107 243456 DEBUG oslo_concurrency.lockutils [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.107 243456 DEBUG oslo_concurrency.lockutils [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.108 243456 DEBUG nova.compute.manager [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.108 243456 WARNING nova.compute.manager [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state rescuing.#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.110 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.110 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Ensure instance console log exists: /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.111 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.111 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.112 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 462 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.464 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 080f8608-f57f-4ffa-a966-ae62df8f6f9b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.465 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274032.4642735, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.465 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.469 243456 DEBUG nova.compute.manager [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.502 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.507 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.535 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.536 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274032.4698439, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.537 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.557 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.562 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.987 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:20:32 np0005634017 nova_compute[243452]: 2026-02-28 10:20:32.988 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.068 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Successfully created port: 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:20:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:33Z|00999|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.120 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.393 243456 INFO nova.compute.manager [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Unrescuing#033[00m
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.394 243456 DEBUG oslo_concurrency.lockutils [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.395 243456 DEBUG oslo_concurrency.lockutils [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.395 243456 DEBUG nova.network.neutron [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:33 np0005634017 nova_compute[243452]: 2026-02-28 10:20:33.941 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.105 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Successfully updated port: 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.125 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.125 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.126 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.215 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.215 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.216 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.216 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.217 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.217 243456 WARNING nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.217 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.219 243456 WARNING nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.351 243456 DEBUG nova.compute.manager [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.351 243456 DEBUG nova.compute.manager [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing instance network info cache due to event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.351 243456 DEBUG oslo_concurrency.lockutils [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 495 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.419 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:20:34 np0005634017 nova_compute[243452]: 2026-02-28 10:20:34.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.187 243456 DEBUG nova.network.neutron [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.209 243456 DEBUG oslo_concurrency.lockutils [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.210 243456 DEBUG nova.objects.instance [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'flavor' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:35 np0005634017 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 05:20:35 np0005634017 NetworkManager[49805]: <info>  [1772274035.2825] device (tapa98f753e-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01000|binding|INFO|Releasing lport a98f753e-a6d6-4d97-b307-f08d35a37f1f from this chassis (sb_readonly=0)
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01001|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f down in Southbound
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01002|binding|INFO|Removing iface tapa98f753e-a6 ovn-installed in OVS
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.296 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.300 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.302 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.303 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.304 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecb81e9-f3d8-4cbc-be39-527465a21bae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:35 np0005634017 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 28 05:20:35 np0005634017 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Consumed 3.110s CPU time.
Feb 28 05:20:35 np0005634017 systemd-machined[209480]: Machine qemu-126-instance-00000063 terminated.
Feb 28 05:20:35 np0005634017 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 05:20:35 np0005634017 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.470 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.471 243456 DEBUG nova.objects.instance [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:35 np0005634017 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 05:20:35 np0005634017 systemd-udevd[330725]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01003|binding|INFO|Claiming lport a98f753e-a6d6-4d97-b307-f08d35a37f1f for this chassis.
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01004|binding|INFO|a98f753e-a6d6-4d97-b307-f08d35a37f1f: Claiming fa:16:3e:71:c9:51 10.100.0.5
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 NetworkManager[49805]: <info>  [1772274035.5648] manager: (tapa98f753e-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Feb 28 05:20:35 np0005634017 NetworkManager[49805]: <info>  [1772274035.5721] device (tapa98f753e-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:20:35 np0005634017 NetworkManager[49805]: <info>  [1772274035.5728] device (tapa98f753e-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01005|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f ovn-installed in OVS
Feb 28 05:20:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:35Z|01006|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f up in Southbound
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.573 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.574 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.575 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.577 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.578 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c161e065-016a-455c-9660-13b72e766358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:35 np0005634017 systemd-machined[209480]: New machine qemu-127-instance-00000063.
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.606 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.607 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance network_info: |[{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.607 243456 DEBUG oslo_concurrency.lockutils [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:35 np0005634017 systemd[1]: Started Virtual Machine qemu-127-instance-00000063.
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.608 243456 DEBUG nova.network.neutron [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.613 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start _get_guest_xml network_info=[{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.621 243456 WARNING nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.628 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.630 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.643 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.645 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.645 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.646 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.647 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.647 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.647 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.648 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.648 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.648 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.649 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.649 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.649 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.650 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:20:35 np0005634017 nova_compute[243452]: 2026-02-28 10:20:35.655 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290161541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.235 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.279 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.287 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 532 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.557 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.559 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.560 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.561 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.561 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.562 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.562 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.562 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.563 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.563 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.564 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.564 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.565 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.565 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.566 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.566 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.566 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.567 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.567 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.568 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.568 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.569 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.569 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.569 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.#033[00m
Feb 28 05:20:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135467290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.832 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.835 243456 DEBUG nova.virt.libvirt.vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1952070192-ac',id=100,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCNMYLg5p+aB2JY148QNYzXhobB6/drpl2qIRV0JTfQX5v2ytqL0PnW0jus5lqsKTS3QAwQHPeB44EmRsvT+kml+WKzSDkocdN0cBQPBu+t8DfZ2YkjQe5xHSo47UtrMg==',key_name='tempest-TestSecurityGroupsBasicOps-225180537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1dab7067181d43f1acb702fce4ca882c',ramdisk_id='',reservation_id='r-6trxlpi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1952070192',owner_user_name='tempest-TestSecurityGroupsBasicOps-1952070192-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:20:31Z,user_data=None,user_id='d000e26b1aaf4a60bd2c928412e59ca5',uuid=c4a33511-0908-4787-82f4-79505aa9d436,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.835 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converting VIF {"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.839 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.843 243456 DEBUG nova.objects.instance [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lazy-loading 'pci_devices' on Instance uuid c4a33511-0908-4787-82f4-79505aa9d436 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.880 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <uuid>c4a33511-0908-4787-82f4-79505aa9d436</uuid>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <name>instance-00000064</name>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684</nova:name>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:20:35</nova:creationTime>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:user uuid="d000e26b1aaf4a60bd2c928412e59ca5">tempest-TestSecurityGroupsBasicOps-1952070192-project-member</nova:user>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:project uuid="1dab7067181d43f1acb702fce4ca882c">tempest-TestSecurityGroupsBasicOps-1952070192</nova:project>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <nova:port uuid="8f265ce7-668d-4462-8ac4-a9487fc7d3cd">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <entry name="serial">c4a33511-0908-4787-82f4-79505aa9d436</entry>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <entry name="uuid">c4a33511-0908-4787-82f4-79505aa9d436</entry>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c4a33511-0908-4787-82f4-79505aa9d436_disk">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c4a33511-0908-4787-82f4-79505aa9d436_disk.config">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:cb:75:fe"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <target dev="tap8f265ce7-66"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/console.log" append="off"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:20:36 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:20:36 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:20:36 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:20:36 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.882 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Preparing to wait for external event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.882 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.883 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.883 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.884 243456 DEBUG nova.virt.libvirt.vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1952070192-ac',id=100,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCNMYLg5p+aB2JY148QNYzXhobB6/drpl2qIRV0JTfQX5v2ytqL0PnW0jus5lqsKTS3QAwQHPeB44EmRsvT+kml+WKzSDkocdN0cBQPBu+t8DfZ2YkjQe5xHSo47UtrMg==',key_name='tempest-TestSecurityGroupsBasicOps-225180537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1dab7067181d43f1acb702fce4ca882c',ramdisk_id='',reservation_id='r-6trxlpi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1952070192',owner_user_name='tempest-TestSecurityGroupsBasicOps-1952070192-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:20:31Z,user_data=None,user_id='d000e26b1aaf4a60bd2c928412e59ca5',uuid=c4a33511-0908-4787-82f4-79505aa9d436,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.884 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converting VIF {"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.885 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.886 243456 DEBUG os_vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.887 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.887 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.890 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f265ce7-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.891 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f265ce7-66, col_values=(('external_ids', {'iface-id': '8f265ce7-668d-4462-8ac4-a9487fc7d3cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:75:fe', 'vm-uuid': 'c4a33511-0908-4787-82f4-79505aa9d436'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:36 np0005634017 NetworkManager[49805]: <info>  [1772274036.8936] manager: (tap8f265ce7-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.902 243456 INFO os_vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66')#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.960 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.961 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.961 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] No VIF found with MAC fa:16:3e:cb:75:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.962 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Using config drive#033[00m
Feb 28 05:20:36 np0005634017 nova_compute[243452]: 2026-02-28 10:20:36.986 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.027 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 080f8608-f57f-4ffa-a966-ae62df8f6f9b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.028 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274037.0246725, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.029 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.054 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.058 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.081 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.081 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274037.031372, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.082 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.100 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.106 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.361 243456 DEBUG nova.compute.manager [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.495 243456 DEBUG nova.network.neutron [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updated VIF entry in instance network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.496 243456 DEBUG nova.network.neutron [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.525 243456 DEBUG oslo_concurrency.lockutils [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.773 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Creating config drive at /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.779 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpic30z_x9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.922 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpic30z_x9" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.967 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:37 np0005634017 nova_compute[243452]: 2026-02-28 10:20:37.974 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config c4a33511-0908-4787-82f4-79505aa9d436_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.131 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config c4a33511-0908-4787-82f4-79505aa9d436_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.133 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deleting local config drive /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config because it was imported into RBD.#033[00m
Feb 28 05:20:38 np0005634017 kernel: tap8f265ce7-66: entered promiscuous mode
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.1897] manager: (tap8f265ce7-66): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01007|binding|INFO|Claiming lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd for this chassis.
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01008|binding|INFO|8f265ce7-668d-4462-8ac4-a9487fc7d3cd: Claiming fa:16:3e:cb:75:fe 10.100.0.6
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.208 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:75:fe 10.100.0.6'], port_security=['fa:16:3e:cb:75:fe 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4a33511-0908-4787-82f4-79505aa9d436', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37214c09-5017-4e54-bd29-785084655f44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1dab7067181d43f1acb702fce4ca882c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b33bdd64-2933-4bbe-ba3e-cc39acc8701b e5271994-622a-4bbd-b4e6-a7d717e49d1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=564ae5b8-c7a3-416a-9979-7200cc2a4584, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f265ce7-668d-4462-8ac4-a9487fc7d3cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.2097] device (tap8f265ce7-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.2106] device (tap8f265ce7-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01009|binding|INFO|Setting lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd ovn-installed in OVS
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01010|binding|INFO|Setting lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd up in Southbound
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.211 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd in datapath 37214c09-5017-4e54-bd29-785084655f44 bound to our chassis#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.214 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37214c09-5017-4e54-bd29-785084655f44#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.230 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a11a7-5b68-4af7-b551-e1a422c3e721]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.232 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37214c09-51 in ovnmeta-37214c09-5017-4e54-bd29-785084655f44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.235 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37214c09-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a555ce01-1bd8-4d17-a9aa-079d0e0083b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50a64cb4-0eb5-4c83-a8a3-3440209cf328]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 systemd-machined[209480]: New machine qemu-128-instance-00000064.
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.256 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4ed5fd-ef8b-40a6-9277-8ce12b17eacb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 systemd[1]: Started Virtual Machine qemu-128-instance-00000064.
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93fe1736-233d-4b71-9beb-9b9f44fa9ffe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.320 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce7fbb9-cf71-45dc-87ea-b904706152f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.3290] manager: (tap37214c09-50): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b88cd2-d396-4665-b03b-79fc4229860a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 509 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.374 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5e27ad8b-dfef-48a7-a5bc-67f8b38f1de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 systemd-udevd[330965]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.379 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c6b1c1-a0e7-4def-8a4a-64bb86b0815b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.4143] device (tap37214c09-50): carrier: link connected
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.420 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60e86aa6-57ca-47c3-ab2d-bd63b6dade96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.440 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2eda440a-3952-4629-869b-ec78329b14f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37214c09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:31:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554570, 'reachable_time': 28129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330984, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.462 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ada63865-185b-4ac6-8139-be5ae8b4988d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:3160'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554570, 'tstamp': 554570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330985, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.482 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6eedc2d5-5ff0-49d8-930f-cd5cd86f4c90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37214c09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:31:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554570, 'reachable_time': 28129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330986, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.522 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14488879-a46a-429c-acdd-b7c3ba9d29f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1f351a-6bcd-4fe3-a181-6525a789a6b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.602 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37214c09-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.603 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.604 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37214c09-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:38 np0005634017 kernel: tap37214c09-50: entered promiscuous mode
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.6082] manager: (tap37214c09-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37214c09-50, col_values=(('external_ids', {'iface-id': '84e4a764-c038-44dc-af65-1b856dd92486'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01011|binding|INFO|Releasing lport 84e4a764-c038-44dc-af65-1b856dd92486 from this chassis (sb_readonly=0)
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.621 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37214c09-5017-4e54-bd29-785084655f44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37214c09-5017-4e54-bd29-785084655f44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.626 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16be51e0-3989-4077-b498-0e8495ebabce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.628 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-37214c09-5017-4e54-bd29-785084655f44
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/37214c09-5017-4e54-bd29-785084655f44.pid.haproxy
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 37214c09-5017-4e54-bd29-785084655f44
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.630 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'env', 'PROCESS_TAG=haproxy-37214c09-5017-4e54-bd29-785084655f44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37214c09-5017-4e54-bd29-785084655f44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:20:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.656 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.657 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.658 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.658 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.659 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Processing event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.659 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.660 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.661 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.662 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.662 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] No waiting events found dispatching network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.663 243456 WARNING nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received unexpected event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.750 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.752 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274038.7494185, c4a33511-0908-4787-82f4-79505aa9d436 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.752 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.765 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.771 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance spawned successfully.#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.771 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.776 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.780 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.809 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.810 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274038.749851, c4a33511-0908-4787-82f4-79505aa9d436 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.811 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.822 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.823 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.824 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.824 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.825 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.826 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.841 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.846 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274038.7636118, c4a33511-0908-4787-82f4-79505aa9d436 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.846 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.897 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.902 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.903 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.904 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.904 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.904 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.906 243456 INFO nova.compute.manager [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Terminating instance#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.907 243456 DEBUG nova.compute.manager [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.937 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:38 np0005634017 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 05:20:38 np0005634017 NetworkManager[49805]: <info>  [1772274038.9477] device (tapa98f753e-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.953 243456 INFO nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 7.56 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.953 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01012|binding|INFO|Releasing lport a98f753e-a6d6-4d97-b307-f08d35a37f1f from this chassis (sb_readonly=0)
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01013|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f down in Southbound
Feb 28 05:20:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:38Z|01014|binding|INFO|Removing iface tapa98f753e-a6 ovn-installed in OVS
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 nova_compute[243452]: 2026-02-28 10:20:38.971 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.975 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:38 np0005634017 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 28 05:20:38 np0005634017 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Consumed 3.403s CPU time.
Feb 28 05:20:38 np0005634017 systemd-machined[209480]: Machine qemu-127-instance-00000063 terminated.
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.062 243456 INFO nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 8.73 seconds to build instance.#033[00m
Feb 28 05:20:39 np0005634017 podman[331063]: 2026-02-28 10:20:39.067489056 +0000 UTC m=+0.079204348 container create 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.092 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:39 np0005634017 podman[331063]: 2026-02-28 10:20:39.023443191 +0000 UTC m=+0.035158563 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:20:39 np0005634017 systemd[1]: Started libpod-conmon-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope.
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.147 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.147 243456 DEBUG nova.objects.instance [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:20:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248dfaf5b43c5e78179cc2d605133b83fbf37721a8e21412e8df26c566d94eb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.168 243456 DEBUG nova.virt.libvirt.vif [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:20:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:37Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.168 243456 DEBUG nova.network.os_vif_util [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.169 243456 DEBUG nova.network.os_vif_util [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.169 243456 DEBUG os_vif [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.172 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa98f753e-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:39 np0005634017 podman[331063]: 2026-02-28 10:20:39.176878454 +0000 UTC m=+0.188593786 container init 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.178 243456 INFO os_vif [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6')#033[00m
Feb 28 05:20:39 np0005634017 podman[331063]: 2026-02-28 10:20:39.184049698 +0000 UTC m=+0.195765020 container start 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:20:39 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : New worker (331110) forked
Feb 28 05:20:39 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : Loading success.
Feb 28 05:20:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:39.277 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis#033[00m
Feb 28 05:20:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:39.279 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:39.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6a3bf-d228-4c5b-8e4b-a8c963168242]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.388 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "98150245-079d-43f8-bbd9-3d12a8f26719" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.388 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.413 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.495 243456 INFO nova.virt.libvirt.driver [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deleting instance files /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b_del#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.496 243456 INFO nova.virt.libvirt.driver [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deletion of /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b_del complete#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.535 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.536 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.544 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.545 243456 INFO nova.compute.claims [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.564 243456 INFO nova.compute.manager [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.564 243456 DEBUG oslo.service.loopingcall [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.564 243456 DEBUG nova.compute.manager [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.565 243456 DEBUG nova.network.neutron [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.604 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274024.6034784, 690896df-6307-469c-9685-325a61a62b88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.605 243456 INFO nova.compute.manager [-] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.637 243456 DEBUG nova.compute.manager [None req-b4928298-7a11-4eed-b764-2383b31c9306 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:39 np0005634017 nova_compute[243452]: 2026-02-28 10:20:39.760 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141303139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.303 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.313 243456 DEBUG nova.compute.provider_tree [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 464 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.782 243456 DEBUG nova.scheduler.client.report [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.835 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.835 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.835 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 WARNING nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.846 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.846 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:20:40 np0005634017 nova_compute[243452]: 2026-02-28 10:20:40.895 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0027284727141890838 of space, bias 1.0, pg target 0.8185418142567251 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493165111746679 of space, bias 1.0, pg target 0.7479495335240037 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.61893063059789e-07 of space, bias 4.0, pg target 0.0009142716756717468 quantized to 16 (current 16)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:20:40 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.031 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.146 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.261 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.262 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.263 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating image(s)#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.291 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.320 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.350 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.354 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.422 243456 DEBUG nova.network.neutron [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.429 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.429 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.430 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.430 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.457 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.462 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.492 243456 INFO nova.compute.manager [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 1.93 seconds to deallocate network for instance.#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.698 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.738 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.739 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.792 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] resizing rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.840 243456 DEBUG nova.compute.manager [req-5660b58d-db96-47d5-9cfa-8c1e346390d5 req-29fcd91f-ee0c-48e0-aa4c-2db078f93635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-deleted-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.881 243456 DEBUG nova.objects.instance [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'migration_context' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.939 243456 DEBUG oslo_concurrency.processutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.979 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.980 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Ensure instance console log exists: /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.981 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.981 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.981 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.983 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.988 243456 WARNING nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.995 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.995 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.998 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.998 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:20:41 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.999 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:41.999 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.002 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.006 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 437 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 252 op/s
Feb 28 05:20:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753415033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.525 243456 DEBUG oslo_concurrency.processutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.535 243456 DEBUG nova.compute.provider_tree [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:20:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1212162173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.560 243456 DEBUG nova.scheduler.client.report [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.575 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.613 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.620 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.661 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.690 243456 INFO nova.scheduler.client.report [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Deleted allocations for instance 080f8608-f57f-4ffa-a966-ae62df8f6f9b#033[00m
Feb 28 05:20:42 np0005634017 nova_compute[243452]: 2026-02-28 10:20:42.785 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:20:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047918325' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.180 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.182 243456 DEBUG nova.objects.instance [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.221 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <uuid>98150245-079d-43f8-bbd9-3d12a8f26719</uuid>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <name>instance-00000065</name>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerShowV257Test-server-2037913836</nova:name>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:20:41</nova:creationTime>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:user uuid="17ee551657ed4a4c8a2f040ff863ad9a">tempest-ServerShowV257Test-1598019138-project-member</nova:user>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <nova:project uuid="124d8457f7e342f1ab81af27d8c3ba3a">tempest-ServerShowV257Test-1598019138</nova:project>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <entry name="serial">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <entry name="uuid">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk.config">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log" append="off"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:20:43 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:20:43 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:20:43 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:20:43 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.304 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.305 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.307 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Using config drive#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.344 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.618 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating config drive at /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.626 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwz4w93sb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.778 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwz4w93sb" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.811 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.816 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.872 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.873 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.874 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.874 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.874 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.876 243456 INFO nova.compute.manager [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Terminating instance#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.877 243456 DEBUG nova.compute.manager [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:20:43 np0005634017 kernel: tap2f9562b0-54 (unregistering): left promiscuous mode
Feb 28 05:20:43 np0005634017 NetworkManager[49805]: <info>  [1772274043.9367] device (tap2f9562b0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:20:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:43Z|01015|binding|INFO|Releasing lport 2f9562b0-54ce-4c24-9341-33a674532bf0 from this chassis (sb_readonly=0)
Feb 28 05:20:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:43Z|01016|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 down in Southbound
Feb 28 05:20:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:43Z|01017|binding|INFO|Removing iface tap2f9562b0-54 ovn-installed in OVS
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.961 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.964 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis#033[00m
Feb 28 05:20:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.966 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:43 np0005634017 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000060.scope: Deactivated successfully.
Feb 28 05:20:43 np0005634017 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000060.scope: Consumed 13.010s CPU time.
Feb 28 05:20:43 np0005634017 systemd-machined[209480]: Machine qemu-120-instance-00000060 terminated.
Feb 28 05:20:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.971 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f34996b-2852-430f-9645-59d2299ba175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.974 243456 DEBUG nova.compute.manager [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.974 243456 DEBUG nova.compute.manager [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing instance network info cache due to event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.975 243456 DEBUG oslo_concurrency.lockutils [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.976 243456 DEBUG oslo_concurrency.lockutils [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:20:43 np0005634017 nova_compute[243452]: 2026-02-28 10:20:43.976 243456 DEBUG nova.network.neutron [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.023 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.024 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting local config drive /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config because it was imported into RBD.#033[00m
Feb 28 05:20:44 np0005634017 podman[331457]: 2026-02-28 10:20:44.026294124 +0000 UTC m=+0.061093002 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:20:44 np0005634017 podman[331454]: 2026-02-28 10:20:44.040850159 +0000 UTC m=+0.075525193 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:20:44 np0005634017 systemd-udevd[331474]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:20:44 np0005634017 kernel: tap2f9562b0-54: entered promiscuous mode
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:44 np0005634017 NetworkManager[49805]: <info>  [1772274044.0992] manager: (tap2f9562b0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/434)
Feb 28 05:20:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:44Z|01018|binding|INFO|Claiming lport 2f9562b0-54ce-4c24-9341-33a674532bf0 for this chassis.
Feb 28 05:20:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:44Z|01019|binding|INFO|2f9562b0-54ce-4c24-9341-33a674532bf0: Claiming fa:16:3e:db:07:b7 10.100.0.9
Feb 28 05:20:44 np0005634017 kernel: tap2f9562b0-54 (unregistering): left promiscuous mode
Feb 28 05:20:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:44Z|01020|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 ovn-installed in OVS
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.109 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:44Z|01021|if_status|INFO|Dropped 1 log messages in last 242 seconds (most recently, 242 seconds ago) due to excessive rate
Feb 28 05:20:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:44Z|01022|if_status|INFO|Not setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 down as sb is readonly
Feb 28 05:20:44 np0005634017 systemd-machined[209480]: New machine qemu-129-instance-00000065.
Feb 28 05:20:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:44Z|01023|binding|INFO|Releasing lport 2f9562b0-54ce-4c24-9341-33a674532bf0 from this chassis (sb_readonly=0)
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.131 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.132 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.132 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:44 np0005634017 systemd[1]: Started Virtual Machine qemu-129-instance-00000065.
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.134 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance destroyed successfully.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.135 243456 DEBUG nova.objects.instance [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.133 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddf3432-ac02-4146-bf76-011db5eae7a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.136 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.137 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.138 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c40493cc-e17c-4141-ad51-98ad8044f81c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.146 243456 DEBUG nova.virt.libvirt.vif [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:51Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.147 243456 DEBUG nova.network.os_vif_util [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.147 243456 DEBUG nova.network.os_vif_util [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.147 243456 DEBUG os_vif [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.152 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f9562b0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.159 243456 INFO os_vif [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54')#033[00m
Feb 28 05:20:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 428 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.1 MiB/s wr, 311 op/s
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.626 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274044.6254103, 98150245-079d-43f8-bbd9-3d12a8f26719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.627 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.635 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.636 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.643 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance spawned successfully.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.643 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.657 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.660 243456 DEBUG nova.compute.manager [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.661 243456 DEBUG oslo_concurrency.lockutils [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.661 243456 DEBUG oslo_concurrency.lockutils [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.661 243456 DEBUG oslo_concurrency.lockutils [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.662 243456 DEBUG nova.compute.manager [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.662 243456 DEBUG nova.compute.manager [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.668 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.674 243456 INFO nova.virt.libvirt.driver [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deleting instance files /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda_del#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.675 243456 INFO nova.virt.libvirt.driver [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deletion of /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda_del complete#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.679 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.680 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.680 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.680 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.681 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.681 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.774 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.775 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274044.6272006, 98150245-079d-43f8-bbd9-3d12a8f26719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.775 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Started (Lifecycle Event)#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.803 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.813 243456 INFO nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 3.55 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.813 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.814 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.825 243456 INFO nova.compute.manager [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.826 243456 DEBUG oslo.service.loopingcall [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.826 243456 DEBUG nova.compute.manager [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.827 243456 DEBUG nova.network.neutron [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.846 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.906 243456 INFO nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 5.40 seconds to build instance.#033[00m
Feb 28 05:20:44 np0005634017 nova_compute[243452]: 2026-02-28 10:20:44.965 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:20:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/484818350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:20:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:20:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/484818350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.160 243456 DEBUG nova.network.neutron [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.178 243456 INFO nova.compute.manager [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 1.35 seconds to deallocate network for instance.#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.241 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.242 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 395 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.3 MiB/s wr, 355 op/s
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.395 243456 DEBUG oslo_concurrency.processutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.450 243456 DEBUG nova.network.neutron [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updated VIF entry in instance network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.451 243456 DEBUG nova.network.neutron [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.474 243456 DEBUG oslo_concurrency.lockutils [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.841 243456 DEBUG nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.842 243456 DEBUG oslo_concurrency.lockutils [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 DEBUG oslo_concurrency.lockutils [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 DEBUG oslo_concurrency.lockutils [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 DEBUG nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 WARNING nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.844 243456 DEBUG nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-deleted-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:20:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:20:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164320592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.965 243456 DEBUG oslo_concurrency.processutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.971 243456 DEBUG nova.compute.provider_tree [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:20:46 np0005634017 nova_compute[243452]: 2026-02-28 10:20:46.990 243456 DEBUG nova.scheduler.client.report [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.031 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.083 243456 INFO nova.scheduler.client.report [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Deleted allocations for instance ea5efc55-0a5e-435e-9805-9a9726c17eda#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.154 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.177 243456 INFO nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Rebuilding instance#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.487 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.511 243456 DEBUG nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.562 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'pci_requests' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.578 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.602 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'resources' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.620 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'migration_context' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.649 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:20:47 np0005634017 nova_compute[243452]: 2026-02-28 10:20:47.653 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:20:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 343 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 337 op/s
Feb 28 05:20:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:48 np0005634017 nova_compute[243452]: 2026-02-28 10:20:48.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:49 np0005634017 nova_compute[243452]: 2026-02-28 10:20:49.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:49 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 28 05:20:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:49Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:75:fe 10.100.0.6
Feb 28 05:20:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:49Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:75:fe 10.100.0.6
Feb 28 05:20:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:50Z|01024|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 05:20:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:20:50Z|01025|binding|INFO|Releasing lport 84e4a764-c038-44dc-af65-1b856dd92486 from this chassis (sb_readonly=0)
Feb 28 05:20:50 np0005634017 nova_compute[243452]: 2026-02-28 10:20:50.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 332 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 329 op/s
Feb 28 05:20:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 340 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.2 MiB/s wr, 305 op/s
Feb 28 05:20:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:53 np0005634017 nova_compute[243452]: 2026-02-28 10:20:53.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:54 np0005634017 nova_compute[243452]: 2026-02-28 10:20:54.144 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274039.14271, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:54 np0005634017 nova_compute[243452]: 2026-02-28 10:20:54.145 243456 INFO nova.compute.manager [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:20:54 np0005634017 nova_compute[243452]: 2026-02-28 10:20:54.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:54 np0005634017 nova_compute[243452]: 2026-02-28 10:20:54.169 243456 DEBUG nova.compute.manager [None req-c73c8896-0bb4-4124-9582-9aaed9b59974 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 355 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Feb 28 05:20:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 364 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 237 op/s
Feb 28 05:20:57 np0005634017 nova_compute[243452]: 2026-02-28 10:20:57.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:57.859 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:20:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:20:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:20:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:20:58 np0005634017 nova_compute[243452]: 2026-02-28 10:20:58.120 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:20:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 382 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Feb 28 05:20:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:20:58 np0005634017 nova_compute[243452]: 2026-02-28 10:20:58.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:20:59 np0005634017 nova_compute[243452]: 2026-02-28 10:20:59.125 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274044.1239626, ea5efc55-0a5e-435e-9805-9a9726c17eda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:20:59 np0005634017 nova_compute[243452]: 2026-02-28 10:20:59.126 243456 INFO nova.compute.manager [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:20:59 np0005634017 nova_compute[243452]: 2026-02-28 10:20:59.148 243456 DEBUG nova.compute.manager [None req-3e097d6c-6827-4017-aee7-fd42a9520aef - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:20:59 np0005634017 nova_compute[243452]: 2026-02-28 10:20:59.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:21:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 390 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 165 op/s
Feb 28 05:21:00 np0005634017 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Deactivated successfully.
Feb 28 05:21:00 np0005634017 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Consumed 11.538s CPU time.
Feb 28 05:21:00 np0005634017 systemd-machined[209480]: Machine qemu-129-instance-00000065 terminated.
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.135 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.142 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance destroyed successfully.#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.148 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance destroyed successfully.#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.471 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting instance files /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.472 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deletion of /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del complete#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.638 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.639 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating image(s)#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.674 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.710 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.747 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.752 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.835 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.837 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.838 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.838 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.871 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.875 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:01 np0005634017 nova_compute[243452]: 2026-02-28 10:21:01.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.109 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.175 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] resizing rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.267 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.269 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Ensure instance console log exists: /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.270 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.270 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.271 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.273 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.278 243456 WARNING nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.290 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.291 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.294 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.294 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.295 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.295 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.296 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.296 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.296 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.297 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.297 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.297 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.298 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.298 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.298 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.299 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.299 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.325 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 391 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.8 MiB/s wr, 135 op/s
Feb 28 05:21:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572976600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.898 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.932 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:02 np0005634017 nova_compute[243452]: 2026-02-28 10:21:02.936 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538922934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.466 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.469 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <uuid>98150245-079d-43f8-bbd9-3d12a8f26719</uuid>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <name>instance-00000065</name>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerShowV257Test-server-2037913836</nova:name>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:21:02</nova:creationTime>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:user uuid="17ee551657ed4a4c8a2f040ff863ad9a">tempest-ServerShowV257Test-1598019138-project-member</nova:user>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <nova:project uuid="124d8457f7e342f1ab81af27d8c3ba3a">tempest-ServerShowV257Test-1598019138</nova:project>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <entry name="serial">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <entry name="uuid">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk.config">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log" append="off"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:21:03 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:21:03 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:21:03 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:21:03 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.532 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.533 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.534 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Using config drive#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.567 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.591 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.626 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'keypairs' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.857 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating config drive at /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.866 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwbghj97n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:03 np0005634017 nova_compute[243452]: 2026-02-28 10:21:03.969 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.012 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwbghj97n" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.049 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.054 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.226 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.226 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting local config drive /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config because it was imported into RBD.#033[00m
Feb 28 05:21:04 np0005634017 systemd-machined[209480]: New machine qemu-130-instance-00000065.
Feb 28 05:21:04 np0005634017 systemd[1]: Started Virtual Machine qemu-130-instance-00000065.
Feb 28 05:21:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 373 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.687 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.688 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.706 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.715 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 98150245-079d-43f8-bbd9-3d12a8f26719 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.716 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274064.7145927, 98150245-079d-43f8-bbd9-3d12a8f26719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.716 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.718 243456 DEBUG nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.719 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.723 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance spawned successfully.#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.723 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.744 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.748 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.767 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.768 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.769 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.769 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.770 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.770 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.775 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.775 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274064.7183204, 98150245-079d-43f8-bbd9-3d12a8f26719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.776 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Started (Lifecycle Event)#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.792 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.793 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.801 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.802 243456 INFO nova.compute.claims [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.814 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.818 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.850 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.864 243456 DEBUG nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.938 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG nova.compute.manager [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG nova.compute.manager [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing instance network info cache due to event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG oslo_concurrency.lockutils [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG oslo_concurrency.lockutils [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.942 243456 DEBUG nova.network.neutron [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:04 np0005634017 nova_compute[243452]: 2026-02-28 10:21:04.982 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.017 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.018 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.018 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.019 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.019 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.021 243456 INFO nova.compute.manager [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Terminating instance#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.022 243456 DEBUG nova.compute.manager [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:21:05 np0005634017 kernel: tap8f265ce7-66 (unregistering): left promiscuous mode
Feb 28 05:21:05 np0005634017 NetworkManager[49805]: <info>  [1772274065.0686] device (tap8f265ce7-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:05Z|01026|binding|INFO|Releasing lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd from this chassis (sb_readonly=0)
Feb 28 05:21:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:05Z|01027|binding|INFO|Setting lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd down in Southbound
Feb 28 05:21:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:05Z|01028|binding|INFO|Removing iface tap8f265ce7-66 ovn-installed in OVS
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.082 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:75:fe 10.100.0.6'], port_security=['fa:16:3e:cb:75:fe 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4a33511-0908-4787-82f4-79505aa9d436', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37214c09-5017-4e54-bd29-785084655f44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1dab7067181d43f1acb702fce4ca882c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b33bdd64-2933-4bbe-ba3e-cc39acc8701b e5271994-622a-4bbd-b4e6-a7d717e49d1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=564ae5b8-c7a3-416a-9979-7200cc2a4584, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f265ce7-668d-4462-8ac4-a9487fc7d3cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.084 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd in datapath 37214c09-5017-4e54-bd29-785084655f44 unbound from our chassis#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.086 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37214c09-5017-4e54-bd29-785084655f44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6022c8-b81c-4d84-8f96-b91527485d95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.088 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37214c09-5017-4e54-bd29-785084655f44 namespace which is not needed anymore#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000064.scope: Deactivated successfully.
Feb 28 05:21:05 np0005634017 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000064.scope: Consumed 12.431s CPU time.
Feb 28 05:21:05 np0005634017 systemd-machined[209480]: Machine qemu-128-instance-00000064 terminated.
Feb 28 05:21:05 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : haproxy version is 2.8.14-c23fe91
Feb 28 05:21:05 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : path to executable is /usr/sbin/haproxy
Feb 28 05:21:05 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [WARNING]  (331098) : Exiting Master process...
Feb 28 05:21:05 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [ALERT]    (331098) : Current worker (331110) exited with code 143 (Terminated)
Feb 28 05:21:05 np0005634017 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [WARNING]  (331098) : All workers exited. Exiting... (0)
Feb 28 05:21:05 np0005634017 systemd[1]: libpod-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope: Deactivated successfully.
Feb 28 05:21:05 np0005634017 conmon[331080]: conmon 92886d7d5b788931cf14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope/container/memory.events
Feb 28 05:21:05 np0005634017 podman[332016]: 2026-02-28 10:21:05.228910433 +0000 UTC m=+0.043650741 container died 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:21:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e-userdata-shm.mount: Deactivated successfully.
Feb 28 05:21:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-248dfaf5b43c5e78179cc2d605133b83fbf37721a8e21412e8df26c566d94eb2-merged.mount: Deactivated successfully.
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.268 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance destroyed successfully.#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.271 243456 DEBUG nova.objects.instance [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lazy-loading 'resources' on Instance uuid c4a33511-0908-4787-82f4-79505aa9d436 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:05 np0005634017 podman[332016]: 2026-02-28 10:21:05.275848848 +0000 UTC m=+0.090589146 container cleanup 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:21:05 np0005634017 systemd[1]: libpod-conmon-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope: Deactivated successfully.
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.306 243456 DEBUG nova.virt.libvirt.vif [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1952070192-ac',id=100,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCNMYLg5p+aB2JY148QNYzXhobB6/drpl2qIRV0JTfQX5v2ytqL0PnW0jus5lqsKTS3QAwQHPeB44EmRsvT+kml+WKzSDkocdN0cBQPBu+t8DfZ2YkjQe5xHSo47UtrMg==',key_name='tempest-TestSecurityGroupsBasicOps-225180537',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:20:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1dab7067181d43f1acb702fce4ca882c',ramdisk_id='',reservation_id='r-6trxlpi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1952070192',owner_user_name='tempest-TestSecurityGroupsBasicOps-1952070192-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:39Z,user_data=None,user_id='d000e26b1aaf4a60bd2c928412e59ca5',uuid=c4a33511-0908-4787-82f4-79505aa9d436,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.307 243456 DEBUG nova.network.os_vif_util [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converting VIF {"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.308 243456 DEBUG nova.network.os_vif_util [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.309 243456 DEBUG os_vif [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.312 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f265ce7-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.319 243456 INFO os_vif [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66')#033[00m
Feb 28 05:21:05 np0005634017 podman[332054]: 2026-02-28 10:21:05.341950216 +0000 UTC m=+0.046272257 container remove 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.348 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caedfa5b-74b7-4fb2-a0f3-270b32c15d89]: (4, ('Sat Feb 28 10:21:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44 (92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e)\n92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e\nSat Feb 28 10:21:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44 (92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e)\n92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.350 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae6cc8b-6e88-4d5a-af6f-c1269112aeb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.353 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37214c09-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 kernel: tap37214c09-50: left promiscuous mode
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.367 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[396bca7d-98d8-4b9b-ac86-f4876bf937e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.369 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.382 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59ce3d21-fa7b-46e3-9955-01b6632d1a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.384 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4572dd6f-ca08-4318-b056-60022e8bceeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.410 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66448698-c887-4af0-9844-a8b6dc44f4d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554560, 'reachable_time': 22780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332087, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.414 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37214c09-5017-4e54-bd29-785084655f44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:21:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.414 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0656b8fb-6c11-4ef6-9ef5-0cc52914d30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:05 np0005634017 systemd[1]: run-netns-ovnmeta\x2d37214c09\x2d5017\x2d4e54\x2dbd29\x2d785084655f44.mount: Deactivated successfully.
Feb 28 05:21:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312576475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.600 243456 INFO nova.virt.libvirt.driver [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deleting instance files /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436_del#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.601 243456 INFO nova.virt.libvirt.driver [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deletion of /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436_del complete#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.610 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.614 243456 DEBUG nova.compute.provider_tree [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.635 243456 DEBUG nova.scheduler.client.report [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.661 243456 INFO nova.compute.manager [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.661 243456 DEBUG oslo.service.loopingcall [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.662 243456 DEBUG nova.compute.manager [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.663 243456 DEBUG nova.network.neutron [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.668 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.668 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.672 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.673 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.741 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.742 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.747 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.764 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.780 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.802 243456 DEBUG nova.compute.manager [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-unplugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG oslo_concurrency.lockutils [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG oslo_concurrency.lockutils [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG oslo_concurrency.lockutils [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG nova.compute.manager [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] No waiting events found dispatching network-vif-unplugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG nova.compute.manager [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-unplugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.854 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.860 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.860 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating image(s)#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.884 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.915 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.939 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:05 np0005634017 nova_compute[243452]: 2026-02-28 10:21:05.942 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.001 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "98150245-079d-43f8-bbd9-3d12a8f26719" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.002 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.002 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "98150245-079d-43f8-bbd9-3d12a8f26719-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.003 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.003 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.005 243456 INFO nova.compute.manager [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Terminating instance#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.005 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "refresh_cache-98150245-079d-43f8-bbd9-3d12a8f26719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.006 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquired lock "refresh_cache-98150245-079d-43f8-bbd9-3d12a8f26719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.006 243456 DEBUG nova.network.neutron [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.008 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.009 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.009 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.010 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.033 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.037 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.102 243456 DEBUG nova.policy [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f18b63d43ee24e59bdff962c9a727213', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.317 243456 DEBUG nova.network.neutron [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.338 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 358 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 4.0 MiB/s wr, 144 op/s
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.424 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] resizing rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.535 243456 DEBUG nova.objects.instance [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'migration_context' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.547 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.548 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Ensure instance console log exists: /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.548 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.548 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.549 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.583 243456 DEBUG nova.network.neutron [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updated VIF entry in instance network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.584 243456 DEBUG nova.network.neutron [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.605 243456 DEBUG oslo_concurrency.lockutils [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.696 243456 DEBUG nova.network.neutron [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.719 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Releasing lock "refresh_cache-98150245-079d-43f8-bbd9-3d12a8f26719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.720 243456 DEBUG nova.compute.manager [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:21:06 np0005634017 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Deactivated successfully.
Feb 28 05:21:06 np0005634017 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Consumed 2.379s CPU time.
Feb 28 05:21:06 np0005634017 systemd-machined[209480]: Machine qemu-130-instance-00000065 terminated.
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.826 243456 DEBUG nova.network.neutron [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.854 243456 INFO nova.compute.manager [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 1.19 seconds to deallocate network for instance.#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.910 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.910 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.944 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance destroyed successfully.#033[00m
Feb 28 05:21:06 np0005634017 nova_compute[243452]: 2026-02-28 10:21:06.945 243456 DEBUG nova.objects.instance [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'resources' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.029 243456 DEBUG oslo_concurrency.processutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.068 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Successfully created port: a920b0c3-c6cf-44d3-9a22-40eda0e09078 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.119 243456 DEBUG nova.compute.manager [req-a027fc06-4800-44a1-a8f9-208f9d1fe81a req-2fd1a55c-7a6a-4813-b1d8-2ceb5b3ebc9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-deleted-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.294 243456 INFO nova.virt.libvirt.driver [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting instance files /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.296 243456 INFO nova.virt.libvirt.driver [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deletion of /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del complete#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.357 243456 INFO nova.compute.manager [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.358 243456 DEBUG oslo.service.loopingcall [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.359 243456 DEBUG nova.compute.manager [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.360 243456 DEBUG nova.network.neutron [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:21:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983807347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.553 243456 DEBUG nova.network.neutron [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.567 243456 DEBUG nova.network.neutron [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.577 243456 DEBUG oslo_concurrency.processutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.583 243456 DEBUG nova.compute.provider_tree [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.613 243456 DEBUG nova.scheduler.client.report [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.621 243456 INFO nova.compute.manager [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 0.26 seconds to deallocate network for instance.#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.649 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.675 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.676 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.679 243456 INFO nova.scheduler.client.report [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Deleted allocations for instance c4a33511-0908-4787-82f4-79505aa9d436#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.748 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.761 243456 DEBUG oslo_concurrency.processutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.808 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Successfully updated port: a920b0c3-c6cf-44d3-9a22-40eda0e09078 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.829 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.829 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.830 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.881 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.882 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.883 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.883 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.884 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] No waiting events found dispatching network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.884 243456 WARNING nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received unexpected event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.885 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.885 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.886 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:07 np0005634017 nova_compute[243452]: 2026-02-28 10:21:07.943 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:21:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246148993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.344 243456 DEBUG oslo_concurrency.processutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.352 243456 DEBUG nova.compute.provider_tree [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.370 243456 DEBUG nova.scheduler.client.report [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:21:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 325 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 191 op/s
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.406 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.463 243456 INFO nova.scheduler.client.report [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Deleted allocations for instance 98150245-079d-43f8-bbd9-3d12a8f26719#033[00m
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.550 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:08 np0005634017 nova_compute[243452]: 2026-02-28 10:21:08.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.528 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.546 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.547 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance network_info: |[{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.547 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.548 243456 DEBUG nova.network.neutron [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.553 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start _get_guest_xml network_info=[{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.559 243456 WARNING nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.565 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.566 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.575 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.576 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.576 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.577 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.577 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.579 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.579 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.579 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.580 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.580 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:21:09 np0005634017 nova_compute[243452]: 2026-02-28 10:21:09.584 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674454693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.148 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.183 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.191 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 321 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Feb 28 05:21:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2288084726' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.745 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.747 243456 DEBUG nova.virt.libvirt.vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_n
ame='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:05Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.748 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.749 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.750 243456 DEBUG nova.objects.instance [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.770 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <uuid>ec785d5e-9b62-4b52-a727-f64173b4b853</uuid>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <name>instance-00000066</name>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1346326288</nova:name>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:21:09</nova:creationTime>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:user uuid="f18b63d43ee24e59bdff962c9a727213">tempest-ServerRescueTestJSONUnderV235-749971841-project-member</nova:user>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:project uuid="14500a4ea1d94c0e9c58b076f5c918b5">tempest-ServerRescueTestJSONUnderV235-749971841</nova:project>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <nova:port uuid="a920b0c3-c6cf-44d3-9a22-40eda0e09078">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <entry name="serial">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <entry name="uuid">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:a2:a9:65"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <target dev="tapa920b0c3-c6"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/console.log" append="off"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:21:10 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:21:10 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:21:10 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:21:10 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Preparing to wait for external event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.773 243456 DEBUG nova.virt.libvirt.vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:05Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.774 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.774 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.775 243456 DEBUG os_vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.782 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa920b0c3-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.782 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa920b0c3-c6, col_values=(('external_ids', {'iface-id': 'a920b0c3-c6cf-44d3-9a22-40eda0e09078', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:a9:65', 'vm-uuid': 'ec785d5e-9b62-4b52-a727-f64173b4b853'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:10 np0005634017 NetworkManager[49805]: <info>  [1772274070.7853] manager: (tapa920b0c3-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.791 243456 INFO os_vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6')#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.838 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.839 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.839 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No VIF found with MAC fa:16:3e:a2:a9:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.839 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Using config drive#033[00m
Feb 28 05:21:10 np0005634017 nova_compute[243452]: 2026-02-28 10:21:10.862 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.169 243456 DEBUG nova.network.neutron [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.170 243456 DEBUG nova.network.neutron [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.185 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.320 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating config drive at /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.323 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn2ckyxj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.462 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn2ckyxj7" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.486 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.489 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.618 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.620 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deleting local config drive /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config because it was imported into RBD.#033[00m
Feb 28 05:21:11 np0005634017 kernel: tapa920b0c3-c6: entered promiscuous mode
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:11Z|01029|binding|INFO|Claiming lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 for this chassis.
Feb 28 05:21:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:11Z|01030|binding|INFO|a920b0c3-c6cf-44d3-9a22-40eda0e09078: Claiming fa:16:3e:a2:a9:65 10.100.0.2
Feb 28 05:21:11 np0005634017 NetworkManager[49805]: <info>  [1772274071.6786] manager: (tapa920b0c3-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Feb 28 05:21:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:11Z|01031|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 ovn-installed in OVS
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:11Z|01032|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 up in Southbound
Feb 28 05:21:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.687 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.689 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 bound to our chassis#033[00m
Feb 28 05:21:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.689 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:11 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba604f51-4d26-4811-8b32-4a27181de233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:11 np0005634017 systemd-machined[209480]: New machine qemu-131-instance-00000066.
Feb 28 05:21:11 np0005634017 systemd[1]: Started Virtual Machine qemu-131-instance-00000066.
Feb 28 05:21:11 np0005634017 systemd-udevd[332457]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:21:11 np0005634017 NetworkManager[49805]: <info>  [1772274071.7528] device (tapa920b0c3-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:21:11 np0005634017 NetworkManager[49805]: <info>  [1772274071.7533] device (tapa920b0c3-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:21:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:11Z|01033|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 05:21:11 np0005634017 nova_compute[243452]: 2026-02-28 10:21:11.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.006 243456 DEBUG nova.compute.manager [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.007 243456 DEBUG oslo_concurrency.lockutils [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.007 243456 DEBUG oslo_concurrency.lockutils [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.008 243456 DEBUG oslo_concurrency.lockutils [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.009 243456 DEBUG nova.compute.manager [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Processing event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.155 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274072.1553555, ec785d5e-9b62-4b52-a727-f64173b4b853 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.156 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Started (Lifecycle Event)#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.159 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.162 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.166 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance spawned successfully.#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.167 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.195 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.198 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.207 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.208 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.208 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.208 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.209 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.209 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.262 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274072.1556273, ec785d5e-9b62-4b52-a727-f64173b4b853 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.262 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.296 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.300 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274072.1614273, ec785d5e-9b62-4b52-a727-f64173b4b853 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.300 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.306 243456 INFO nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 6.45 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.306 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.335 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.342 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.365 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:21:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.382 243456 INFO nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 7.62 seconds to build instance.#033[00m
Feb 28 05:21:12 np0005634017 nova_compute[243452]: 2026-02-28 10:21:12.404 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.823 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.824 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.824 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.824 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.825 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.826 243456 INFO nova.compute.manager [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Terminating instance#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.828 243456 DEBUG nova.compute.manager [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:21:13 np0005634017 kernel: tap07b4c83e-2f (unregistering): left promiscuous mode
Feb 28 05:21:13 np0005634017 NetworkManager[49805]: <info>  [1772274073.8773] device (tap07b4c83e-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:21:13 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:13Z|01034|binding|INFO|Releasing lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb from this chassis (sb_readonly=0)
Feb 28 05:21:13 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:13Z|01035|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb down in Southbound
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:13 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:13Z|01036|binding|INFO|Removing iface tap07b4c83e-2f ovn-installed in OVS
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.894 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.896 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb unbound from our chassis#033[00m
Feb 28 05:21:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.897 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df5b5e58-da82-40fd-b4b8-660edea3cecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:21:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66ff4ab7-3bba-47e8-89bc-21f225418246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.903 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb namespace which is not needed anymore#033[00m
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:13 np0005634017 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000061.scope: Deactivated successfully.
Feb 28 05:21:13 np0005634017 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000061.scope: Consumed 15.298s CPU time.
Feb 28 05:21:13 np0005634017 systemd-machined[209480]: Machine qemu-121-instance-00000061 terminated.
Feb 28 05:21:13 np0005634017 nova_compute[243452]: 2026-02-28 10:21:13.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : haproxy version is 2.8.14-c23fe91
Feb 28 05:21:14 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : path to executable is /usr/sbin/haproxy
Feb 28 05:21:14 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [WARNING]  (327852) : Exiting Master process...
Feb 28 05:21:14 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [WARNING]  (327852) : Exiting Master process...
Feb 28 05:21:14 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [ALERT]    (327852) : Current worker (327854) exited with code 143 (Terminated)
Feb 28 05:21:14 np0005634017 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [WARNING]  (327852) : All workers exited. Exiting... (0)
Feb 28 05:21:14 np0005634017 systemd[1]: libpod-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c.scope: Deactivated successfully.
Feb 28 05:21:14 np0005634017 podman[332530]: 2026-02-28 10:21:14.037959922 +0000 UTC m=+0.041924391 container died 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:21:14 np0005634017 NetworkManager[49805]: <info>  [1772274074.0466] manager: (tap07b4c83e-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Feb 28 05:21:14 np0005634017 kernel: tap07b4c83e-2f: entered promiscuous mode
Feb 28 05:21:14 np0005634017 kernel: tap07b4c83e-2f (unregistering): left promiscuous mode
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01037|binding|INFO|Claiming lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb for this chassis.
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01038|binding|INFO|07b4c83e-2fe2-42c9-a758-c50ddf0919fb: Claiming fa:16:3e:7d:73:58 10.100.0.9
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.060 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.067 243456 INFO nova.virt.libvirt.driver [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance destroyed successfully.#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.068 243456 DEBUG nova.objects.instance [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid e4349bd8-727a-4533-9edd-b2d54353a617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01039|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb ovn-installed in OVS
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01040|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb up in Southbound
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01041|binding|INFO|Releasing lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb from this chassis (sb_readonly=1)
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01042|binding|INFO|Removing iface tap07b4c83e-2f ovn-installed in OVS
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01043|if_status|INFO|Dropped 1 log messages in last 30 seconds (most recently, 30 seconds ago) due to excessive rate
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01044|if_status|INFO|Not setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb down as sb is readonly
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.074 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c-userdata-shm.mount: Deactivated successfully.
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01045|binding|INFO|Releasing lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb from this chassis (sb_readonly=0)
Feb 28 05:21:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:14Z|01046|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb down in Southbound
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.085 243456 DEBUG nova.virt.libvirt.vif [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=97,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNIQnNUL9XDPDLTHZklFyfDZYSt9AYWyQxG0r/+WdFB7vyQw7J2acteYFQFpxgWWQ/0J0kXyBJr3KJn/hEaVogEETmejopRSrT8PjwOvTjZAhY243jVoswANs6Qv2qSuA==',key_name='tempest-TestSecurityGroupsBasicOps-146961595',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7bnrc9wi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:53Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=e4349bd8-727a-4533-9edd-b2d54353a617,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.085 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.085 243456 DEBUG nova.network.os_vif_util [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.086 243456 DEBUG nova.network.os_vif_util [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.086 243456 DEBUG os_vif [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.089 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07b4c83e-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-835d604bdbc864696e0b1086b090dae849a757f07b6c5ac638b1186ab31417bd-merged.mount: Deactivated successfully.
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.099 243456 INFO os_vif [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f')#033[00m
Feb 28 05:21:14 np0005634017 podman[332530]: 2026-02-28 10:21:14.102260298 +0000 UTC m=+0.106224617 container cleanup 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:21:14 np0005634017 systemd[1]: libpod-conmon-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c.scope: Deactivated successfully.
Feb 28 05:21:14 np0005634017 podman[332548]: 2026-02-28 10:21:14.137048522 +0000 UTC m=+0.073472262 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:21:14 np0005634017 podman[332557]: 2026-02-28 10:21:14.166917864 +0000 UTC m=+0.101122340 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 28 05:21:14 np0005634017 podman[332592]: 2026-02-28 10:21:14.173576966 +0000 UTC m=+0.049888871 container remove 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.180 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acf59703-4eae-4e94-819a-e77d61ad86e7]: (4, ('Sat Feb 28 10:21:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb (37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c)\n37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c\nSat Feb 28 10:21:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb (37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c)\n37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.182 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8553538-2381-4a4f-81a8-f4e15ba41b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.183 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf5b5e58-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 kernel: tapdf5b5e58-d0: left promiscuous mode
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cb342b-fd42-4046-b8eb-baa55c3d8ef3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18e094d3-318a-47e8-be95-223c6ecaf6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.208 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c81a0c-10d9-40bb-a4f0-aa624bcb7a14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0ace15-b6a9-49ac-9220-0f7e6b5d08cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549928, 'reachable_time': 29601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332637, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 systemd[1]: run-netns-ovnmeta\x2ddf5b5e58\x2dda82\x2d40fd\x2db4b8\x2d660edea3cecb.mount: Deactivated successfully.
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.229 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.229 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[63cb1bc2-cf52-45b1-92ad-bf5b2892b917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.231 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb unbound from our chassis#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.232 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df5b5e58-da82-40fd-b4b8-660edea3cecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.233 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cc5f60-f46d-47be-8182-6f92537dce1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.234 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb unbound from our chassis#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.235 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df5b5e58-da82-40fd-b4b8-660edea3cecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:21:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbc6f57-e999-4cfc-8bc3-be0873ef0dab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.285 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.287 243456 WARNING nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.287 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.287 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing instance network info cache due to event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.288 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.288 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.288 243456 DEBUG nova.network.neutron [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.356 243456 INFO nova.virt.libvirt.driver [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deleting instance files /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617_del#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.357 243456 INFO nova.virt.libvirt.driver [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deletion of /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617_del complete#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.363 243456 DEBUG nova.compute.manager [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-unplugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.364 243456 DEBUG oslo_concurrency.lockutils [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.364 243456 DEBUG oslo_concurrency.lockutils [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.364 243456 DEBUG oslo_concurrency.lockutils [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.365 243456 DEBUG nova.compute.manager [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-unplugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.365 243456 DEBUG nova.compute.manager [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-unplugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:21:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.424 243456 INFO nova.compute.manager [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.425 243456 DEBUG oslo.service.loopingcall [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.425 243456 DEBUG nova.compute.manager [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.425 243456 DEBUG nova.network.neutron [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.449 243456 INFO nova.compute.manager [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Rescuing
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.450 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.450 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:21:14 np0005634017 nova_compute[243452]: 2026-02-28 10:21:14.451 243456 DEBUG nova.network.neutron [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:21:15 np0005634017 nova_compute[243452]: 2026-02-28 10:21:15.371 243456 DEBUG nova.network.neutron [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updated VIF entry in instance network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:21:15 np0005634017 nova_compute[243452]: 2026-02-28 10:21:15.373 243456 DEBUG nova.network.neutron [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:21:15 np0005634017 nova_compute[243452]: 2026-02-28 10:21:15.394 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.140 243456 DEBUG nova.network.neutron [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.170 243456 INFO nova.compute.manager [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 1.74 seconds to deallocate network for instance.
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.227 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.228 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.342 243456 DEBUG nova.network.neutron [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.368 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:21:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 224 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 242 op/s
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.538 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 WARNING nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received unexpected event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with vm_state deleted and task_state None.
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 WARNING nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received unexpected event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with vm_state deleted and task_state None.
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-deleted-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.636 243456 DEBUG oslo_concurrency.processutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:16 np0005634017 nova_compute[243452]: 2026-02-28 10:21:16.749 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 05:21:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573961724' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:17 np0005634017 nova_compute[243452]: 2026-02-28 10:21:17.186 243456 DEBUG oslo_concurrency.processutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:17 np0005634017 nova_compute[243452]: 2026-02-28 10:21:17.192 243456 DEBUG nova.compute.provider_tree [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:21:17 np0005634017 nova_compute[243452]: 2026-02-28 10:21:17.209 243456 DEBUG nova.scheduler.client.report [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:21:17 np0005634017 nova_compute[243452]: 2026-02-28 10:21:17.234 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:17 np0005634017 nova_compute[243452]: 2026-02-28 10:21:17.262 243456 INFO nova.scheduler.client.report [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance e4349bd8-727a-4533-9edd-b2d54353a617
Feb 28 05:21:17 np0005634017 nova_compute[243452]: 2026-02-28 10:21:17.330 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:18 np0005634017 nova_compute[243452]: 2026-02-28 10:21:18.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:18.365 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:21:18 np0005634017 nova_compute[243452]: 2026-02-28 10:21:18.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:18.367 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 05:21:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 200 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Feb 28 05:21:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:18 np0005634017 nova_compute[243452]: 2026-02-28 10:21:18.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:19 np0005634017 nova_compute[243452]: 2026-02-28 10:21:19.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:19 np0005634017 nova_compute[243452]: 2026-02-28 10:21:19.760 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:21:20 np0005634017 nova_compute[243452]: 2026-02-28 10:21:20.254 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274065.2520442, c4a33511-0908-4787-82f4-79505aa9d436 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:21:20 np0005634017 nova_compute[243452]: 2026-02-28 10:21:20.255 243456 INFO nova.compute.manager [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Stopped (Lifecycle Event)
Feb 28 05:21:20 np0005634017 nova_compute[243452]: 2026-02-28 10:21:20.278 243456 DEBUG nova.compute.manager [None req-21b203dd-f74f-4455-9189-1a484c9c647c - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:21:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:20.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:21:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 173 op/s
Feb 28 05:21:21 np0005634017 nova_compute[243452]: 2026-02-28 10:21:21.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:21 np0005634017 nova_compute[243452]: 2026-02-28 10:21:21.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:21 np0005634017 nova_compute[243452]: 2026-02-28 10:21:21.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:21 np0005634017 nova_compute[243452]: 2026-02-28 10:21:21.938 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274066.9368863, 98150245-079d-43f8-bbd9-3d12a8f26719 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:21:21 np0005634017 nova_compute[243452]: 2026-02-28 10:21:21.939 243456 INFO nova.compute.manager [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Stopped (Lifecycle Event)
Feb 28 05:21:21 np0005634017 nova_compute[243452]: 2026-02-28 10:21:21.959 243456 DEBUG nova.compute.manager [None req-4ee28fe9-5307-412c-9594-c8c131221a3d - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:21:22 np0005634017 nova_compute[243452]: 2026-02-28 10:21:22.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:21:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 314 KiB/s wr, 145 op/s
Feb 28 05:21:22 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:21:23 np0005634017 nova_compute[243452]: 2026-02-28 10:21:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.605305978 +0000 UTC m=+0.046948696 container create 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:21:23 np0005634017 systemd[1]: Started libpod-conmon-2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8.scope.
Feb 28 05:21:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.575441036 +0000 UTC m=+0.017083744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:21:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.692429843 +0000 UTC m=+0.134072551 container init 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.697241272 +0000 UTC m=+0.138883950 container start 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.700778044 +0000 UTC m=+0.142420742 container attach 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:21:23 np0005634017 romantic_colden[332822]: 167 167
Feb 28 05:21:23 np0005634017 systemd[1]: libpod-2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8.scope: Deactivated successfully.
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.702528824 +0000 UTC m=+0.144171502 container died 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:21:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-50d8d129f69e9d21db5f72fb98024947df6c1de556072f6c3789b9cd3ad703c5-merged.mount: Deactivated successfully.
Feb 28 05:21:23 np0005634017 podman[332805]: 2026-02-28 10:21:23.747150202 +0000 UTC m=+0.188792920 container remove 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:21:23 np0005634017 systemd[1]: libpod-conmon-2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8.scope: Deactivated successfully.
Feb 28 05:21:23 np0005634017 podman[332845]: 2026-02-28 10:21:23.9261936 +0000 UTC m=+0.064749880 container create 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:21:23 np0005634017 systemd[1]: Started libpod-conmon-219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0.scope.
Feb 28 05:21:23 np0005634017 nova_compute[243452]: 2026-02-28 10:21:23.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:23 np0005634017 podman[332845]: 2026-02-28 10:21:23.896957796 +0000 UTC m=+0.035514066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:21:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:24 np0005634017 podman[332845]: 2026-02-28 10:21:24.030313045 +0000 UTC m=+0.168869305 container init 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:21:24 np0005634017 podman[332845]: 2026-02-28 10:21:24.03739946 +0000 UTC m=+0.175955700 container start 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:21:24 np0005634017 podman[332845]: 2026-02-28 10:21:24.040842819 +0000 UTC m=+0.179399069 container attach 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 05:21:24 np0005634017 nova_compute[243452]: 2026-02-28 10:21:24.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:24 np0005634017 nova_compute[243452]: 2026-02-28 10:21:24.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:24 np0005634017 nova_compute[243452]: 2026-02-28 10:21:24.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 28 05:21:24 np0005634017 laughing_poitras[332861]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:21:24 np0005634017 laughing_poitras[332861]: --> All data devices are unavailable
Feb 28 05:21:24 np0005634017 systemd[1]: libpod-219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0.scope: Deactivated successfully.
Feb 28 05:21:24 np0005634017 podman[332845]: 2026-02-28 10:21:24.495863963 +0000 UTC m=+0.634420233 container died 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:21:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9-merged.mount: Deactivated successfully.
Feb 28 05:21:24 np0005634017 podman[332845]: 2026-02-28 10:21:24.630498029 +0000 UTC m=+0.769054289 container remove 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:21:24 np0005634017 systemd[1]: libpod-conmon-219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0.scope: Deactivated successfully.
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.080902978 +0000 UTC m=+0.060984941 container create 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.040328307 +0000 UTC m=+0.020410280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:21:25 np0005634017 systemd[1]: Started libpod-conmon-5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f.scope.
Feb 28 05:21:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.306974534 +0000 UTC m=+0.287056567 container init 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.316845258 +0000 UTC m=+0.296927251 container start 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:21:25 np0005634017 romantic_booth[332971]: 167 167
Feb 28 05:21:25 np0005634017 systemd[1]: libpod-5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f.scope: Deactivated successfully.
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.354452374 +0000 UTC m=+0.334534407 container attach 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.35502027 +0000 UTC m=+0.335102263 container died 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:21:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b9ab46d8f9f2003216e37bcea31ac514457aa9875a87d8b26c44d74068173003-merged.mount: Deactivated successfully.
Feb 28 05:21:25 np0005634017 podman[332955]: 2026-02-28 10:21:25.500269403 +0000 UTC m=+0.480351346 container remove 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:21:25 np0005634017 systemd[1]: libpod-conmon-5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f.scope: Deactivated successfully.
Feb 28 05:21:25 np0005634017 podman[332997]: 2026-02-28 10:21:25.639124271 +0000 UTC m=+0.022974134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:21:25 np0005634017 podman[332997]: 2026-02-28 10:21:25.800840869 +0000 UTC m=+0.184690772 container create 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:21:25 np0005634017 systemd[1]: Started libpod-conmon-873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a.scope.
Feb 28 05:21:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:26 np0005634017 podman[332997]: 2026-02-28 10:21:26.010891662 +0000 UTC m=+0.394741545 container init 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:21:26 np0005634017 podman[332997]: 2026-02-28 10:21:26.019708286 +0000 UTC m=+0.403558159 container start 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:21:26 np0005634017 podman[332997]: 2026-02-28 10:21:26.024261377 +0000 UTC m=+0.408111290 container attach 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]: {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:    "0": [
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:        {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "devices": [
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "/dev/loop3"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            ],
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_name": "ceph_lv0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_size": "21470642176",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "name": "ceph_lv0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "tags": {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cluster_name": "ceph",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.crush_device_class": "",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.encrypted": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.objectstore": "bluestore",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osd_id": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.type": "block",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.vdo": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.with_tpm": "0"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            },
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "type": "block",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "vg_name": "ceph_vg0"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:        }
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:    ],
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:    "1": [
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:        {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "devices": [
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "/dev/loop4"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            ],
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_name": "ceph_lv1",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_size": "21470642176",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "name": "ceph_lv1",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "tags": {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cluster_name": "ceph",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.crush_device_class": "",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.encrypted": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.objectstore": "bluestore",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osd_id": "1",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.type": "block",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.vdo": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.with_tpm": "0"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            },
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "type": "block",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "vg_name": "ceph_vg1"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:        }
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:    ],
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:    "2": [
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:        {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "devices": [
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "/dev/loop5"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            ],
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_name": "ceph_lv2",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_size": "21470642176",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "name": "ceph_lv2",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "tags": {
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.cluster_name": "ceph",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.crush_device_class": "",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.encrypted": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.objectstore": "bluestore",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osd_id": "2",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.type": "block",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.vdo": "0",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:                "ceph.with_tpm": "0"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            },
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "type": "block",
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:            "vg_name": "ceph_vg2"
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:        }
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]:    ]
Feb 28 05:21:26 np0005634017 pedantic_dirac[333012]: }
Feb 28 05:21:26 np0005634017 nova_compute[243452]: 2026-02-28 10:21:26.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:26 np0005634017 systemd[1]: libpod-873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a.scope: Deactivated successfully.
Feb 28 05:21:26 np0005634017 podman[332997]: 2026-02-28 10:21:26.352314536 +0000 UTC m=+0.736164409 container died 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:21:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1-merged.mount: Deactivated successfully.
Feb 28 05:21:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 223 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 127 op/s
Feb 28 05:21:26 np0005634017 podman[332997]: 2026-02-28 10:21:26.398490679 +0000 UTC m=+0.782340552 container remove 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:21:26 np0005634017 systemd[1]: libpod-conmon-873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a.scope: Deactivated successfully.
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.788436104 +0000 UTC m=+0.041842919 container create 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:21:26 np0005634017 nova_compute[243452]: 2026-02-28 10:21:26.803 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:21:26 np0005634017 systemd[1]: Started libpod-conmon-4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45.scope.
Feb 28 05:21:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.772108463 +0000 UTC m=+0.025515278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.876156686 +0000 UTC m=+0.129563481 container init 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.883487918 +0000 UTC m=+0.136894723 container start 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:21:26 np0005634017 vigorous_haslett[333113]: 167 167
Feb 28 05:21:26 np0005634017 systemd[1]: libpod-4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45.scope: Deactivated successfully.
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.90745514 +0000 UTC m=+0.160861955 container attach 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.908442278 +0000 UTC m=+0.161849083 container died 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:21:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9ad8f14d99fa5e12c0253c426bc30a6a496e3f63a0d493bf57a81ac000ff7d45-merged.mount: Deactivated successfully.
Feb 28 05:21:26 np0005634017 podman[333097]: 2026-02-28 10:21:26.974847215 +0000 UTC m=+0.228254010 container remove 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:21:26 np0005634017 systemd[1]: libpod-conmon-4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45.scope: Deactivated successfully.
Feb 28 05:21:27 np0005634017 podman[333139]: 2026-02-28 10:21:27.132630929 +0000 UTC m=+0.045930857 container create fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 05:21:27 np0005634017 systemd[1]: Started libpod-conmon-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope.
Feb 28 05:21:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:27 np0005634017 podman[333139]: 2026-02-28 10:21:27.109936554 +0000 UTC m=+0.023236522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:21:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:27 np0005634017 podman[333139]: 2026-02-28 10:21:27.237858996 +0000 UTC m=+0.151159034 container init fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:21:27 np0005634017 podman[333139]: 2026-02-28 10:21:27.246134405 +0000 UTC m=+0.159434343 container start fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:21:27 np0005634017 podman[333139]: 2026-02-28 10:21:27.250675126 +0000 UTC m=+0.163975144 container attach fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:21:27 np0005634017 nova_compute[243452]: 2026-02-28 10:21:27.320 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:27 np0005634017 lvm[333234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:21:27 np0005634017 lvm[333234]: VG ceph_vg1 finished
Feb 28 05:21:27 np0005634017 lvm[333233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:21:27 np0005634017 lvm[333233]: VG ceph_vg0 finished
Feb 28 05:21:27 np0005634017 lvm[333236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:21:27 np0005634017 lvm[333236]: VG ceph_vg2 finished
Feb 28 05:21:28 np0005634017 ecstatic_kepler[333155]: {}
Feb 28 05:21:28 np0005634017 systemd[1]: libpod-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope: Deactivated successfully.
Feb 28 05:21:28 np0005634017 podman[333139]: 2026-02-28 10:21:28.039363541 +0000 UTC m=+0.952663519 container died fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:21:28 np0005634017 systemd[1]: libpod-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope: Consumed 1.138s CPU time.
Feb 28 05:21:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2-merged.mount: Deactivated successfully.
Feb 28 05:21:28 np0005634017 podman[333139]: 2026-02-28 10:21:28.08404254 +0000 UTC m=+0.997342498 container remove fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:21:28 np0005634017 systemd[1]: libpod-conmon-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope: Deactivated successfully.
Feb 28 05:21:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:21:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:21:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:21:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 233 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 28 05:21:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:28 np0005634017 nova_compute[243452]: 2026-02-28 10:21:28.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.065 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274074.0623562, e4349bd8-727a-4533-9edd-b2d54353a617 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.065 243456 INFO nova.compute.manager [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:21:29 np0005634017 kernel: tapa920b0c3-c6 (unregistering): left promiscuous mode
Feb 28 05:21:29 np0005634017 NetworkManager[49805]: <info>  [1772274089.0788] device (tapa920b0c3-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:21:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:29Z|01047|binding|INFO|Releasing lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 from this chassis (sb_readonly=0)
Feb 28 05:21:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:29Z|01048|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 down in Southbound
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:29Z|01049|binding|INFO|Removing iface tapa920b0c3-c6 ovn-installed in OVS
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.091 243456 DEBUG nova.compute.manager [None req-32049b83-c46e-48bf-a77c-cce9a3429e69 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:21:29
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:21:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:21:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups']
Feb 28 05:21:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:21:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.118 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.120 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 unbound from our chassis#033[00m
Feb 28 05:21:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.121 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:21:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.122 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[448be03c-3e16-42e8-83af-a40107fe1c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:29 np0005634017 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Deactivated successfully.
Feb 28 05:21:29 np0005634017 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Consumed 12.152s CPU time.
Feb 28 05:21:29 np0005634017 systemd-machined[209480]: Machine qemu-131-instance-00000066 terminated.
Feb 28 05:21:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:21:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.822 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.832 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance destroyed successfully.#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.833 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.856 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Attempting rescue#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.858 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.865 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.866 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating image(s)#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.902 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.906 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.958 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:29 np0005634017 nova_compute[243452]: 2026-02-28 10:21:29.997 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.003 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.083 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.084 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.085 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.085 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.114 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.120 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.387 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.388 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'migration_context' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.414 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.415 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start _get_guest_xml network_info=[{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "vif_mac": "fa:16:3e:a2:a9:65"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.415 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'resources' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.433 243456 WARNING nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.439 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.441 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.445 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.446 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.446 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.446 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.447 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.447 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.447 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.467 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.529 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.546 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.547 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.548 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.602 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.603 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.603 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.604 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.605 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:21:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:21:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1063165600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.985 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:30 np0005634017 nova_compute[243452]: 2026-02-28 10:21:30.987 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543630777' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.193 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.271 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.272 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.412 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.413 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3741MB free_disk=59.94204984605312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.413 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.413 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532053339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.517 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ec785d5e-9b62-4b52-a727-f64173b4b853 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.517 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.517 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.526 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.527 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:31 np0005634017 nova_compute[243452]: 2026-02-28 10:21:31.585 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017787345' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.044 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.048 243456 DEBUG nova.virt.libvirt.vif [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:12Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "vif_mac": "fa:16:3e:a2:a9:65"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.049 243456 DEBUG nova.network.os_vif_util [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "vif_mac": "fa:16:3e:a2:a9:65"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.050 243456 DEBUG nova.network.os_vif_util [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.053 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.071 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <uuid>ec785d5e-9b62-4b52-a727-f64173b4b853</uuid>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <name>instance-00000066</name>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1346326288</nova:name>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:21:30</nova:creationTime>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:user uuid="f18b63d43ee24e59bdff962c9a727213">tempest-ServerRescueTestJSONUnderV235-749971841-project-member</nova:user>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:project uuid="14500a4ea1d94c0e9c58b076f5c918b5">tempest-ServerRescueTestJSONUnderV235-749971841</nova:project>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <nova:port uuid="a920b0c3-c6cf-44d3-9a22-40eda0e09078">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <entry name="serial">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <entry name="uuid">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <target dev="vdb" bus="virtio"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:a2:a9:65"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <target dev="tapa920b0c3-c6"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/console.log" append="off"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:21:32 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:21:32 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:21:32 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:21:32 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.084 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance destroyed successfully.#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.132 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.133 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.133 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.134 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No VIF found with MAC fa:16:3e:a2:a9:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.135 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Using config drive#033[00m
Feb 28 05:21:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950168576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.168 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.176 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.182 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.189 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.198 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.234 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.235 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:32 np0005634017 nova_compute[243452]: 2026-02-28 10:21:32.236 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'keypairs' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 246 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.386 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating config drive at /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.393 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp63_kwpac execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.540 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp63_kwpac" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.579 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.584 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.760 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.761 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deleting local config drive /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue because it was imported into RBD.#033[00m
Feb 28 05:21:33 np0005634017 kernel: tapa920b0c3-c6: entered promiscuous mode
Feb 28 05:21:33 np0005634017 NetworkManager[49805]: <info>  [1772274093.8172] manager: (tapa920b0c3-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Feb 28 05:21:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:33Z|01050|binding|INFO|Claiming lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 for this chassis.
Feb 28 05:21:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:33Z|01051|binding|INFO|a920b0c3-c6cf-44d3-9a22-40eda0e09078: Claiming fa:16:3e:a2:a9:65 10.100.0.2
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.828 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.829 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 bound to our chassis#033[00m
Feb 28 05:21:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.829 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:21:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.830 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f261b4a-7b63-4770-a39d-0ff0b72600de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:33Z|01052|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 ovn-installed in OVS
Feb 28 05:21:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:33Z|01053|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 up in Southbound
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.836 243456 DEBUG nova.compute.manager [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG oslo_concurrency.lockutils [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG oslo_concurrency.lockutils [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG oslo_concurrency.lockutils [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG nova.compute.manager [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.838 243456 WARNING nova.compute.manager [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state active and task_state rescuing.#033[00m
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:33 np0005634017 systemd-udevd[333565]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:21:33 np0005634017 systemd-machined[209480]: New machine qemu-132-instance-00000066.
Feb 28 05:21:33 np0005634017 NetworkManager[49805]: <info>  [1772274093.8620] device (tapa920b0c3-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:21:33 np0005634017 NetworkManager[49805]: <info>  [1772274093.8626] device (tapa920b0c3-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:21:33 np0005634017 systemd[1]: Started Virtual Machine qemu-132-instance-00000066.
Feb 28 05:21:33 np0005634017 nova_compute[243452]: 2026-02-28 10:21:33.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.003 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.030 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.031 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.324 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ec785d5e-9b62-4b52-a727-f64173b4b853 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.325 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274094.3240635, ec785d5e-9b62-4b52-a727-f64173b4b853 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.325 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.329 243456 DEBUG nova.compute.manager [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.344 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.365 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.368 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 259 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 3.1 MiB/s wr, 76 op/s
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.391 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.392 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274094.3252978, ec785d5e-9b62-4b52-a727-f64173b4b853 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.392 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Started (Lifecycle Event)#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.420 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:34 np0005634017 nova_compute[243452]: 2026-02-28 10:21:34.424 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:36 np0005634017 nova_compute[243452]: 2026-02-28 10:21:36.168 243456 DEBUG nova.compute.manager [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:36 np0005634017 nova_compute[243452]: 2026-02-28 10:21:36.168 243456 DEBUG oslo_concurrency.lockutils [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:36 np0005634017 nova_compute[243452]: 2026-02-28 10:21:36.169 243456 DEBUG oslo_concurrency.lockutils [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:36 np0005634017 nova_compute[243452]: 2026-02-28 10:21:36.169 243456 DEBUG oslo_concurrency.lockutils [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:36 np0005634017 nova_compute[243452]: 2026-02-28 10:21:36.169 243456 DEBUG nova.compute.manager [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:36 np0005634017 nova_compute[243452]: 2026-02-28 10:21:36.170 243456 WARNING nova.compute.manager [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state None.#033[00m
Feb 28 05:21:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 505 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Feb 28 05:21:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Feb 28 05:21:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:38 np0005634017 nova_compute[243452]: 2026-02-28 10:21:38.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:39 np0005634017 nova_compute[243452]: 2026-02-28 10:21:39.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.685 243456 DEBUG nova.compute.manager [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.685 243456 DEBUG nova.compute.manager [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.687 243456 DEBUG oslo_concurrency.lockutils [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.688 243456 DEBUG oslo_concurrency.lockutils [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.688 243456 DEBUG nova.network.neutron [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.740 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.741 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.761 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.848 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.849 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.858 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.859 243456 INFO nova.compute.claims [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:21:40 np0005634017 nova_compute[243452]: 2026-02-28 10:21:40.990 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001119069245175675 of space, bias 1.0, pg target 0.3357207735527025 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493206594747484 of space, bias 1.0, pg target 0.7479619784242452 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.57530516942218e-07 of space, bias 4.0, pg target 0.0009090366203306615 quantized to 16 (current 16)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:21:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.294 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.295 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.298 243456 DEBUG nova.compute.manager [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.299 243456 DEBUG nova.compute.manager [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.299 243456 DEBUG oslo_concurrency.lockutils [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.318 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.393 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553236120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.608 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.613 243456 DEBUG nova.compute.provider_tree [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.631 243456 DEBUG nova.scheduler.client.report [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.664 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.665 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.668 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.677 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.677 243456 INFO nova.compute.claims [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.727 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.729 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.751 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.788 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.852 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.897 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.901 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.902 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Creating image(s)
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.944 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:41 np0005634017 nova_compute[243452]: 2026-02-28 10:21:41.979 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.008 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.012 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.102 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.103 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.104 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.104 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.125 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.129 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.395 243456 DEBUG nova.policy [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '70e8f691ae0f4768bb68cd8d497033e8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '335faa1173e64cf8a7b107ae6238353d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.400 243456 DEBUG nova.network.neutron [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.401 243456 DEBUG nova.network.neutron [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.411 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994791735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.447 243456 DEBUG oslo_concurrency.lockutils [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.448 243456 DEBUG oslo_concurrency.lockutils [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.449 243456 DEBUG nova.network.neutron [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.453 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.501 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] resizing rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.536 243456 DEBUG nova.compute.provider_tree [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.555 243456 DEBUG nova.scheduler.client.report [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.597 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.599 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.608 243456 DEBUG nova.objects.instance [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lazy-loading 'migration_context' on Instance uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.623 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.623 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Ensure instance console log exists: /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.652 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.654 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.670 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.693 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.801 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.803 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.804 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Creating image(s)
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.829 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.857 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.880 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.885 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.916 243456 DEBUG nova.policy [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.921 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.921 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.922 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.922 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.922 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 WARNING nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state None.
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.924 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.924 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.924 243456 WARNING nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state None.
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.960 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.961 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.962 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.963 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.987 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:42 np0005634017 nova_compute[243452]: 2026-02-28 10:21:42.992 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0bafc3af-eadf-4d97-9acf-026c531362c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.203 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0bafc3af-eadf-4d97-9acf-026c531362c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.258 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Successfully created port: 247791be-e482-41d7-b078-7328138dd0ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.270 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.339 243456 DEBUG nova.objects.instance [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.354 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.354 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Ensure instance console log exists: /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.355 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.355 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.355 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.947 243456 DEBUG nova.network.neutron [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.947 243456 DEBUG nova.network.neutron [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.979 243456 DEBUG oslo_concurrency.lockutils [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:43 np0005634017 nova_compute[243452]: 2026-02-28 10:21:43.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.162 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Successfully created port: 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.326 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Successfully updated port: 247791be-e482-41d7-b078-7328138dd0ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.343 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.344 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquired lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.344 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:21:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 296 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 95 op/s
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.527 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.871 243456 DEBUG nova.compute.manager [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-changed-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.872 243456 DEBUG nova.compute.manager [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Refreshing instance network info cache due to event network-changed-247791be-e482-41d7-b078-7328138dd0ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:44 np0005634017 nova_compute[243452]: 2026-02-28 10:21:44.872 243456 DEBUG oslo_concurrency.lockutils [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:45 np0005634017 podman[334012]: 2026-02-28 10:21:45.160368151 +0000 UTC m=+0.085751166 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 28 05:21:45 np0005634017 podman[334011]: 2026-02-28 10:21:45.181611904 +0000 UTC m=+0.107421322 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:45 np0005634017 NetworkManager[49805]: <info>  [1772274105.2720] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Feb 28 05:21:45 np0005634017 NetworkManager[49805]: <info>  [1772274105.2728] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.303 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.332 243456 WARNING nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] While synchronizing instance power states, found 3 instances in the database and 1 instances on the hypervisor.#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid ec785d5e-9b62-4b52-a727-f64173b4b853 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.334 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.334 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.362 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.365 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.446 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Successfully updated port: 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.470 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.470 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.471 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:21:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:21:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602889676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:21:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:21:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602889676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.591 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updating instance_info_cache with network_info: [{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.615 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Releasing lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.616 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance network_info: |[{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.617 243456 DEBUG oslo_concurrency.lockutils [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.617 243456 DEBUG nova.network.neutron [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Refreshing network info cache for port 247791be-e482-41d7-b078-7328138dd0ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.623 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start _get_guest_xml network_info=[{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.629 243456 WARNING nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.634 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.635 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.638 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.639 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.640 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.641 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.641 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.642 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.642 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.643 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.643 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.644 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.644 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.644 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.645 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.645 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.650 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.686 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.696 243456 DEBUG nova.compute.manager [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.696 243456 DEBUG nova.compute.manager [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing instance network info cache due to event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:45 np0005634017 nova_compute[243452]: 2026-02-28 10:21:45.696 243456 DEBUG oslo_concurrency.lockutils [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513603687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.187 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.210 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.214 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 344 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 148 op/s
Feb 28 05:21:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022739211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.897 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.899 243456 DEBUG nova.virt.libvirt.vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1786122360',id=103,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='335faa1173e64cf8a7b107ae6238353d',ramdisk_id='',reservation_id='r-bbmf5pbr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-124
9930023',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:41Z,user_data=None,user_id='70e8f691ae0f4768bb68cd8d497033e8',uuid=6f73ef31-2ae0-4765-9f6c-fb732af2b3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.899 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converting VIF {"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.900 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.901 243456 DEBUG nova.objects.instance [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.916 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <uuid>6f73ef31-2ae0-4765-9f6c-fb732af2b3f4</uuid>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <name>instance-00000067</name>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1786122360</nova:name>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:21:45</nova:creationTime>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:user uuid="70e8f691ae0f4768bb68cd8d497033e8">tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member</nova:user>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:project uuid="335faa1173e64cf8a7b107ae6238353d">tempest-ServersNegativeTestMultiTenantJSON-1249930023</nova:project>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <nova:port uuid="247791be-e482-41d7-b078-7328138dd0ea">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <entry name="serial">6f73ef31-2ae0-4765-9f6c-fb732af2b3f4</entry>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <entry name="uuid">6f73ef31-2ae0-4765-9f6c-fb732af2b3f4</entry>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:d2:c3:14"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <target dev="tap247791be-e4"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/console.log" append="off"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:21:46 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:21:46 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:21:46 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:21:46 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.917 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Preparing to wait for external event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.918 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.918 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.918 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.919 243456 DEBUG nova.virt.libvirt.vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1786122360',id=103,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='335faa1173e64cf8a7b107ae6238353d',ramdisk_id='',reservation_id='r-bbmf5pbr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTena
ntJSON-1249930023',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:41Z,user_data=None,user_id='70e8f691ae0f4768bb68cd8d497033e8',uuid=6f73ef31-2ae0-4765-9f6c-fb732af2b3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.919 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converting VIF {"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.920 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.920 243456 DEBUG os_vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.921 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.921 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.926 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247791be-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.927 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap247791be-e4, col_values=(('external_ids', {'iface-id': '247791be-e482-41d7-b078-7328138dd0ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:c3:14', 'vm-uuid': '6f73ef31-2ae0-4765-9f6c-fb732af2b3f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:46 np0005634017 NetworkManager[49805]: <info>  [1772274106.9296] manager: (tap247791be-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.935 243456 INFO os_vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4')#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.984 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.984 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.984 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] No VIF found with MAC fa:16:3e:d2:c3:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:21:46 np0005634017 nova_compute[243452]: 2026-02-28 10:21:46.985 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Using config drive#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.009 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.370 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.401 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.402 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance network_info: |[{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.403 243456 DEBUG oslo_concurrency.lockutils [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.403 243456 DEBUG nova.network.neutron [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.406 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start _get_guest_xml network_info=[{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.412 243456 WARNING nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.417 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.418 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.424 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.424 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.425 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.425 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.426 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.426 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.427 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.427 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.427 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.428 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.428 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.428 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.429 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.429 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.432 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.532 243456 DEBUG nova.compute.manager [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.533 243456 DEBUG nova.compute.manager [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.533 243456 DEBUG oslo_concurrency.lockutils [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.534 243456 DEBUG oslo_concurrency.lockutils [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.534 243456 DEBUG nova.network.neutron [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.674 243456 DEBUG nova.network.neutron [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updated VIF entry in instance network info cache for port 247791be-e482-41d7-b078-7328138dd0ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.675 243456 DEBUG nova.network.neutron [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updating instance_info_cache with network_info: [{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.700 243456 DEBUG oslo_concurrency.lockutils [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.709 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Creating config drive at /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.713 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcxm3vk4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.855 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcxm3vk4i" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.882 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.888 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508671135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.967 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.993 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:21:47 np0005634017 nova_compute[243452]: 2026-02-28 10:21:47.999 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.065 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.066 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deleting local config drive /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config because it was imported into RBD.#033[00m
Feb 28 05:21:48 np0005634017 kernel: tap247791be-e4: entered promiscuous mode
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.1155] manager: (tap247791be-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Feb 28 05:21:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:48Z|01054|binding|INFO|Claiming lport 247791be-e482-41d7-b078-7328138dd0ea for this chassis.
Feb 28 05:21:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:48Z|01055|binding|INFO|247791be-e482-41d7-b078-7328138dd0ea: Claiming fa:16:3e:d2:c3:14 10.100.0.3
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:48Z|01056|binding|INFO|Setting lport 247791be-e482-41d7-b078-7328138dd0ea ovn-installed in OVS
Feb 28 05:21:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:48Z|01057|binding|INFO|Setting lport 247791be-e482-41d7-b078-7328138dd0ea up in Southbound
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.126 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c3:14 10.100.0.3'], port_security=['fa:16:3e:d2:c3:14 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6f73ef31-2ae0-4765-9f6c-fb732af2b3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10418c0f-a33e-4d93-99b1-462207fda43a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '335faa1173e64cf8a7b107ae6238353d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6bd5e14f-e535-467a-9991-a18d1756839a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b473e0e-3363-4c16-9aa6-bed6b246a535, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247791be-e482-41d7-b078-7328138dd0ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.128 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247791be-e482-41d7-b078-7328138dd0ea in datapath 10418c0f-a33e-4d93-99b1-462207fda43a bound to our chassis#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.130 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10418c0f-a33e-4d93-99b1-462207fda43a#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.147 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c620339-bb87-425a-8389-ed253dcec72a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.148 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10418c0f-a1 in ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.151 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10418c0f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[28733d90-f886-4f59-8c29-1ed09eb1a921]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 systemd-machined[209480]: New machine qemu-133-instance-00000067.
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.154 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd807ad9-5940-414d-9b22-68a7e65778be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 systemd[1]: Started Virtual Machine qemu-133-instance-00000067.
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.172 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc44bec-b837-4f51-ba0b-6f0b145e571c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.189 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff76dc5c-1c34-43bb-9c77-4a95544c5ce2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 systemd-udevd[334258]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.2171] device (tap247791be-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.2184] device (tap247791be-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.225 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[aaef5a4c-3fb4-49cf-bc82-8b795f738f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.2349] manager: (tap10418c0f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b368ee28-c12f-43af-a794-eee6689003b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.283 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5376905b-785f-407c-867b-66178be3b4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.287 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9381eb-a5ec-4d83-8cdb-327be6ad4ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.3118] device (tap10418c0f-a0): carrier: link connected
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.317 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7e74d424-0b93-48e7-bf0b-d37809ce2e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c89a2e37-79b6-4b81-995e-190a31568ae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10418c0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561559, 'reachable_time': 39554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334287, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.356 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac77c3-deff-403e-9da5-4ea6e1d01ff7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:9a0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561559, 'tstamp': 561559}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334288, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7b7263-42ab-4163-8234-13ad60d2a13d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10418c0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561559, 'reachable_time': 39554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334289, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 372 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc769de-b7bc-4ee2-87c8-c6cbbe16e0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48fa2315-0dad-4886-9b36-9ecfeaa5c39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10418c0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.454 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10418c0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.4564] manager: (tap10418c0f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Feb 28 05:21:48 np0005634017 kernel: tap10418c0f-a0: entered promiscuous mode
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.458 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10418c0f-a0, col_values=(('external_ids', {'iface-id': 'c380a886-9856-4ecb-b5cc-7df0476b3254'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:48Z|01058|binding|INFO|Releasing lport c380a886-9856-4ecb-b5cc-7df0476b3254 from this chassis (sb_readonly=0)
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.460 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10418c0f-a33e-4d93-99b1-462207fda43a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10418c0f-a33e-4d93-99b1-462207fda43a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5704a405-276b-4cc4-90df-e2135d6bfdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.466 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-10418c0f-a33e-4d93-99b1-462207fda43a
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/10418c0f-a33e-4d93-99b1-462207fda43a.pid.haproxy
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 10418c0f-a33e-4d93-99b1-462207fda43a
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:21:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.466 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'env', 'PROCESS_TAG=haproxy-10418c0f-a33e-4d93-99b1-462207fda43a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10418c0f-a33e-4d93-99b1-462207fda43a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:21:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:21:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517046644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.579 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.581 243456 DEBUG nova.virt.libvirt.vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=104,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-qp97324q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:42Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=0bafc3af-eadf-4d97-9acf-026c531362c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.581 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.582 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.583 243456 DEBUG nova.objects.instance [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.600 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <uuid>0bafc3af-eadf-4d97-9acf-026c531362c3</uuid>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <name>instance-00000068</name>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194</nova:name>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:21:47</nova:creationTime>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <nova:port uuid="09ffa25b-e3df-45c2-9db2-423ed33e2a28">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <entry name="serial">0bafc3af-eadf-4d97-9acf-026c531362c3</entry>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <entry name="uuid">0bafc3af-eadf-4d97-9acf-026c531362c3</entry>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0bafc3af-eadf-4d97-9acf-026c531362c3_disk">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:93:cc:93"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <target dev="tap09ffa25b-e3"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/console.log" append="off"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:21:48 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:21:48 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:21:48 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:21:48 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Preparing to wait for external event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.606 243456 DEBUG nova.virt.libvirt.vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=104,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-qp97324q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:42Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=0bafc3af-eadf-4d97-9acf-026c531362c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.606 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.607 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.607 243456 DEBUG os_vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.608 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.609 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.611 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ffa25b-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.611 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09ffa25b-e3, col_values=(('external_ids', {'iface-id': '09ffa25b-e3df-45c2-9db2-423ed33e2a28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:cc:93', 'vm-uuid': '0bafc3af-eadf-4d97-9acf-026c531362c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 NetworkManager[49805]: <info>  [1772274108.6135] manager: (tap09ffa25b-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.616 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.617 243456 INFO os_vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3')#033[00m
Feb 28 05:21:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.682 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.683 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.683 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:93:cc:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.684 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Using config drive
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.709 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:48 np0005634017 podman[334340]: 2026-02-28 10:21:48.855125314 +0000 UTC m=+0.055533784 container create 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:21:48 np0005634017 systemd[1]: Started libpod-conmon-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30.scope.
Feb 28 05:21:48 np0005634017 podman[334340]: 2026-02-28 10:21:48.826024054 +0000 UTC m=+0.026432534 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:21:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59a038730ab4b956f42ebaa25408510fb18b7aaf9726af2e166ee3c3b54acf3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:48 np0005634017 podman[334340]: 2026-02-28 10:21:48.961818384 +0000 UTC m=+0.162226894 container init 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 05:21:48 np0005634017 podman[334340]: 2026-02-28 10:21:48.965888371 +0000 UTC m=+0.166296851 container start 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:21:48 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : New worker (334362) forked
Feb 28 05:21:48 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : Loading success.
Feb 28 05:21:48 np0005634017 nova_compute[243452]: 2026-02-28 10:21:48.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.249 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274109.2485044, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.250 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Started (Lifecycle Event)
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.275 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.280 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274109.2498407, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.280 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Paused (Lifecycle Event)
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.307 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.310 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.378 243456 DEBUG nova.network.neutron [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated VIF entry in instance network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.379 243456 DEBUG nova.network.neutron [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.401 243456 DEBUG oslo_concurrency.lockutils [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.415 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Creating config drive at /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.418 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqu6i4s8c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.559 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqu6i4s8c" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.600 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.606 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.652 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.654 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.655 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.655 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.656 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Processing event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.657 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.658 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.659 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.659 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.660 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] No waiting events found dispatching network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.661 243456 WARNING nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received unexpected event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea for instance with vm_state building and task_state spawning.
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.663 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.668 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274109.6681738, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.669 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Resumed (Lifecycle Event)
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.671 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.676 243456 INFO nova.virt.libvirt.driver [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance spawned successfully.
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.677 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.695 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.701 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.704 243456 DEBUG nova.network.neutron [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.705 243456 DEBUG nova.network.neutron [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.709 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.710 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.711 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.711 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.712 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.712 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.746 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.748 243456 DEBUG oslo_concurrency.lockutils [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.786 243456 INFO nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 7.89 seconds to spawn the instance on the hypervisor.
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.787 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.788 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.789 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deleting local config drive /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config because it was imported into RBD.
Feb 28 05:21:49 np0005634017 kernel: tap09ffa25b-e3: entered promiscuous mode
Feb 28 05:21:49 np0005634017 NetworkManager[49805]: <info>  [1772274109.8539] manager: (tap09ffa25b-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Feb 28 05:21:49 np0005634017 systemd-udevd[334280]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:21:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:49Z|01059|binding|INFO|Claiming lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 for this chassis.
Feb 28 05:21:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:49Z|01060|binding|INFO|09ffa25b-e3df-45c2-9db2-423ed33e2a28: Claiming fa:16:3e:93:cc:93 10.100.0.9
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.870 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:93 10.100.0.9'], port_security=['fa:16:3e:93:cc:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0bafc3af-eadf-4d97-9acf-026c531362c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d9d1441-5ce1-4022-9f50-b5399f868b07 88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09ffa25b-e3df-45c2-9db2-423ed33e2a28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.872 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a bound to our chassis
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.875 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.878 243456 INFO nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 9.06 seconds to build instance.
Feb 28 05:21:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:49Z|01061|binding|INFO|Setting lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 ovn-installed in OVS
Feb 28 05:21:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:49Z|01062|binding|INFO|Setting lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 up in Southbound
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.882 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:21:49 np0005634017 NetworkManager[49805]: <info>  [1772274109.8913] device (tap09ffa25b-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:21:49 np0005634017 NetworkManager[49805]: <info>  [1772274109.8925] device (tap09ffa25b-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.898 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97f1baff-8639-49d4-bc3e-559397234197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.899 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.899 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3451a2ef-e1 in ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.899 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.900 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:21:49 np0005634017 nova_compute[243452]: 2026-02-28 10:21:49.900 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:49 np0005634017 systemd-machined[209480]: New machine qemu-134-instance-00000068.
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.902 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3451a2ef-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.902 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20b41717-7268-49c5-999c-ea0c8f803e88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1daf5e1-02d8-40c4-a008-c62fdbe272fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 systemd[1]: Started Virtual Machine qemu-134-instance-00000068.
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.916 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a385033f-dc7f-43cd-97eb-ce5136267f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b41aa9f-9d6b-48bf-8f35-a0e93ceed10f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.961 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b10713a5-d08b-4bea-93a6-290f14d512d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cb4fab-91e0-444d-9f2e-9984b5faf99d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:49 np0005634017 NetworkManager[49805]: <info>  [1772274109.9710] manager: (tap3451a2ef-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/447)
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.006 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc9639c-49a7-4632-8bc1-47ea4c416971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.009 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3613b634-0940-435e-959d-4add3302e8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 NetworkManager[49805]: <info>  [1772274110.0486] device (tap3451a2ef-e0): carrier: link connected
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.057 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17f25f50-abcf-4bf1-a50f-26b5462f081a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.076 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a32e26b-3cbc-4869-998e-4be63e3c6c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334485, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.096 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d474e820-fb6e-4768-9c49-d2768eb88ad3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:c2b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561733, 'tstamp': 561733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334486, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.114 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1660708-76ff-4f36-8589-f3ec2958ccc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334487, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.147 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9f9598-6b21-4a70-be0f-0ac43d916788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.216 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd17419-c5fb-495f-bb33-a533bd1cf59f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.220 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3451a2ef-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:50 np0005634017 kernel: tap3451a2ef-e0: entered promiscuous mode
Feb 28 05:21:50 np0005634017 NetworkManager[49805]: <info>  [1772274110.2245] manager: (tap3451a2ef-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.228 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3451a2ef-e0, col_values=(('external_ids', {'iface-id': '0715e649-02a0-4a60-88ea-663bf03161ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:50Z|01063|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.236 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3451a2ef-e97c-49df-813f-57c35ec0999a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3451a2ef-e97c-49df-813f-57c35ec0999a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e2234411-8442-4555-9b3d-2e39aac9df99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.238 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/3451a2ef-e97c-49df-813f-57c35ec0999a.pid.haproxy
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:21:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.239 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'env', 'PROCESS_TAG=haproxy-3451a2ef-e97c-49df-813f-57c35ec0999a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3451a2ef-e97c-49df-813f-57c35ec0999a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.382 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274110.3818536, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.382 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Started (Lifecycle Event)#033[00m
Feb 28 05:21:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 372 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.412 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.418 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274110.3838344, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.441 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.446 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:50 np0005634017 nova_compute[243452]: 2026-02-28 10:21:50.468 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:21:50 np0005634017 podman[334562]: 2026-02-28 10:21:50.645951913 +0000 UTC m=+0.064991507 container create b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:21:50 np0005634017 systemd[1]: Started libpod-conmon-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4.scope.
Feb 28 05:21:50 np0005634017 podman[334562]: 2026-02-28 10:21:50.611613972 +0000 UTC m=+0.030653646 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:21:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:21:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefe496d22c05c4788569ae7202c870c2d3e386af2fddcdb07f716cc705ac851/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:21:50 np0005634017 podman[334562]: 2026-02-28 10:21:50.738541826 +0000 UTC m=+0.157581450 container init b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:21:50 np0005634017 podman[334562]: 2026-02-28 10:21:50.745488386 +0000 UTC m=+0.164527980 container start b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 05:21:50 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : New worker (334583) forked
Feb 28 05:21:50 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : Loading success.
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.630 243456 DEBUG nova.compute.manager [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG nova.compute.manager [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG oslo_concurrency.lockutils [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG oslo_concurrency.lockutils [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG nova.network.neutron [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.790 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.790 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.791 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.791 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.791 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Processing event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.792 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.792 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.793 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.793 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.794 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] No waiting events found dispatching network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.794 243456 WARNING nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received unexpected event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:21:51 np0005634017 nova_compute[243452]: 2026-02-28 10:21:51.795 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.028 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.029 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274112.029148, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.029 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.034 243456 INFO nova.virt.libvirt.driver [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance spawned successfully.#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.035 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.051 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.056 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.056 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.056 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.057 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.057 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.057 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.062 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.085 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.125 243456 INFO nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 9.32 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.125 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.175 243456 INFO nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 10.80 seconds to build instance.#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.192 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.192 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.193 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:21:52 np0005634017 nova_compute[243452]: 2026-02-28 10:21:52.193 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 373 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Feb 28 05:21:53 np0005634017 nova_compute[243452]: 2026-02-28 10:21:53.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:53 np0005634017 nova_compute[243452]: 2026-02-28 10:21:53.866 243456 DEBUG nova.network.neutron [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:21:53 np0005634017 nova_compute[243452]: 2026-02-28 10:21:53.867 243456 DEBUG nova.network.neutron [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:53 np0005634017 nova_compute[243452]: 2026-02-28 10:21:53.883 243456 DEBUG oslo_concurrency.lockutils [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:53 np0005634017 nova_compute[243452]: 2026-02-28 10:21:53.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.239 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.240 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.240 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.241 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.241 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.242 243456 INFO nova.compute.manager [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Terminating instance#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.243 243456 DEBUG nova.compute.manager [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:21:54 np0005634017 kernel: tap247791be-e4 (unregistering): left promiscuous mode
Feb 28 05:21:54 np0005634017 NetworkManager[49805]: <info>  [1772274114.2883] device (tap247791be-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:54Z|01064|binding|INFO|Releasing lport 247791be-e482-41d7-b078-7328138dd0ea from this chassis (sb_readonly=0)
Feb 28 05:21:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:54Z|01065|binding|INFO|Setting lport 247791be-e482-41d7-b078-7328138dd0ea down in Southbound
Feb 28 05:21:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:54Z|01066|binding|INFO|Removing iface tap247791be-e4 ovn-installed in OVS
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.294 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.301 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c3:14 10.100.0.3'], port_security=['fa:16:3e:d2:c3:14 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6f73ef31-2ae0-4765-9f6c-fb732af2b3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10418c0f-a33e-4d93-99b1-462207fda43a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '335faa1173e64cf8a7b107ae6238353d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6bd5e14f-e535-467a-9991-a18d1756839a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b473e0e-3363-4c16-9aa6-bed6b246a535, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247791be-e482-41d7-b078-7328138dd0ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.303 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247791be-e482-41d7-b078-7328138dd0ea in datapath 10418c0f-a33e-4d93-99b1-462207fda43a unbound from our chassis#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.304 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10418c0f-a33e-4d93-99b1-462207fda43a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.306 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c04289b-8355-4445-928a-01d0b1e1a3d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.307 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a namespace which is not needed anymore#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000067.scope: Deactivated successfully.
Feb 28 05:21:54 np0005634017 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000067.scope: Consumed 5.622s CPU time.
Feb 28 05:21:54 np0005634017 systemd-machined[209480]: Machine qemu-133-instance-00000067 terminated.
Feb 28 05:21:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 374 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Feb 28 05:21:54 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : haproxy version is 2.8.14-c23fe91
Feb 28 05:21:54 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : path to executable is /usr/sbin/haproxy
Feb 28 05:21:54 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [WARNING]  (334360) : Exiting Master process...
Feb 28 05:21:54 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [WARNING]  (334360) : Exiting Master process...
Feb 28 05:21:54 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [ALERT]    (334360) : Current worker (334362) exited with code 143 (Terminated)
Feb 28 05:21:54 np0005634017 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [WARNING]  (334360) : All workers exited. Exiting... (0)
Feb 28 05:21:54 np0005634017 systemd[1]: libpod-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30.scope: Deactivated successfully.
Feb 28 05:21:54 np0005634017 podman[334618]: 2026-02-28 10:21:54.42711898 +0000 UTC m=+0.040182280 container died 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 05:21:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30-userdata-shm.mount: Deactivated successfully.
Feb 28 05:21:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c59a038730ab4b956f42ebaa25408510fb18b7aaf9726af2e166ee3c3b54acf3-merged.mount: Deactivated successfully.
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 podman[334618]: 2026-02-28 10:21:54.46488269 +0000 UTC m=+0.077945990 container cleanup 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:21:54 np0005634017 systemd[1]: libpod-conmon-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30.scope: Deactivated successfully.
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.472 243456 INFO nova.virt.libvirt.driver [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance destroyed successfully.#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.474 243456 DEBUG nova.objects.instance [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lazy-loading 'resources' on Instance uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.499 243456 DEBUG nova.virt.libvirt.vif [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1786122360',id=103,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='335faa1173e64cf8a7b107ae6238353d',ramdisk_id='',reservation_id='r-bbmf5pbr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:21:49Z,user_data=None,user_id='70e8f691ae0f4768bb68cd8d497033e8',uuid=6f73ef31-2ae0-4765-9f6c-fb732af2b3f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.500 243456 DEBUG nova.network.os_vif_util [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converting VIF {"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.501 243456 DEBUG nova.network.os_vif_util [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.501 243456 DEBUG os_vif [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.504 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247791be-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.509 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.512 243456 INFO os_vif [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4')#033[00m
Feb 28 05:21:54 np0005634017 podman[334656]: 2026-02-28 10:21:54.537025903 +0000 UTC m=+0.051176168 container remove 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d03bc25-be60-4859-bf56-bb231538f0d9]: (4, ('Sat Feb 28 10:21:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a (9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30)\n9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30\nSat Feb 28 10:21:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a (9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30)\n9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.543 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42ccef82-ce79-4ec3-b5b8-db30f37fa9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.544 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10418c0f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 kernel: tap10418c0f-a0: left promiscuous mode
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.556 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e37b1c2-8707-4b96-b221-dd198ae67202]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.570 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdc520e-bfb7-49a5-803c-999ebd9bda30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.571 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac3fcf7-ff62-4ff8-9d20-6592b2f10c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.585 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff94107c-9952-484b-b20f-8195b5f6ceec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561550, 'reachable_time': 18717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334690, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 systemd[1]: run-netns-ovnmeta\x2d10418c0f\x2da33e\x2d4d93\x2d99b1\x2d462207fda43a.mount: Deactivated successfully.
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.590 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:21:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.590 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeed348-57a2-466a-85fc-b394724bedd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.763 243456 INFO nova.virt.libvirt.driver [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deleting instance files /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_del#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.764 243456 INFO nova.virt.libvirt.driver [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deletion of /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_del complete#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.830 243456 INFO nova.compute.manager [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.831 243456 DEBUG oslo.service.loopingcall [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.831 243456 DEBUG nova.compute.manager [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:21:54 np0005634017 nova_compute[243452]: 2026-02-28 10:21:54.832 243456 DEBUG nova.network.neutron [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.471 243456 DEBUG nova.compute.manager [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-unplugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.472 243456 DEBUG oslo_concurrency.lockutils [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.472 243456 DEBUG oslo_concurrency.lockutils [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.473 243456 DEBUG oslo_concurrency.lockutils [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.473 243456 DEBUG nova.compute.manager [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] No waiting events found dispatching network-vif-unplugged-247791be-e482-41d7-b078-7328138dd0ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.473 243456 DEBUG nova.compute.manager [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-unplugged-247791be-e482-41d7-b078-7328138dd0ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.546 243456 DEBUG nova.compute.manager [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.547 243456 DEBUG nova.compute.manager [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing instance network info cache due to event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.547 243456 DEBUG oslo_concurrency.lockutils [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.547 243456 DEBUG oslo_concurrency.lockutils [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:21:55 np0005634017 nova_compute[243452]: 2026-02-28 10:21:55.548 243456 DEBUG nova.network.neutron [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.044 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.044 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.045 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.045 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.045 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.047 243456 INFO nova.compute.manager [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Terminating instance#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.048 243456 DEBUG nova.compute.manager [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:21:56 np0005634017 kernel: tapa920b0c3-c6 (unregistering): left promiscuous mode
Feb 28 05:21:56 np0005634017 NetworkManager[49805]: <info>  [1772274116.1230] device (tapa920b0c3-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:56Z|01067|binding|INFO|Releasing lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 from this chassis (sb_readonly=0)
Feb 28 05:21:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:56Z|01068|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 down in Southbound
Feb 28 05:21:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:21:56Z|01069|binding|INFO|Removing iface tapa920b0c3-c6 ovn-installed in OVS
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.148 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:21:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 unbound from our chassis#033[00m
Feb 28 05:21:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.150 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:21:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[420ec822-fd03-47dd-bddc-2200a19be7f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:21:56 np0005634017 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000066.scope: Deactivated successfully.
Feb 28 05:21:56 np0005634017 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000066.scope: Consumed 12.035s CPU time.
Feb 28 05:21:56 np0005634017 systemd-machined[209480]: Machine qemu-132-instance-00000066 terminated.
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.283 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance destroyed successfully.#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.284 243456 DEBUG nova.objects.instance [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'resources' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.297 243456 DEBUG nova.virt.libvirt.vif [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:21:34Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.298 243456 DEBUG nova.network.os_vif_util [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.299 243456 DEBUG nova.network.os_vif_util [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.299 243456 DEBUG os_vif [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.302 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920b0c3-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.304 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.306 243456 INFO os_vif [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6')#033[00m
Feb 28 05:21:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 361 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 226 op/s
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.414 243456 DEBUG nova.network.neutron [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.589 243456 DEBUG nova.network.neutron [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated VIF entry in instance network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.590 243456 DEBUG nova.network.neutron [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.749 243456 INFO nova.compute.manager [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 1.92 seconds to deallocate network for instance.#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.854 243456 DEBUG oslo_concurrency.lockutils [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.866 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.867 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.905 243456 INFO nova.virt.libvirt.driver [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deleting instance files /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853_del#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.906 243456 INFO nova.virt.libvirt.driver [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deletion of /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853_del complete#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.980 243456 INFO nova.compute.manager [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.980 243456 DEBUG oslo.service.loopingcall [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.981 243456 DEBUG nova.compute.manager [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.982 243456 DEBUG nova.network.neutron [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:21:56 np0005634017 nova_compute[243452]: 2026-02-28 10:21:56.998 243456 DEBUG oslo_concurrency.processutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790989969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.536 243456 DEBUG oslo_concurrency.processutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.544 243456 DEBUG nova.compute.provider_tree [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.567 243456 DEBUG nova.scheduler.client.report [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.608 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.609 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.609 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.610 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.610 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] No waiting events found dispatching network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.611 243456 WARNING nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received unexpected event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.611 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-deleted-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.612 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.612 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.612 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.613 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.613 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.614 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.614 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.615 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.615 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.615 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.616 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.616 243456 WARNING nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state deleting.#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.619 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.649 243456 INFO nova.scheduler.client.report [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Deleted allocations for instance 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4#033[00m
Feb 28 05:21:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:21:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:57 np0005634017 nova_compute[243452]: 2026-02-28 10:21:57.950 243456 DEBUG nova.network.neutron [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:21:58 np0005634017 nova_compute[243452]: 2026-02-28 10:21:58.048 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 310 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 226 op/s
Feb 28 05:21:58 np0005634017 nova_compute[243452]: 2026-02-28 10:21:58.435 243456 INFO nova.compute.manager [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 1.45 seconds to deallocate network for instance.#033[00m
Feb 28 05:21:58 np0005634017 nova_compute[243452]: 2026-02-28 10:21:58.649 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:21:58 np0005634017 nova_compute[243452]: 2026-02-28 10:21:58.649 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:21:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:21:58 np0005634017 nova_compute[243452]: 2026-02-28 10:21:58.725 243456 DEBUG oslo_concurrency.processutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:21:58 np0005634017 nova_compute[243452]: 2026-02-28 10:21:58.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:21:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:21:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352467888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.301 243456 DEBUG oslo_concurrency.processutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.307 243456 DEBUG nova.compute.provider_tree [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.338 243456 DEBUG nova.scheduler.client.report [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.433 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.486 243456 INFO nova.scheduler.client.report [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Deleted allocations for instance ec785d5e-9b62-4b52-a727-f64173b4b853#033[00m
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.599 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:21:59 np0005634017 nova_compute[243452]: 2026-02-28 10:21:59.698 243456 DEBUG nova.compute.manager [req-c71425bc-f088-4458-8323-a5cbd0315b3d req-df6ab0fa-6596-4949-b8ed-3a0cf85c9dd3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-deleted-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:22:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 244 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 218 op/s
Feb 28 05:22:01 np0005634017 nova_compute[243452]: 2026-02-28 10:22:01.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 200 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 224 op/s
Feb 28 05:22:02 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:02Z|01070|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 05:22:02 np0005634017 nova_compute[243452]: 2026-02-28 10:22:02.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:03Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:cc:93 10.100.0.9
Feb 28 05:22:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:03Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:cc:93 10.100.0.9
Feb 28 05:22:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:03 np0005634017 nova_compute[243452]: 2026-02-28 10:22:03.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 200 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 37 KiB/s wr, 211 op/s
Feb 28 05:22:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:04Z|01071|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 05:22:04 np0005634017 nova_compute[243452]: 2026-02-28 10:22:04.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:06 np0005634017 nova_compute[243452]: 2026-02-28 10:22:06.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 215 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Feb 28 05:22:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:06Z|01072|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 05:22:06 np0005634017 nova_compute[243452]: 2026-02-28 10:22:06.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 233 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 28 05:22:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:08 np0005634017 nova_compute[243452]: 2026-02-28 10:22:08.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:09 np0005634017 nova_compute[243452]: 2026-02-28 10:22:09.468 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274114.4671314, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:09 np0005634017 nova_compute[243452]: 2026-02-28 10:22:09.468 243456 INFO nova.compute.manager [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:22:09 np0005634017 nova_compute[243452]: 2026-02-28 10:22:09.503 243456 DEBUG nova.compute.manager [None req-f8e8eadc-f14a-40a3-a09a-bdc18503ff74 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 28 05:22:11 np0005634017 nova_compute[243452]: 2026-02-28 10:22:11.288 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274116.2818878, ec785d5e-9b62-4b52-a727-f64173b4b853 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:11 np0005634017 nova_compute[243452]: 2026-02-28 10:22:11.290 243456 INFO nova.compute.manager [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:22:11 np0005634017 nova_compute[243452]: 2026-02-28 10:22:11.312 243456 DEBUG nova.compute.manager [None req-6dc4cd3c-8b3b-4b0c-ade5-34a4a82fd322 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:11 np0005634017 nova_compute[243452]: 2026-02-28 10:22:11.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:12 np0005634017 nova_compute[243452]: 2026-02-28 10:22:12.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Feb 28 05:22:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:14 np0005634017 nova_compute[243452]: 2026-02-28 10:22:13.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 05:22:15 np0005634017 nova_compute[243452]: 2026-02-28 10:22:15.414 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:16 np0005634017 podman[334773]: 2026-02-28 10:22:16.132989566 +0000 UTC m=+0.061783924 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 28 05:22:16 np0005634017 podman[334772]: 2026-02-28 10:22:16.163555848 +0000 UTC m=+0.097343651 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.716 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.717 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.734 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.825 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.826 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.839 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.840 243456 INFO nova.compute.claims [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:22:16 np0005634017 nova_compute[243452]: 2026-02-28 10:22:16.956 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279325425' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.518 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.528 243456 DEBUG nova.compute.provider_tree [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.550 243456 DEBUG nova.scheduler.client.report [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.578 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.579 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.652 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.653 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.679 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.698 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.833 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.835 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.835 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Creating image(s)#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.861 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.889 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.921 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:17 np0005634017 nova_compute[243452]: 2026-02-28 10:22:17.926 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.000 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.002 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.003 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.004 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.042 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.049 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a425770-67d6-411f-9586-1977cbc678ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.194 243456 DEBUG nova.policy [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.319 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a425770-67d6-411f-9586-1977cbc678ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.389 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:22:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 968 KiB/s wr, 23 op/s
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.479 243456 DEBUG nova.objects.instance [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a425770-67d6-411f-9586-1977cbc678ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.499 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.500 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Ensure instance console log exists: /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.501 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.501 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.501 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:18 np0005634017 nova_compute[243452]: 2026-02-28 10:22:18.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:18.579 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:18.581 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:22:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:19 np0005634017 nova_compute[243452]: 2026-02-28 10:22:19.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:19 np0005634017 nova_compute[243452]: 2026-02-28 10:22:19.033 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Successfully created port: 8356f577-07af-4575-b9ba-e2764b155dcc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:22:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 272 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.449 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Successfully updated port: 8356f577-07af-4575-b9ba-e2764b155dcc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.469 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.470 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.470 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.482 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.483 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.501 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.573 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.573 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.582 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.582 243456 INFO nova.compute.claims [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.591 243456 DEBUG nova.compute.manager [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-changed-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.592 243456 DEBUG nova.compute.manager [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Refreshing instance network info cache due to event network-changed-8356f577-07af-4575-b9ba-e2764b155dcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.593 243456 DEBUG oslo_concurrency.lockutils [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.731 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:22:20 np0005634017 nova_compute[243452]: 2026-02-28 10:22:20.849 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014086928' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.441 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.448 243456 DEBUG nova.compute.provider_tree [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.462 243456 DEBUG nova.scheduler.client.report [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.493 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.494 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.550 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.551 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.585 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.603 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.693 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.694 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.695 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Creating image(s)#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.716 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.739 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.762 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.766 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.826 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.827 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.827 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.828 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.848 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.854 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.924 243456 DEBUG nova.policy [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:22:21 np0005634017 nova_compute[243452]: 2026-02-28 10:22:21.993 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updating instance_info_cache with network_info: [{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.011 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.012 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance network_info: |[{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.013 243456 DEBUG oslo_concurrency.lockutils [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.014 243456 DEBUG nova.network.neutron [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Refreshing network info cache for port 8356f577-07af-4575-b9ba-e2764b155dcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.016 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start _get_guest_xml network_info=[{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.025 243456 WARNING nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.040 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.042 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.045 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.045 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.048 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.048 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.048 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.052 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.111 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.203 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.281 243456 DEBUG nova.objects.instance [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.303 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.303 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Ensure instance console log exists: /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.304 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.304 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.304 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 272 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Feb 28 05:22:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:22:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333438576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.605 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.631 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.637 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:22 np0005634017 nova_compute[243452]: 2026-02-28 10:22:22.787 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Successfully created port: 53819bfb-ebe3-4956-8f91-805dd04b5954 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:22:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:22:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846463003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.241 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.243 243456 DEBUG nova.virt.libvirt.vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=105,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-xny8xf69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:17Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=3a425770-67d6-411f-9586-1977cbc678ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.243 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.244 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.246 243456 DEBUG nova.objects.instance [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a425770-67d6-411f-9586-1977cbc678ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.259 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <uuid>3a425770-67d6-411f-9586-1977cbc678ed</uuid>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <name>instance-00000069</name>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208</nova:name>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:22:22</nova:creationTime>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <nova:port uuid="8356f577-07af-4575-b9ba-e2764b155dcc">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <entry name="serial">3a425770-67d6-411f-9586-1977cbc678ed</entry>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <entry name="uuid">3a425770-67d6-411f-9586-1977cbc678ed</entry>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/3a425770-67d6-411f-9586-1977cbc678ed_disk">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/3a425770-67d6-411f-9586-1977cbc678ed_disk.config">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:bc:ac:78"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <target dev="tap8356f577-07"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/console.log" append="off"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:22:23 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:22:23 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:22:23 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:22:23 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.260 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Preparing to wait for external event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.260 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.260 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.261 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.261 243456 DEBUG nova.virt.libvirt.vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=105,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-xny8xf69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:17Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=3a425770-67d6-411f-9586-1977cbc678ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.262 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.262 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.263 243456 DEBUG os_vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.264 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.264 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.268 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8356f577-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.269 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8356f577-07, col_values=(('external_ids', {'iface-id': '8356f577-07af-4575-b9ba-e2764b155dcc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:ac:78', 'vm-uuid': '3a425770-67d6-411f-9586-1977cbc678ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:23 np0005634017 NetworkManager[49805]: <info>  [1772274143.2725] manager: (tap8356f577-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.278 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.278 243456 INFO os_vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07')#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.329 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.330 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.330 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:bc:ac:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.330 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Using config drive#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.351 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.634 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Successfully updated port: 53819bfb-ebe3-4956-8f91-805dd04b5954 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.652 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.652 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.653 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:22:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:23 np0005634017 nova_compute[243452]: 2026-02-28 10:22:23.969 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.003 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.081 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Creating config drive at /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.089 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdqozpj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.177 243456 DEBUG nova.network.neutron [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updated VIF entry in instance network info cache for port 8356f577-07af-4575-b9ba-e2764b155dcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.178 243456 DEBUG nova.network.neutron [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updating instance_info_cache with network_info: [{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.201 243456 DEBUG oslo_concurrency.lockutils [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.239 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdqozpj7" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.284 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.291 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config 3a425770-67d6-411f-9586-1977cbc678ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.376 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 302 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.445 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config 3a425770-67d6-411f-9586-1977cbc678ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.446 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deleting local config drive /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config because it was imported into RBD.#033[00m
Feb 28 05:22:24 np0005634017 kernel: tap8356f577-07: entered promiscuous mode
Feb 28 05:22:24 np0005634017 NetworkManager[49805]: <info>  [1772274144.4889] manager: (tap8356f577-07): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Feb 28 05:22:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:24Z|01073|binding|INFO|Claiming lport 8356f577-07af-4575-b9ba-e2764b155dcc for this chassis.
Feb 28 05:22:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:24Z|01074|binding|INFO|8356f577-07af-4575-b9ba-e2764b155dcc: Claiming fa:16:3e:bc:ac:78 10.100.0.14
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.498 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:ac:78 10.100.0.14'], port_security=['fa:16:3e:bc:ac:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a425770-67d6-411f-9586-1977cbc678ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8356f577-07af-4575-b9ba-e2764b155dcc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:24Z|01075|binding|INFO|Setting lport 8356f577-07af-4575-b9ba-e2764b155dcc ovn-installed in OVS
Feb 28 05:22:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:24Z|01076|binding|INFO|Setting lport 8356f577-07af-4575-b9ba-e2764b155dcc up in Southbound
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.500 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8356f577-07af-4575-b9ba-e2764b155dcc in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a bound to our chassis#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.502 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3451a2ef-e97c-49df-813f-57c35ec0999a#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:24 np0005634017 systemd-udevd[335329]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.521 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f586dfd-a902-42c4-9397-b88e6cee832e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:24 np0005634017 systemd-machined[209480]: New machine qemu-135-instance-00000069.
Feb 28 05:22:24 np0005634017 NetworkManager[49805]: <info>  [1772274144.5341] device (tap8356f577-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:22:24 np0005634017 NetworkManager[49805]: <info>  [1772274144.5348] device (tap8356f577-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:22:24 np0005634017 systemd[1]: Started Virtual Machine qemu-135-instance-00000069.
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.551 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61d2ff74-159a-47e3-bddd-51d878d7f4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.555 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb355185-1658-4dd6-9875-e527ba81c781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.580 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eea5eebf-fab3-4984-95f1-59be5d9437b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.603 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3336b1ea-b1c2-48ca-8245-4907b1995e6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335342, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cdb91b-5aea-41c8-9954-1c4e6d4c3216]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561746, 'tstamp': 561746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335344, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561749, 'tstamp': 561749}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335344, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.624 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.627 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3451a2ef-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3451a2ef-e0, col_values=(('external_ids', {'iface-id': '0715e649-02a0-4a60-88ea-663bf03161ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.663 243456 DEBUG nova.compute.manager [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-changed-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.664 243456 DEBUG nova.compute.manager [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Refreshing instance network info cache due to event network-changed-53819bfb-ebe3-4956-8f91-805dd04b5954. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.664 243456 DEBUG oslo_concurrency.lockutils [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.774 243456 DEBUG nova.compute.manager [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.775 243456 DEBUG oslo_concurrency.lockutils [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.776 243456 DEBUG oslo_concurrency.lockutils [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.778 243456 DEBUG oslo_concurrency.lockutils [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:24 np0005634017 nova_compute[243452]: 2026-02-28 10:22:24.779 243456 DEBUG nova.compute.manager [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Processing event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.314 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.342 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.343 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance network_info: |[{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.345 243456 DEBUG oslo_concurrency.lockutils [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.345 243456 DEBUG nova.network.neutron [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Refreshing network info cache for port 53819bfb-ebe3-4956-8f91-805dd04b5954 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.348 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start _get_guest_xml network_info=[{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.355 243456 WARNING nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.364 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.366 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.378 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.378 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.380 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.380 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.381 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.382 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.383 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.383 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.384 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.384 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.385 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.385 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.386 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.387 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.392 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.460 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.461 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274145.461376, 3a425770-67d6-411f-9586-1977cbc678ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Started (Lifecycle Event)#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.471 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.476 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance spawned successfully.#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.477 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.491 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.506 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.512 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.512 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.513 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.514 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.515 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.516 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.553 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.554 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274145.4615405, 3a425770-67d6-411f-9586-1977cbc678ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.554 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:22:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:25.583 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.591 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.595 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274145.4706702, 3a425770-67d6-411f-9586-1977cbc678ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.595 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.602 243456 INFO nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.603 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.612 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.617 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.645 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.693 243456 INFO nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 8.91 seconds to build instance.#033[00m
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.714 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:22:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198608139' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:22:25 np0005634017 nova_compute[243452]: 2026-02-28 10:22:25.971 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.004 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.012 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Feb 28 05:22:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:22:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565571775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.567 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.569 243456 DEBUG nova.virt.libvirt.vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-168997413',display_name='tempest-₡-168997413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--168997413',id=106,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-9boikcvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:21Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=0d4ce277-1bbb-4926-a7ee-30f5df57fff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.570 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.570 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.572 243456 DEBUG nova.objects.instance [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.596 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <uuid>0d4ce277-1bbb-4926-a7ee-30f5df57fff9</uuid>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <name>instance-0000006a</name>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:name>tempest-₡-168997413</nova:name>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:22:25</nova:creationTime>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <nova:port uuid="53819bfb-ebe3-4956-8f91-805dd04b5954">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <entry name="serial">0d4ce277-1bbb-4926-a7ee-30f5df57fff9</entry>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <entry name="uuid">0d4ce277-1bbb-4926-a7ee-30f5df57fff9</entry>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e5:f2:95"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <target dev="tap53819bfb-eb"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/console.log" append="off"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:22:26 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:22:26 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:22:26 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:22:26 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.596 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Preparing to wait for external event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.596 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.597 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.597 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.597 243456 DEBUG nova.virt.libvirt.vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-168997413',display_name='tempest-₡-168997413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--168997413',id=106,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-9boikcvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,tru
sted_certs=None,updated_at=2026-02-28T10:22:21Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=0d4ce277-1bbb-4926-a7ee-30f5df57fff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.598 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.598 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.599 243456 DEBUG os_vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.600 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.600 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53819bfb-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53819bfb-eb, col_values=(('external_ids', {'iface-id': '53819bfb-ebe3-4956-8f91-805dd04b5954', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:f2:95', 'vm-uuid': '0d4ce277-1bbb-4926-a7ee-30f5df57fff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:26 np0005634017 NetworkManager[49805]: <info>  [1772274146.6070] manager: (tap53819bfb-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.614 243456 INFO os_vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb')#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.667 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.668 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.668 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:e5:f2:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.669 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Using config drive#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.691 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.980 243456 DEBUG nova.compute.manager [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.981 243456 DEBUG oslo_concurrency.lockutils [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.982 243456 DEBUG oslo_concurrency.lockutils [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.983 243456 DEBUG oslo_concurrency.lockutils [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.984 243456 DEBUG nova.compute.manager [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] No waiting events found dispatching network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:22:26 np0005634017 nova_compute[243452]: 2026-02-28 10:22:26.985 243456 WARNING nova.compute.manager [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received unexpected event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc for instance with vm_state active and task_state None.#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.236 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Creating config drive at /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.240 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpofek_agn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.387 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpofek_agn" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.422 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.428 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.592 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.593 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deleting local config drive /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config because it was imported into RBD.#033[00m
Feb 28 05:22:27 np0005634017 NetworkManager[49805]: <info>  [1772274147.6398] manager: (tap53819bfb-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Feb 28 05:22:27 np0005634017 kernel: tap53819bfb-eb: entered promiscuous mode
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:27Z|01077|binding|INFO|Claiming lport 53819bfb-ebe3-4956-8f91-805dd04b5954 for this chassis.
Feb 28 05:22:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:27Z|01078|binding|INFO|53819bfb-ebe3-4956-8f91-805dd04b5954: Claiming fa:16:3e:e5:f2:95 10.100.0.9
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.652 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:f2:95 10.100.0.9'], port_security=['fa:16:3e:e5:f2:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0d4ce277-1bbb-4926-a7ee-30f5df57fff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=53819bfb-ebe3-4956-8f91-805dd04b5954) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.654 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 53819bfb-ebe3-4956-8f91-805dd04b5954 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.656 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:22:27 np0005634017 systemd-machined[209480]: New machine qemu-136-instance-0000006a.
Feb 28 05:22:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:27Z|01079|binding|INFO|Setting lport 53819bfb-ebe3-4956-8f91-805dd04b5954 ovn-installed in OVS
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c021d8cb-2c47-4e3b-aec8-4942926ceac2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.673 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ec4804c-41 in ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:22:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:27Z|01080|binding|INFO|Setting lport 53819bfb-ebe3-4956-8f91-805dd04b5954 up in Southbound
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.676 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ec4804c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.676 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a8e702-481d-43c0-acb6-1d31a4d40140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9714133c-1ebe-4e9c-a044-07adfcd6e2ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 systemd[1]: Started Virtual Machine qemu-136-instance-0000006a.
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:27 np0005634017 systemd-udevd[335525]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.694 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e5bccdfe-db5c-40e1-a112-0d3043421ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 NetworkManager[49805]: <info>  [1772274147.7004] device (tap53819bfb-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:22:27 np0005634017 NetworkManager[49805]: <info>  [1772274147.7011] device (tap53819bfb-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45eeb1b7-feae-41d0-bf1b-db106cd169f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.741 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fde387-3dc6-4f84-b6a1-727e4fb1d1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 NetworkManager[49805]: <info>  [1772274147.7495] manager: (tap7ec4804c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.750 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee98d370-f922-4440-8487-c74bcfe3b2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.779 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[38861613-eb0b-4320-bc1a-50212543c9d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.784 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b8eeff6e-0871-4abd-8285-f35b2aec5bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 NetworkManager[49805]: <info>  [1772274147.8050] device (tap7ec4804c-40): carrier: link connected
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.813 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e704911c-0806-4c00-b3a5-3831d6fac8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.830 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5748bb7c-870a-482b-8c46-a6bef4a33b66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335556, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.843 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b74341-2043-4833-b300-11d88269df5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:7196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565509, 'tstamp': 565509}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335557, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[080169ba-939e-48fa-a2b3-c5abe456dba3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335558, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.894 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[25654d04-ae03-46fe-ba48-23777eeabae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e7f248-b57c-4464-8483-15d6a91da36c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.958 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.959 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:27 np0005634017 kernel: tap7ec4804c-40: entered promiscuous mode
Feb 28 05:22:27 np0005634017 NetworkManager[49805]: <info>  [1772274147.9603] manager: (tap7ec4804c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.963 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:27Z|01081|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.966 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ec4804c-4a13-485a-9300-db6edf74473b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ec4804c-4a13-485a-9300-db6edf74473b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbe8d73-1298-4eaf-94c8-6ee5575984b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.967 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/7ec4804c-4a13-485a-9300-db6edf74473b.pid.haproxy
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:22:27 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.969 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'env', 'PROCESS_TAG=haproxy-7ec4804c-4a13-485a-9300-db6edf74473b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ec4804c-4a13-485a-9300-db6edf74473b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:22:27 np0005634017 nova_compute[243452]: 2026-02-28 10:22:27.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.043 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274148.0424592, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.043 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Started (Lifecycle Event)#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.094 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.099 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274148.0427306, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.099 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.133 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.136 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.159 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:28 np0005634017 podman[335648]: 2026-02-28 10:22:28.386659769 +0000 UTC m=+0.049838930 container create bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:22:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Feb 28 05:22:28 np0005634017 systemd[1]: Started libpod-conmon-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334.scope.
Feb 28 05:22:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86c5f2ef6c8614823a87cf49a92bf5ddc50b5f72de34d838a1adf35e0b322e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:28 np0005634017 podman[335648]: 2026-02-28 10:22:28.452385076 +0000 UTC m=+0.115564257 container init bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:22:28 np0005634017 podman[335648]: 2026-02-28 10:22:28.361643117 +0000 UTC m=+0.024822298 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:22:28 np0005634017 podman[335648]: 2026-02-28 10:22:28.456658919 +0000 UTC m=+0.119838080 container start bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 05:22:28 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : New worker (335701) forked
Feb 28 05:22:28 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : Loading success.
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:22:28 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.968 243456 DEBUG nova.network.neutron [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updated VIF entry in instance network info cache for port 53819bfb-ebe3-4956-8f91-805dd04b5954. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.969 243456 DEBUG nova.network.neutron [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:22:28 np0005634017 nova_compute[243452]: 2026-02-28 10:22:28.987 243456 DEBUG oslo_concurrency.lockutils [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.098 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.098 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.099 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.099 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.099 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Processing event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.101 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] No waiting events found dispatching network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.101 243456 WARNING nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received unexpected event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 for instance with vm_state building and task_state spawning.
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.101 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:22:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:22:29
Feb 28 05:22:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:22:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:22:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Feb 28 05:22:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.107 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274149.1070514, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.108 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Resumed (Lifecycle Event)
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.110 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.123 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.124 243456 INFO nova.virt.libvirt.driver [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance spawned successfully.
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.124 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.128 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.154 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.164 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.165 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.165 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.166 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.166 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.167 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.236 243456 INFO nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 7.54 seconds to spawn the instance on the hypervisor.
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.237 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.292193785 +0000 UTC m=+0.034742554 container create 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.306 243456 INFO nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 8.76 seconds to build instance.
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.326 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:29 np0005634017 systemd[1]: Started libpod-conmon-6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93.scope.
Feb 28 05:22:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.275290277 +0000 UTC m=+0.017839066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.37725322 +0000 UTC m=+0.119801999 container init 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.382280115 +0000 UTC m=+0.124828874 container start 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.385318593 +0000 UTC m=+0.127867362 container attach 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:22:29 np0005634017 focused_leavitt[335817]: 167 167
Feb 28 05:22:29 np0005634017 systemd[1]: libpod-6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93.scope: Deactivated successfully.
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.387559917 +0000 UTC m=+0.130108716 container died 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:22:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3973c938799c586d1d75d102f0da4b0db114429b12ceb7dfcd3f69947a623146-merged.mount: Deactivated successfully.
Feb 28 05:22:29 np0005634017 podman[335801]: 2026-02-28 10:22:29.421534358 +0000 UTC m=+0.164083127 container remove 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:22:29 np0005634017 systemd[1]: libpod-conmon-6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93.scope: Deactivated successfully.
Feb 28 05:22:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:22:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:22:29 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.584 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.586 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.587 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 05:22:29 np0005634017 nova_compute[243452]: 2026-02-28 10:22:29.587 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:22:29 np0005634017 podman[335840]: 2026-02-28 10:22:29.59172684 +0000 UTC m=+0.040195071 container create 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:22:29 np0005634017 systemd[1]: Started libpod-conmon-7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0.scope.
Feb 28 05:22:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:29 np0005634017 podman[335840]: 2026-02-28 10:22:29.572869656 +0000 UTC m=+0.021337887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:22:29 np0005634017 podman[335840]: 2026-02-28 10:22:29.680194734 +0000 UTC m=+0.128662955 container init 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 05:22:29 np0005634017 podman[335840]: 2026-02-28 10:22:29.691274664 +0000 UTC m=+0.139742895 container start 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:22:29 np0005634017 podman[335840]: 2026-02-28 10:22:29.694409504 +0000 UTC m=+0.142877715 container attach 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:22:30 np0005634017 unruffled_bhaskara[335856]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:22:30 np0005634017 unruffled_bhaskara[335856]: --> All data devices are unavailable
Feb 28 05:22:30 np0005634017 systemd[1]: libpod-7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0.scope: Deactivated successfully.
Feb 28 05:22:30 np0005634017 podman[335840]: 2026-02-28 10:22:30.201612034 +0000 UTC m=+0.650080265 container died 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:22:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1-merged.mount: Deactivated successfully.
Feb 28 05:22:30 np0005634017 podman[335840]: 2026-02-28 10:22:30.245627564 +0000 UTC m=+0.694095785 container remove 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:22:30 np0005634017 systemd[1]: libpod-conmon-7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0.scope: Deactivated successfully.
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:22:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.770885175 +0000 UTC m=+0.063203095 container create 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:22:30 np0005634017 systemd[1]: Started libpod-conmon-27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48.scope.
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.741731714 +0000 UTC m=+0.034049684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:22:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.858563516 +0000 UTC m=+0.150881426 container init 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.869537223 +0000 UTC m=+0.161855113 container start 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:22:30 np0005634017 funny_wu[335962]: 167 167
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.874274679 +0000 UTC m=+0.166592589 container attach 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:22:30 np0005634017 systemd[1]: libpod-27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48.scope: Deactivated successfully.
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.876894075 +0000 UTC m=+0.169212005 container died 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:22:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0f124b158eb271d4de62e4aa5cf6b489e62fc75853436be914500f15232f0be1-merged.mount: Deactivated successfully.
Feb 28 05:22:30 np0005634017 podman[335946]: 2026-02-28 10:22:30.926240509 +0000 UTC m=+0.218558439 container remove 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:22:30 np0005634017 systemd[1]: libpod-conmon-27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48.scope: Deactivated successfully.
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.083584431 +0000 UTC m=+0.040984894 container create 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:22:31 np0005634017 systemd[1]: Started libpod-conmon-84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d.scope.
Feb 28 05:22:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.064275034 +0000 UTC m=+0.021675507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.177627295 +0000 UTC m=+0.135027818 container init 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.186682607 +0000 UTC m=+0.144083060 container start 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.193252256 +0000 UTC m=+0.150652799 container attach 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.264 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.287 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.288 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.355 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.355 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]: {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:    "0": [
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:        {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "devices": [
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "/dev/loop3"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            ],
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_name": "ceph_lv0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_size": "21470642176",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "name": "ceph_lv0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "tags": {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cluster_name": "ceph",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.crush_device_class": "",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.encrypted": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.objectstore": "bluestore",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osd_id": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.type": "block",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.vdo": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.with_tpm": "0"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            },
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "type": "block",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "vg_name": "ceph_vg0"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:        }
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:    ],
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:    "1": [
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:        {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "devices": [
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "/dev/loop4"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            ],
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_name": "ceph_lv1",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_size": "21470642176",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "name": "ceph_lv1",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "tags": {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cluster_name": "ceph",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.crush_device_class": "",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.encrypted": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.objectstore": "bluestore",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osd_id": "1",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.type": "block",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.vdo": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.with_tpm": "0"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            },
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "type": "block",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "vg_name": "ceph_vg1"
Feb 28 05:22:31 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:        }
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:    ],
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:    "2": [
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:        {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "devices": [
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "/dev/loop5"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            ],
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_name": "ceph_lv2",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_size": "21470642176",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "name": "ceph_lv2",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "tags": {
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.cluster_name": "ceph",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.crush_device_class": "",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.encrypted": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.objectstore": "bluestore",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osd_id": "2",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.type": "block",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.vdo": "0",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:                "ceph.with_tpm": "0"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            },
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "type": "block",
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:            "vg_name": "ceph_vg2"
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:        }
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]:    ]
Feb 28 05:22:31 np0005634017 unruffled_mendeleev[336001]: }
Feb 28 05:22:31 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:22:31 np0005634017 systemd[1]: libpod-84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d.scope: Deactivated successfully.
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.519999097 +0000 UTC m=+0.477399570 container died 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:22:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4-merged.mount: Deactivated successfully.
Feb 28 05:22:31 np0005634017 podman[335985]: 2026-02-28 10:22:31.559165438 +0000 UTC m=+0.516565891 container remove 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:22:31 np0005634017 systemd[1]: libpod-conmon-84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d.scope: Deactivated successfully.
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433513059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:31 np0005634017 nova_compute[243452]: 2026-02-28 10:22:31.966 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:32 np0005634017 podman[336105]: 2026-02-28 10:22:32.061825997 +0000 UTC m=+0.054681300 container create 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.074 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.076 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.080 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.081 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.085 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.085 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:22:32 np0005634017 systemd[1]: Started libpod-conmon-94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba.scope.
Feb 28 05:22:32 np0005634017 podman[336105]: 2026-02-28 10:22:32.03354553 +0000 UTC m=+0.026400853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:22:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:32 np0005634017 podman[336105]: 2026-02-28 10:22:32.159398073 +0000 UTC m=+0.152253726 container init 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 05:22:32 np0005634017 podman[336105]: 2026-02-28 10:22:32.167299121 +0000 UTC m=+0.160154414 container start 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:22:32 np0005634017 podman[336105]: 2026-02-28 10:22:32.171188403 +0000 UTC m=+0.164043736 container attach 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:22:32 np0005634017 magical_hamilton[336122]: 167 167
Feb 28 05:22:32 np0005634017 systemd[1]: libpod-94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba.scope: Deactivated successfully.
Feb 28 05:22:32 np0005634017 podman[336127]: 2026-02-28 10:22:32.246585699 +0000 UTC m=+0.051426865 container died 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:22:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8da13ef9a709400e21a50f23ac7326d4078667df6931786848e5b9073e9876b9-merged.mount: Deactivated successfully.
Feb 28 05:22:32 np0005634017 podman[336127]: 2026-02-28 10:22:32.300440704 +0000 UTC m=+0.105281830 container remove 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:22:32 np0005634017 systemd[1]: libpod-conmon-94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba.scope: Deactivated successfully.
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.350 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3200MB free_disk=59.900322584435344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.495 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0bafc3af-eadf-4d97-9acf-026c531362c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 3a425770-67d6-411f-9586-1977cbc678ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:22:32 np0005634017 podman[336149]: 2026-02-28 10:22:32.505219625 +0000 UTC m=+0.047873343 container create ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:22:32 np0005634017 systemd[1]: Started libpod-conmon-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope.
Feb 28 05:22:32 np0005634017 podman[336149]: 2026-02-28 10:22:32.482981213 +0000 UTC m=+0.025634951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:22:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:22:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:22:32 np0005634017 podman[336149]: 2026-02-28 10:22:32.615391375 +0000 UTC m=+0.158045073 container init ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:22:32 np0005634017 podman[336149]: 2026-02-28 10:22:32.627154004 +0000 UTC m=+0.169807702 container start ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:22:32 np0005634017 podman[336149]: 2026-02-28 10:22:32.630405658 +0000 UTC m=+0.173059466 container attach ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:22:32 np0005634017 nova_compute[243452]: 2026-02-28 10:22:32.764 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317332672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:33 np0005634017 nova_compute[243452]: 2026-02-28 10:22:33.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:33 np0005634017 nova_compute[243452]: 2026-02-28 10:22:33.351 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:22:33 np0005634017 nova_compute[243452]: 2026-02-28 10:22:33.366 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:22:33 np0005634017 nova_compute[243452]: 2026-02-28 10:22:33.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:22:33 np0005634017 nova_compute[243452]: 2026-02-28 10:22:33.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:33 np0005634017 lvm[336263]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:22:33 np0005634017 lvm[336266]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:22:33 np0005634017 lvm[336266]: VG ceph_vg1 finished
Feb 28 05:22:33 np0005634017 lvm[336263]: VG ceph_vg0 finished
Feb 28 05:22:33 np0005634017 lvm[336268]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:22:33 np0005634017 lvm[336268]: VG ceph_vg2 finished
Feb 28 05:22:33 np0005634017 fervent_swanson[336165]: {}
Feb 28 05:22:33 np0005634017 systemd[1]: libpod-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope: Deactivated successfully.
Feb 28 05:22:33 np0005634017 systemd[1]: libpod-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope: Consumed 1.328s CPU time.
Feb 28 05:22:33 np0005634017 conmon[336165]: conmon ae1f5ba2a510d4b67933 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope/container/memory.events
Feb 28 05:22:33 np0005634017 podman[336149]: 2026-02-28 10:22:33.603311598 +0000 UTC m=+1.145965336 container died ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:22:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b-merged.mount: Deactivated successfully.
Feb 28 05:22:33 np0005634017 podman[336149]: 2026-02-28 10:22:33.647142924 +0000 UTC m=+1.189796642 container remove ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:22:33 np0005634017 systemd[1]: libpod-conmon-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope: Deactivated successfully.
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:22:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:22:34 np0005634017 nova_compute[243452]: 2026-02-28 10:22:34.009 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:22:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:22:34 np0005634017 nova_compute[243452]: 2026-02-28 10:22:34.385 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:22:34 np0005634017 nova_compute[243452]: 2026-02-28 10:22:34.386 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:22:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 166 op/s
Feb 28 05:22:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 545 KiB/s wr, 172 op/s
Feb 28 05:22:36 np0005634017 nova_compute[243452]: 2026-02-28 10:22:36.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.446 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.446 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.464 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:22:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:37Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:ac:78 10.100.0.14
Feb 28 05:22:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:37Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:ac:78 10.100.0.14
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.569 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.570 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.576 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.577 243456 INFO nova.compute.claims [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:22:37 np0005634017 nova_compute[243452]: 2026-02-28 10:22:37.925 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 338 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 841 KiB/s wr, 165 op/s
Feb 28 05:22:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158170004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.472 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.482 243456 DEBUG nova.compute.provider_tree [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.508 243456 DEBUG nova.scheduler.client.report [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.538 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.540 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.593 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.593 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.624 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.645 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:22:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.731 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.733 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.734 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Creating image(s)#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.760 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.789 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.814 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.818 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.886 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.887 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.888 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.888 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.916 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:38 np0005634017 nova_compute[243452]: 2026-02-28 10:22:38.920 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.013 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.184 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.258 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.362 243456 DEBUG nova.objects.instance [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c621cf0-85bd-40e0-8c9c-467e5da2e21b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.377 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.378 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Ensure instance console log exists: /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.378 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.379 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.379 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:39 np0005634017 nova_compute[243452]: 2026-02-28 10:22:39.412 243456 DEBUG nova.policy [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:22:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 389 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 206 op/s
Feb 28 05:22:40 np0005634017 nova_compute[243452]: 2026-02-28 10:22:40.591 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Successfully created port: 3789f2db-7dec-44ac-93d7-2712307dc094 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:22:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:40Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:f2:95 10.100.0.9
Feb 28 05:22:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:40Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:f2:95 10.100.0.9
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002142553397157101 of space, bias 1.0, pg target 0.6427660191471304 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024929816364085387 of space, bias 1.0, pg target 0.7478944909225617 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.540373750331308e-07 of space, bias 4.0, pg target 0.0009048448500397569 quantized to 16 (current 16)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:22:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:22:41 np0005634017 nova_compute[243452]: 2026-02-28 10:22:41.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:41 np0005634017 nova_compute[243452]: 2026-02-28 10:22:41.870 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Successfully updated port: 3789f2db-7dec-44ac-93d7-2712307dc094 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:22:41 np0005634017 nova_compute[243452]: 2026-02-28 10:22:41.889 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:41 np0005634017 nova_compute[243452]: 2026-02-28 10:22:41.890 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:41 np0005634017 nova_compute[243452]: 2026-02-28 10:22:41.890 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:22:42 np0005634017 nova_compute[243452]: 2026-02-28 10:22:42.000 243456 DEBUG nova.compute.manager [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-changed-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:42 np0005634017 nova_compute[243452]: 2026-02-28 10:22:42.001 243456 DEBUG nova.compute.manager [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Refreshing instance network info cache due to event network-changed-3789f2db-7dec-44ac-93d7-2712307dc094. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:22:42 np0005634017 nova_compute[243452]: 2026-02-28 10:22:42.001 243456 DEBUG oslo_concurrency.lockutils [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:42 np0005634017 nova_compute[243452]: 2026-02-28 10:22:42.089 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:22:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 389 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 123 op/s
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.276 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updating instance_info_cache with network_info: [{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.302 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.303 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance network_info: |[{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.303 243456 DEBUG oslo_concurrency.lockutils [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.304 243456 DEBUG nova.network.neutron [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Refreshing network info cache for port 3789f2db-7dec-44ac-93d7-2712307dc094 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.306 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start _get_guest_xml network_info=[{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.312 243456 WARNING nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.322 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.322 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.325 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.326 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.328 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.328 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.328 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.329 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.329 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.329 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.330 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.330 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.333 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:22:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378005444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.868 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.899 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:43 np0005634017 nova_compute[243452]: 2026-02-28 10:22:43.904 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.014 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.228 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.228 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.229 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.229 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.229 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.230 243456 INFO nova.compute.manager [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Terminating instance#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.231 243456 DEBUG nova.compute.manager [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:22:44 np0005634017 kernel: tap8356f577-07 (unregistering): left promiscuous mode
Feb 28 05:22:44 np0005634017 NetworkManager[49805]: <info>  [1772274164.2762] device (tap8356f577-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:22:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:44Z|01082|binding|INFO|Releasing lport 8356f577-07af-4575-b9ba-e2764b155dcc from this chassis (sb_readonly=0)
Feb 28 05:22:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:44Z|01083|binding|INFO|Setting lport 8356f577-07af-4575-b9ba-e2764b155dcc down in Southbound
Feb 28 05:22:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:44Z|01084|binding|INFO|Removing iface tap8356f577-07 ovn-installed in OVS
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.300 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:ac:78 10.100.0.14'], port_security=['fa:16:3e:bc:ac:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a425770-67d6-411f-9586-1977cbc678ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8356f577-07af-4575-b9ba-e2764b155dcc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.304 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8356f577-07af-4575-b9ba-e2764b155dcc in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a unbound from our chassis#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.306 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3451a2ef-e97c-49df-813f-57c35ec0999a#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.319 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8fede8-20b7-4fee-a16a-88d2dd3f46f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:44 np0005634017 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Deactivated successfully.
Feb 28 05:22:44 np0005634017 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Consumed 12.886s CPU time.
Feb 28 05:22:44 np0005634017 systemd-machined[209480]: Machine qemu-135-instance-00000069 terminated.
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[488fc8f6-875c-41f6-8e4a-740274889881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.363 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1901f27a-3a78-4189-bfc3-ad970e689bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.386 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5025843d-5721-42de-b309-7c8e5db8bc3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.400 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a71bff7-8427-4fc3-a046-5442448162ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336567, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.414 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[905f56c1-3232-4340-b471-e231723bdb9c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561746, 'tstamp': 561746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336568, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561749, 'tstamp': 561749}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336568, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.416 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3451a2ef-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3451a2ef-e0, col_values=(('external_ids', {'iface-id': '0715e649-02a0-4a60-88ea-663bf03161ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.423 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 421 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 174 op/s
Feb 28 05:22:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:22:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225991063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.444 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.445 243456 DEBUG nova.virt.libvirt.vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-828230938',display_name='tempest-ServersTestJSON-server-828230938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-828230938',id=107,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-qfm9iifg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:38Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=1c621cf0-85bd-40e0-8c9c-467e5da2e21b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.446 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.447 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.448 243456 DEBUG nova.objects.instance [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c621cf0-85bd-40e0-8c9c-467e5da2e21b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.465 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <uuid>1c621cf0-85bd-40e0-8c9c-467e5da2e21b</uuid>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <name>instance-0000006b</name>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-828230938</nova:name>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:22:43</nova:creationTime>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <nova:port uuid="3789f2db-7dec-44ac-93d7-2712307dc094">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <entry name="serial">1c621cf0-85bd-40e0-8c9c-467e5da2e21b</entry>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <entry name="uuid">1c621cf0-85bd-40e0-8c9c-467e5da2e21b</entry>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:d7:89:88"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <target dev="tap3789f2db-7d"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/console.log" append="off"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:22:44 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:22:44 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:22:44 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:22:44 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.466 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Preparing to wait for external event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.467 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.467 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.467 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.468 243456 DEBUG nova.virt.libvirt.vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-828230938',display_name='tempest-ServersTestJSON-server-828230938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-828230938',id=107,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-qfm9iifg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:38Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=1c621cf0-85bd-40e0-8c9c-467e5da2e21b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.468 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.469 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.469 243456 DEBUG os_vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.470 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.470 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.474 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3789f2db-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.475 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3789f2db-7d, col_values=(('external_ids', {'iface-id': '3789f2db-7dec-44ac-93d7-2712307dc094', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:89:88', 'vm-uuid': '1c621cf0-85bd-40e0-8c9c-467e5da2e21b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 NetworkManager[49805]: <info>  [1772274164.4773] manager: (tap3789f2db-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.478 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.479 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance destroyed successfully.#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.479 243456 DEBUG nova.objects.instance [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 3a425770-67d6-411f-9586-1977cbc678ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.486 243456 INFO os_vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d')#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.494 243456 DEBUG nova.virt.libvirt.vif [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=105,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:22:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-xny8xf69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:22:25Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=3a425770-67d6-411f-9586-1977cbc678ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.494 243456 DEBUG nova.network.os_vif_util [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.495 243456 DEBUG nova.network.os_vif_util [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.496 243456 DEBUG os_vif [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.499 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8356f577-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.507 243456 INFO os_vif [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07')#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.556 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.558 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.558 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:d7:89:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.559 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Using config drive#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.589 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.780 243456 INFO nova.virt.libvirt.driver [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deleting instance files /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed_del#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.781 243456 INFO nova.virt.libvirt.driver [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deletion of /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed_del complete#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.988 243456 INFO nova.compute.manager [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.989 243456 DEBUG oslo.service.loopingcall [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.989 243456 DEBUG nova.compute.manager [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:22:44 np0005634017 nova_compute[243452]: 2026-02-28 10:22:44.990 243456 DEBUG nova.network.neutron [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.023 243456 DEBUG nova.network.neutron [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updated VIF entry in instance network info cache for port 3789f2db-7dec-44ac-93d7-2712307dc094. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.023 243456 DEBUG nova.network.neutron [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updating instance_info_cache with network_info: [{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.039 243456 DEBUG oslo_concurrency.lockutils [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:22:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:22:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463702608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:22:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:22:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463702608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.681 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Creating config drive at /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.689 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnny66t7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.833 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnny66t7b" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.872 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:22:45 np0005634017 nova_compute[243452]: 2026-02-28 10:22:45.877 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.016 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.018 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deleting local config drive /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config because it was imported into RBD.#033[00m
Feb 28 05:22:46 np0005634017 kernel: tap3789f2db-7d: entered promiscuous mode
Feb 28 05:22:46 np0005634017 NetworkManager[49805]: <info>  [1772274166.0681] manager: (tap3789f2db-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Feb 28 05:22:46 np0005634017 systemd-udevd[336558]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:22:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:46Z|01085|binding|INFO|Claiming lport 3789f2db-7dec-44ac-93d7-2712307dc094 for this chassis.
Feb 28 05:22:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:46Z|01086|binding|INFO|3789f2db-7dec-44ac-93d7-2712307dc094: Claiming fa:16:3e:d7:89:88 10.100.0.10
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.078 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:89:88 10.100.0.10'], port_security=['fa:16:3e:d7:89:88 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1c621cf0-85bd-40e0-8c9c-467e5da2e21b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3789f2db-7dec-44ac-93d7-2712307dc094) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.080 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3789f2db-7dec-44ac-93d7-2712307dc094 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis#033[00m
Feb 28 05:22:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:46Z|01087|binding|INFO|Setting lport 3789f2db-7dec-44ac-93d7-2712307dc094 ovn-installed in OVS
Feb 28 05:22:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:46Z|01088|binding|INFO|Setting lport 3789f2db-7dec-44ac-93d7-2712307dc094 up in Southbound
Feb 28 05:22:46 np0005634017 NetworkManager[49805]: <info>  [1772274166.0865] device (tap3789f2db-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.086 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:46 np0005634017 NetworkManager[49805]: <info>  [1772274166.0891] device (tap3789f2db-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.108 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66538b13-a6cd-4301-8298-0b93b8245a3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:46 np0005634017 systemd-machined[209480]: New machine qemu-137-instance-0000006b.
Feb 28 05:22:46 np0005634017 systemd[1]: Started Virtual Machine qemu-137-instance-0000006b.
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.141 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[436f89fe-5ea9-42c3-8fb2-55bb25a18bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.142 243456 DEBUG nova.network.neutron [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.145 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fb796d7e-552d-4dcb-819b-cf7b1eb6d187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.163 243456 INFO nova.compute.manager [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 1.17 seconds to deallocate network for instance.#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.180 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f58adbaf-55d3-41f8-afa1-31399e9061ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a185a71a-21b7-4204-bb49-6bd4d1c249a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336688, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.212 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94039fda-ae90-40ef-83c0-308babc9b7f4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336690, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336690, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.220 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.220 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.221 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.231 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.231 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.352 243456 DEBUG oslo_concurrency.processutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.404 243456 DEBUG nova.compute.manager [req-10ab324e-cc10-462e-a99a-7f40675746d0 req-46312ded-84e1-4cab-8c20-0183949abf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-vif-deleted-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 417 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 908 KiB/s rd, 6.0 MiB/s wr, 167 op/s
Feb 28 05:22:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036863761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.893 243456 DEBUG oslo_concurrency.processutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.902 243456 DEBUG nova.compute.provider_tree [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.919 243456 DEBUG nova.scheduler.client.report [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.954 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:46 np0005634017 nova_compute[243452]: 2026-02-28 10:22:46.983 243456 INFO nova.scheduler.client.report [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 3a425770-67d6-411f-9586-1977cbc678ed#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.053 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274167.0523014, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.053 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Started (Lifecycle Event)#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.065 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.077 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.087 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274167.0569365, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.087 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.108 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.114 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:22:47 np0005634017 nova_compute[243452]: 2026-02-28 10:22:47.147 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:22:47 np0005634017 podman[336756]: 2026-02-28 10:22:47.155577462 +0000 UTC m=+0.096273970 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Feb 28 05:22:47 np0005634017 podman[336755]: 2026-02-28 10:22:47.157722684 +0000 UTC m=+0.101145370 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:22:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 398 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 6.0 MiB/s wr, 171 op/s
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.643 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.643 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.644 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.644 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.644 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Processing event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.645 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.645 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.645 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.646 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.646 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] No waiting events found dispatching network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.647 243456 WARNING nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received unexpected event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.648 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.652 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274168.652321, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.653 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.655 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.659 243456 INFO nova.virt.libvirt.driver [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance spawned successfully.#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.660 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.676 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.690 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.695 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.696 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.697 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.697 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.698 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.699 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.710 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.755 243456 INFO nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 10.02 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.756 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.812 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.813 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.814 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.814 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.815 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.817 243456 INFO nova.compute.manager [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Terminating instance#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.819 243456 DEBUG nova.compute.manager [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.841 243456 INFO nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 11.30 seconds to build instance.#033[00m
Feb 28 05:22:48 np0005634017 kernel: tap09ffa25b-e3 (unregistering): left promiscuous mode
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.865 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:48 np0005634017 NetworkManager[49805]: <info>  [1772274168.8676] device (tap09ffa25b-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.872 243456 DEBUG nova.compute.manager [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.872 243456 DEBUG nova.compute.manager [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing instance network info cache due to event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.873 243456 DEBUG oslo_concurrency.lockutils [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.873 243456 DEBUG oslo_concurrency.lockutils [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.873 243456 DEBUG nova.network.neutron [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:22:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:48Z|01089|binding|INFO|Releasing lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 from this chassis (sb_readonly=0)
Feb 28 05:22:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:48Z|01090|binding|INFO|Setting lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 down in Southbound
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:48Z|01091|binding|INFO|Removing iface tap09ffa25b-e3 ovn-installed in OVS
Feb 28 05:22:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.896 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:93 10.100.0.9'], port_security=['fa:16:3e:93:cc:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0bafc3af-eadf-4d97-9acf-026c531362c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d9d1441-5ce1-4022-9f50-b5399f868b07 88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09ffa25b-e3df-45c2-9db2-423ed33e2a28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.897 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a unbound from our chassis#033[00m
Feb 28 05:22:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.900 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3451a2ef-e97c-49df-813f-57c35ec0999a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:22:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.901 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58b96399-fc83-4b6a-934d-f3093c37279c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.902 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a namespace which is not needed anymore#033[00m
Feb 28 05:22:48 np0005634017 nova_compute[243452]: 2026-02-28 10:22:48.902 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:48 np0005634017 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d00000068.scope: Deactivated successfully.
Feb 28 05:22:48 np0005634017 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d00000068.scope: Consumed 14.703s CPU time.
Feb 28 05:22:48 np0005634017 systemd-machined[209480]: Machine qemu-134-instance-00000068 terminated.
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:49 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : haproxy version is 2.8.14-c23fe91
Feb 28 05:22:49 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : path to executable is /usr/sbin/haproxy
Feb 28 05:22:49 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [WARNING]  (334581) : Exiting Master process...
Feb 28 05:22:49 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [ALERT]    (334581) : Current worker (334583) exited with code 143 (Terminated)
Feb 28 05:22:49 np0005634017 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [WARNING]  (334581) : All workers exited. Exiting... (0)
Feb 28 05:22:49 np0005634017 systemd[1]: libpod-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4.scope: Deactivated successfully.
Feb 28 05:22:49 np0005634017 podman[336820]: 2026-02-28 10:22:49.05443367 +0000 UTC m=+0.067273543 container died b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.055 243456 INFO nova.virt.libvirt.driver [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance destroyed successfully.#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.057 243456 DEBUG nova.objects.instance [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.076 243456 DEBUG nova.virt.libvirt.vif [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=104,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-qp97324q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:21:52Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=0bafc3af-eadf-4d97-9acf-026c531362c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.076 243456 DEBUG nova.network.os_vif_util [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.078 243456 DEBUG nova.network.os_vif_util [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.078 243456 DEBUG os_vif [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.081 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ffa25b-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.090 243456 INFO os_vif [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3')#033[00m
Feb 28 05:22:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4-userdata-shm.mount: Deactivated successfully.
Feb 28 05:22:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-aefe496d22c05c4788569ae7202c870c2d3e386af2fddcdb07f716cc705ac851-merged.mount: Deactivated successfully.
Feb 28 05:22:49 np0005634017 podman[336820]: 2026-02-28 10:22:49.116774709 +0000 UTC m=+0.129614492 container cleanup b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:22:49 np0005634017 systemd[1]: libpod-conmon-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4.scope: Deactivated successfully.
Feb 28 05:22:49 np0005634017 podman[336878]: 2026-02-28 10:22:49.196281994 +0000 UTC m=+0.055623227 container remove b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[742a65e3-9fac-4f4d-b140-e7ad28686d3d]: (4, ('Sat Feb 28 10:22:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a (b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4)\nb54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4\nSat Feb 28 10:22:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a (b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4)\nb54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45ec6928-31af-4973-9a6c-38297864a2cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.208 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:49 np0005634017 kernel: tap3451a2ef-e0: left promiscuous mode
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2638c-4e94-456e-a084-31c20874fe46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e51e820c-e4be-4266-a8e1-29535c38f48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.230 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3be867-4b23-4ba7-aead-b0c140a2c10e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10c3eaf0-fea8-4f25-9924-2a2f8c6044a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561724, 'reachable_time': 18181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336890, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 systemd[1]: run-netns-ovnmeta\x2d3451a2ef\x2de97c\x2d49df\x2d813f\x2d57c35ec0999a.mount: Deactivated successfully.
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.257 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:22:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.257 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cb531358-c855-41fc-bd78-82bd7c241be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.397 243456 INFO nova.virt.libvirt.driver [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deleting instance files /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3_del#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.398 243456 INFO nova.virt.libvirt.driver [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deletion of /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3_del complete#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.461 243456 INFO nova.compute.manager [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.461 243456 DEBUG oslo.service.loopingcall [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.462 243456 DEBUG nova.compute.manager [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:22:49 np0005634017 nova_compute[243452]: 2026-02-28 10:22:49.462 243456 DEBUG nova.network.neutron [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.311 243456 DEBUG nova.network.neutron [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.332 243456 INFO nova.compute.manager [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 0.87 seconds to deallocate network for instance.#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.396 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.397 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 324 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 218 op/s
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.562 243456 DEBUG oslo_concurrency.processutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.642 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.645 243456 INFO nova.compute.manager [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Terminating instance#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.646 243456 DEBUG nova.compute.manager [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:22:50 np0005634017 kernel: tap3789f2db-7d (unregistering): left promiscuous mode
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.675 243456 DEBUG nova.network.neutron [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated VIF entry in instance network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.677 243456 DEBUG nova.network.neutron [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:22:50 np0005634017 NetworkManager[49805]: <info>  [1772274170.6776] device (tap3789f2db-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:50Z|01092|binding|INFO|Releasing lport 3789f2db-7dec-44ac-93d7-2712307dc094 from this chassis (sb_readonly=0)
Feb 28 05:22:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:50Z|01093|binding|INFO|Setting lport 3789f2db-7dec-44ac-93d7-2712307dc094 down in Southbound
Feb 28 05:22:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:50Z|01094|binding|INFO|Removing iface tap3789f2db-7d ovn-installed in OVS
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.696 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:89:88 10.100.0.10'], port_security=['fa:16:3e:d7:89:88 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1c621cf0-85bd-40e0-8c9c-467e5da2e21b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3789f2db-7dec-44ac-93d7-2712307dc094) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.698 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3789f2db-7dec-44ac-93d7-2712307dc094 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.699 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.701 243456 DEBUG oslo_concurrency.lockutils [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.711 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb68ef8-bd54-4b7c-b0f1-14899aeb162f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:50 np0005634017 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Feb 28 05:22:50 np0005634017 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Consumed 2.932s CPU time.
Feb 28 05:22:50 np0005634017 systemd-machined[209480]: Machine qemu-137-instance-0000006b terminated.
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.737 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed180c9-f375-48ac-86bd-813d00c0a9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.740 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[208952a6-98b9-40ad-876b-7e1a7ea757d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.765 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3894dd-fc82-4510-9cfb-ce09cc903bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[561044a7-9b97-42d3-9c34-fb49cbcb211c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336922, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.800 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[789cbbc4-a8e6-4632-9c0a-4a823942e6f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336923, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336923, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.802 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.807 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.808 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.809 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.810 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.877 243456 INFO nova.virt.libvirt.driver [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance destroyed successfully.#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.878 243456 DEBUG nova.objects.instance [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 1c621cf0-85bd-40e0-8c9c-467e5da2e21b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.969 243456 DEBUG nova.virt.libvirt.vif [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-828230938',display_name='tempest-ServersTestJSON-server-828230938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-828230938',id=107,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:22:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-qfm9iifg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:22:48Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=1c621cf0-85bd-40e0-8c9c-467e5da2e21b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.970 243456 DEBUG nova.network.os_vif_util [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.971 243456 DEBUG nova.network.os_vif_util [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.971 243456 DEBUG os_vif [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.974 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3789f2db-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:22:50 np0005634017 nova_compute[243452]: 2026-02-28 10:22:50.979 243456 INFO os_vif [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d')#033[00m
Feb 28 05:22:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690108448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.118 243456 DEBUG oslo_concurrency.processutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.126 243456 DEBUG nova.compute.provider_tree [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.146 243456 DEBUG nova.scheduler.client.report [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.172 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.211 243456 INFO nova.scheduler.client.report [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 0bafc3af-eadf-4d97-9acf-026c531362c3#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.257 243456 INFO nova.virt.libvirt.driver [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deleting instance files /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_del#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.258 243456 INFO nova.virt.libvirt.driver [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deletion of /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_del complete#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.265 243456 DEBUG nova.compute.manager [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-unplugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.265 243456 DEBUG oslo_concurrency.lockutils [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.266 243456 DEBUG oslo_concurrency.lockutils [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.266 243456 DEBUG oslo_concurrency.lockutils [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.266 243456 DEBUG nova.compute.manager [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] No waiting events found dispatching network-vif-unplugged-3789f2db-7dec-44ac-93d7-2712307dc094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.267 243456 DEBUG nova.compute.manager [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-unplugged-3789f2db-7dec-44ac-93d7-2712307dc094 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.285 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.317 243456 INFO nova.compute.manager [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.318 243456 DEBUG oslo.service.loopingcall [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.318 243456 DEBUG nova.compute.manager [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.318 243456 DEBUG nova.network.neutron [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.388 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-unplugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.389 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.389 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.390 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.390 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] No waiting events found dispatching network-vif-unplugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.390 243456 WARNING nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received unexpected event network-vif-unplugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 for instance with vm_state deleted and task_state None.
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.391 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.391 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.392 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.392 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.393 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] No waiting events found dispatching network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.393 243456 WARNING nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received unexpected event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 for instance with vm_state deleted and task_state None.
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.394 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-deleted-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.394 243456 INFO nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Neutron deleted interface 09ffa25b-e3df-45c2-9db2-423ed33e2a28; detaching it from the instance and deleting it from the info cache
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.394 243456 DEBUG nova.network.neutron [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 05:22:51 np0005634017 nova_compute[243452]: 2026-02-28 10:22:51.398 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Detach interface failed, port_id=09ffa25b-e3df-45c2-9db2-423ed33e2a28, reason: Instance 0bafc3af-eadf-4d97-9acf-026c531362c3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.282 243456 DEBUG nova.network.neutron [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.304 243456 INFO nova.compute.manager [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 0.99 seconds to deallocate network for instance.
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.359 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.360 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.426 243456 DEBUG oslo_concurrency.processutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:22:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 324 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Feb 28 05:22:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330349262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.966 243456 DEBUG oslo_concurrency.processutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.971 243456 DEBUG nova.compute.provider_tree [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:22:52 np0005634017 nova_compute[243452]: 2026-02-28 10:22:52.990 243456 DEBUG nova.scheduler.client.report [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.007 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.028 243456 INFO nova.scheduler.client.report [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 1c621cf0-85bd-40e0-8c9c-467e5da2e21b
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.104 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.365 243456 DEBUG nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.366 243456 DEBUG oslo_concurrency.lockutils [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.366 243456 DEBUG oslo_concurrency.lockutils [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.367 243456 DEBUG oslo_concurrency.lockutils [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.367 243456 DEBUG nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] No waiting events found dispatching network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.368 243456 WARNING nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received unexpected event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 for instance with vm_state deleted and task_state None.
Feb 28 05:22:53 np0005634017 nova_compute[243452]: 2026-02-28 10:22:53.368 243456 DEBUG nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-deleted-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:22:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:54 np0005634017 nova_compute[243452]: 2026-02-28 10:22:54.017 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:22:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 262 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Feb 28 05:22:55 np0005634017 nova_compute[243452]: 2026-02-28 10:22:55.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:22:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:56Z|01095|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:22:56 np0005634017 nova_compute[243452]: 2026-02-28 10:22:56.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:22:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:22:56Z|01096|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:22:56 np0005634017 nova_compute[243452]: 2026-02-28 10:22:56.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:22:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Feb 28 05:22:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:57.861 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:22:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 154 op/s
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.493 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.494 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.526 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.625 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.626 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.636 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.637 243456 INFO nova.compute.claims [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:22:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:22:58 np0005634017 nova_compute[243452]: 2026-02-28 10:22:58.782 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:22:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:22:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058618406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.330 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.334 243456 DEBUG nova.compute.provider_tree [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.355 243456 DEBUG nova.scheduler.client.report [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.429 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.429 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.472 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274164.4663682, 3a425770-67d6-411f-9586-1977cbc678ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.472 243456 INFO nova.compute.manager [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Stopped (Lifecycle Event)
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.481 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.482 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.497 243456 DEBUG nova.compute.manager [None req-35db4545-0ad0-450c-90bd-e43342cf96c7 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.511 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.540 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.675 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.677 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.678 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Creating image(s)
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.712 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.749 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.786 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.791 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.874 243456 DEBUG nova.policy [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.880 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.882 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.883 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.883 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.924 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:22:59 np0005634017 nova_compute[243452]: 2026-02-28 10:22:59.931 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 44a09c36-3876-4513-8285-1d4aedc2ec68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.236 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 44a09c36-3876-4513-8285-1d4aedc2ec68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.321 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:23:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 247 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 530 KiB/s wr, 152 op/s
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.434 243456 DEBUG nova.objects.instance [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 44a09c36-3876-4513-8285-1d4aedc2ec68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.454 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.455 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Ensure instance console log exists: /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.455 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.456 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.457 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:00 np0005634017 nova_compute[243452]: 2026-02-28 10:23:00.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:01 np0005634017 nova_compute[243452]: 2026-02-28 10:23:01.350 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Successfully created port: d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:23:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 247 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1020 KiB/s rd, 504 KiB/s wr, 83 op/s
Feb 28 05:23:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:03 np0005634017 nova_compute[243452]: 2026-02-28 10:23:03.867 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Successfully updated port: d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:23:03 np0005634017 nova_compute[243452]: 2026-02-28 10:23:03.895 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:03 np0005634017 nova_compute[243452]: 2026-02-28 10:23:03.895 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:03 np0005634017 nova_compute[243452]: 2026-02-28 10:23:03.896 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.025 243456 DEBUG nova.compute.manager [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-changed-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.025 243456 DEBUG nova.compute.manager [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Refreshing instance network info cache due to event network-changed-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.026 243456 DEBUG oslo_concurrency.lockutils [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.054 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274169.0534127, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.054 243456 INFO nova.compute.manager [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.080 243456 DEBUG nova.compute.manager [None req-e8de3ef8-9958-4170-96f4-7489508016fd - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:04 np0005634017 nova_compute[243452]: 2026-02-28 10:23:04.400 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:23:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 266 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.5 MiB/s wr, 95 op/s
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.722 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updating instance_info_cache with network_info: [{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.751 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.752 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance network_info: |[{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.753 243456 DEBUG oslo_concurrency.lockutils [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.754 243456 DEBUG nova.network.neutron [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Refreshing network info cache for port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.758 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start _get_guest_xml network_info=[{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.765 243456 WARNING nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.771 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.772 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.785 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.786 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.786 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.787 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.788 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.789 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.789 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.790 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.790 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.791 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.791 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.792 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.792 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.793 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.798 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.876 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274170.8753576, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.877 243456 INFO nova.compute.manager [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.904 243456 DEBUG nova.compute.manager [None req-22f141ab-8034-415f-9f3f-70ba2ff4ed2d - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:05 np0005634017 nova_compute[243452]: 2026-02-28 10:23:05.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213914873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:06 np0005634017 nova_compute[243452]: 2026-02-28 10:23:06.365 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:06 np0005634017 nova_compute[243452]: 2026-02-28 10:23:06.394 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:06 np0005634017 nova_compute[243452]: 2026-02-28 10:23:06.400 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 28 05:23:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278732323' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.008 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.010 243456 DEBUG nova.virt.libvirt.vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-925426756',display_name='tempest-ServersTestJSON-server-925426756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-925426756',id=108,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLaWmqbATX1tMQaqkwgwWynVJNR8yYFwj3L4iJiJyj4lWHU05AbI0aLcvMcXZfAOON+EkQ1bGm552xW05v6R9BOUu6oYre0g1cnU2TpdCVUO7YJAOjpRb0+YCxAujwtdhA==',key_name='tempest-key-1054399794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-q00sjeb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:59Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=44a09c36-3876-4513-8285-1d4aedc2ec68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.010 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.011 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.012 243456 DEBUG nova.objects.instance [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44a09c36-3876-4513-8285-1d4aedc2ec68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.029 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <uuid>44a09c36-3876-4513-8285-1d4aedc2ec68</uuid>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <name>instance-0000006c</name>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-925426756</nova:name>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:23:05</nova:creationTime>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <nova:port uuid="d79e5c2c-3bc4-4c60-842c-c52c58b15ff3">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <entry name="serial">44a09c36-3876-4513-8285-1d4aedc2ec68</entry>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <entry name="uuid">44a09c36-3876-4513-8285-1d4aedc2ec68</entry>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/44a09c36-3876-4513-8285-1d4aedc2ec68_disk">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:28:2a:42"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <target dev="tapd79e5c2c-3b"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/console.log" append="off"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:23:07 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:23:07 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:23:07 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:23:07 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.030 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Preparing to wait for external event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.030 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.030 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.031 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.031 243456 DEBUG nova.virt.libvirt.vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-925426756',display_name='tempest-ServersTestJSON-server-925426756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-925426756',id=108,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLaWmqbATX1tMQaqkwgwWynVJNR8yYFwj3L4iJiJyj4lWHU05AbI0aLcvMcXZfAOON+EkQ1bGm552xW05v6R9BOUu6oYre0g1cnU2TpdCVUO7YJAOjpRb0+YCxAujwtdhA==',key_name='tempest-key-1054399794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-q00sjeb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:59Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=44a09c36-3876-4513-8285-1d4aedc2ec68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.032 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.032 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.033 243456 DEBUG os_vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.033 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.033 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.034 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.038 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd79e5c2c-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.039 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd79e5c2c-3b, col_values=(('external_ids', {'iface-id': 'd79e5c2c-3bc4-4c60-842c-c52c58b15ff3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:2a:42', 'vm-uuid': '44a09c36-3876-4513-8285-1d4aedc2ec68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:07 np0005634017 NetworkManager[49805]: <info>  [1772274187.0421] manager: (tapd79e5c2c-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.047 243456 INFO os_vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b')#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.114 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.114 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.114 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:28:2a:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.115 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Using config drive#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.135 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.649 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Creating config drive at /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.656 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo6b2l10g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.800 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo6b2l10g" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.837 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:07 np0005634017 nova_compute[243452]: 2026-02-28 10:23:07.843 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.008 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.010 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deleting local config drive /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config because it was imported into RBD.#033[00m
Feb 28 05:23:08 np0005634017 kernel: tapd79e5c2c-3b: entered promiscuous mode
Feb 28 05:23:08 np0005634017 NetworkManager[49805]: <info>  [1772274188.0622] manager: (tapd79e5c2c-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Feb 28 05:23:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:08Z|01097|binding|INFO|Claiming lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 for this chassis.
Feb 28 05:23:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:08Z|01098|binding|INFO|d79e5c2c-3bc4-4c60-842c-c52c58b15ff3: Claiming fa:16:3e:28:2a:42 10.100.0.4
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.073 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:2a:42 10.100.0.4'], port_security=['fa:16:3e:28:2a:42 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '44a09c36-3876-4513-8285-1d4aedc2ec68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.074 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:08Z|01099|binding|INFO|Setting lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 ovn-installed in OVS
Feb 28 05:23:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:08Z|01100|binding|INFO|Setting lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 up in Southbound
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.076 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.078 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.078 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:23:08 np0005634017 systemd-udevd[337300]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:23:08 np0005634017 NetworkManager[49805]: <info>  [1772274188.1004] device (tapd79e5c2c-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:23:08 np0005634017 NetworkManager[49805]: <info>  [1772274188.1025] device (tapd79e5c2c-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.104 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6a9e8b-04a5-439a-bc98-2fe43b458b93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:08 np0005634017 systemd-machined[209480]: New machine qemu-138-instance-0000006c.
Feb 28 05:23:08 np0005634017 systemd[1]: Started Virtual Machine qemu-138-instance-0000006c.
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.132 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c76ae0bb-71d8-4ae0-9cbe-ff18506eb496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.136 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af9b5d8d-4887-4370-a276-91636ec89b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.171 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5db4f43c-abbd-4717-afe1-b0f653e353bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.191 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66f44932-4d53-4540-af0b-55a3def94c7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337317, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[deaad118-b819-477b-a576-64023af03d36]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337318, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337318, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.211 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:08 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.443 243456 DEBUG nova.compute.manager [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.443 243456 DEBUG oslo_concurrency.lockutils [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.444 243456 DEBUG oslo_concurrency.lockutils [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.444 243456 DEBUG oslo_concurrency.lockutils [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:08 np0005634017 nova_compute[243452]: 2026-02-28 10:23:08.444 243456 DEBUG nova.compute.manager [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Processing event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:23:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.143 243456 DEBUG nova.network.neutron [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updated VIF entry in instance network info cache for port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.144 243456 DEBUG nova.network.neutron [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updating instance_info_cache with network_info: [{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.167 243456 DEBUG oslo_concurrency.lockutils [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.456 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274189.4554343, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.457 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Started (Lifecycle Event)#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.459 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.463 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.467 243456 INFO nova.virt.libvirt.driver [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance spawned successfully.#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.467 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.496 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.502 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.502 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.503 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.503 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.504 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.505 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.510 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.558 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.559 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274189.4575496, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.559 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.570 243456 INFO nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 9.89 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.571 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.588 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.592 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274189.4616396, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.592 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.634 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.639 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.673 243456 INFO nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 11.08 seconds to build instance.#033[00m
Feb 28 05:23:09 np0005634017 nova_compute[243452]: 2026-02-28 10:23:09.716 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 05:23:10 np0005634017 nova_compute[243452]: 2026-02-28 10:23:10.709 243456 DEBUG nova.compute.manager [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:10 np0005634017 nova_compute[243452]: 2026-02-28 10:23:10.710 243456 DEBUG oslo_concurrency.lockutils [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:10 np0005634017 nova_compute[243452]: 2026-02-28 10:23:10.711 243456 DEBUG oslo_concurrency.lockutils [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:10 np0005634017 nova_compute[243452]: 2026-02-28 10:23:10.711 243456 DEBUG oslo_concurrency.lockutils [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:10 np0005634017 nova_compute[243452]: 2026-02-28 10:23:10.712 243456 DEBUG nova.compute.manager [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] No waiting events found dispatching network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:10 np0005634017 nova_compute[243452]: 2026-02-28 10:23:10.712 243456 WARNING nova.compute.manager [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received unexpected event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:23:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:23:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8152 writes, 36K keys, 8152 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8152 writes, 8152 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1516 writes, 6822 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s#012Interval WAL: 1516 writes, 1516 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.4      0.65              0.11        21    0.031       0      0       0.0       0.0#012  L6      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.7    142.1    117.6      1.32              0.39        20    0.066    103K    11K       0.0       0.0#012 Sum      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     95.4    100.1      1.97              0.50        41    0.048    103K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   6.0    137.4    140.0      0.37              0.14        10    0.037     32K   3070       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    142.1    117.6      1.32              0.39        20    0.066    103K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.9      0.64              0.11        20    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.041, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 2.0 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 22.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000295 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1442,21.59 MB,7.10186%) FilterBlock(42,295.86 KB,0.0950412%) IndexBlock(42,524.14 KB,0.168374%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.526 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.527 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.527 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.528 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.528 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.530 243456 INFO nova.compute.manager [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Terminating instance#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.532 243456 DEBUG nova.compute.manager [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:23:12 np0005634017 kernel: tapd79e5c2c-3b (unregistering): left promiscuous mode
Feb 28 05:23:12 np0005634017 NetworkManager[49805]: <info>  [1772274192.5788] device (tapd79e5c2c-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:23:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:12Z|01101|binding|INFO|Releasing lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 from this chassis (sb_readonly=0)
Feb 28 05:23:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:12Z|01102|binding|INFO|Setting lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 down in Southbound
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:12Z|01103|binding|INFO|Removing iface tapd79e5c2c-3b ovn-installed in OVS
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.592 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:2a:42 10.100.0.4'], port_security=['fa:16:3e:28:2a:42 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '44a09c36-3876-4513-8285-1d4aedc2ec68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.596 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.598 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.600 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:23:12 np0005634017 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Feb 28 05:23:12 np0005634017 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Consumed 4.515s CPU time.
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.620 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d819eb-8fd1-4be9-8fe6-f7d78408b46b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:12 np0005634017 systemd-machined[209480]: Machine qemu-138-instance-0000006c terminated.
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.661 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[796fe9d6-5f8b-4568-a51a-a5ca91a7f7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.666 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce8cfe3-4dc0-4e66-b646-979cc2624aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.708 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[63d1c5bf-effa-4e55-9e54-76744af69364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.734 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6a0716-8dce-46b6-92d4-29a374eb9e93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337374, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:12 np0005634017 kernel: tapd79e5c2c-3b: entered promiscuous mode
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c04373b9-bf0a-4a38-a01f-7557393513e1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337376, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337376, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:12 np0005634017 kernel: tapd79e5c2c-3b (unregistering): left promiscuous mode
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 NetworkManager[49805]: <info>  [1772274192.7664] manager: (tapd79e5c2c-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.771 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.784 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.785 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.785 243456 INFO nova.virt.libvirt.driver [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance destroyed successfully.#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.786 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.786 243456 DEBUG nova.objects.instance [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 44a09c36-3876-4513-8285-1d4aedc2ec68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.786 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.805 243456 DEBUG nova.virt.libvirt.vif [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-925426756',display_name='tempest-ServersTestJSON-server-925426756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-925426756',id=108,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLaWmqbATX1tMQaqkwgwWynVJNR8yYFwj3L4iJiJyj4lWHU05AbI0aLcvMcXZfAOON+EkQ1bGm552xW05v6R9BOUu6oYre0g1cnU2TpdCVUO7YJAOjpRb0+YCxAujwtdhA==',key_name='tempest-key-1054399794',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-q00sjeb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:09Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=44a09c36-3876-4513-8285-1d4aedc2ec68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.806 243456 DEBUG nova.network.os_vif_util [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.807 243456 DEBUG nova.network.os_vif_util [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.807 243456 DEBUG os_vif [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.810 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd79e5c2c-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:12 np0005634017 nova_compute[243452]: 2026-02-28 10:23:12.818 243456 INFO os_vif [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b')#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.126 243456 INFO nova.virt.libvirt.driver [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deleting instance files /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68_del#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.126 243456 INFO nova.virt.libvirt.driver [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deletion of /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68_del complete#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.192 243456 INFO nova.compute.manager [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.192 243456 DEBUG oslo.service.loopingcall [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.193 243456 DEBUG nova.compute.manager [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.193 243456 DEBUG nova.network.neutron [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.430 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.430 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.456 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.545 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.546 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.554 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.555 243456 INFO nova.compute.claims [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:23:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:13 np0005634017 nova_compute[243452]: 2026-02-28 10:23:13.722 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.225 243456 DEBUG nova.network.neutron [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.248 243456 INFO nova.compute.manager [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 1.05 seconds to deallocate network for instance.#033[00m
Feb 28 05:23:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176746023' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.288 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.296 243456 DEBUG nova.compute.provider_tree [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.320 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.323 243456 DEBUG nova.scheduler.client.report [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.350 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.351 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.357 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.396 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.396 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.420 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:23:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 68 op/s
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.443 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.448 243456 DEBUG oslo_concurrency.processutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.488 243456 DEBUG nova.compute.manager [req-dd2317cd-ed01-46e8-8810-7f6a0c3661d5 req-ed02bac0-c7ea-4e06-b877-69b8c4ed5f25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-vif-deleted-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.566 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.567 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.568 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Creating image(s)#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.595 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.625 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.655 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.660 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.702 243456 DEBUG nova.policy [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.752 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.753 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.754 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.754 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.783 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:14 np0005634017 nova_compute[243452]: 2026-02-28 10:23:14.790 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 027ce924-b530-4917-956c-ab66555058b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929258099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.049 243456 DEBUG oslo_concurrency.processutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.056 243456 DEBUG nova.compute.provider_tree [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.062 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 027ce924-b530-4917-956c-ab66555058b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.092 243456 DEBUG nova.scheduler.client.report [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.131 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.139 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 027ce924-b530-4917-956c-ab66555058b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.171 243456 INFO nova.scheduler.client.report [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 44a09c36-3876-4513-8285-1d4aedc2ec68#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.242 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.250 243456 DEBUG nova.objects.instance [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 027ce924-b530-4917-956c-ab66555058b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.274 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.275 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Ensure instance console log exists: /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.276 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.276 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.276 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:15 np0005634017 nova_compute[243452]: 2026-02-28 10:23:15.800 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Successfully created port: 415ef63e-f355-4d3a-a625-bee99de661ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:23:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 264 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 617 KiB/s wr, 104 op/s
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.273 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Successfully updated port: 415ef63e-f355-4d3a-a625-bee99de661ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.290 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.291 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.291 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.533 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.890 243456 DEBUG nova.compute.manager [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.891 243456 DEBUG nova.compute.manager [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing instance network info cache due to event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:17 np0005634017 nova_compute[243452]: 2026-02-28 10:23:17.891 243456 DEBUG oslo_concurrency.lockutils [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:18 np0005634017 podman[337616]: 2026-02-28 10:23:18.173107534 +0000 UTC m=+0.096000432 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 28 05:23:18 np0005634017 podman[337615]: 2026-02-28 10:23:18.177218222 +0000 UTC m=+0.105416313 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 05:23:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 260 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 771 KiB/s wr, 114 op/s
Feb 28 05:23:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.688 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:18.688 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:18.690 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:23:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:18.692 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.830 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.830 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.858 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.909 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.928 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.929 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance network_info: |[{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.929 243456 DEBUG oslo_concurrency.lockutils [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.930 243456 DEBUG nova.network.neutron [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.935 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start _get_guest_xml network_info=[{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.941 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.942 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.946 243456 WARNING nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.959 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.960 243456 INFO nova.compute.claims [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.967 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.968 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.974 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.975 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.975 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.976 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.977 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.977 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.977 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.978 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.978 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.978 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.979 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.980 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.980 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.980 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:23:18 np0005634017 nova_compute[243452]: 2026-02-28 10:23:18.988 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.152 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371678343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.611 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.637 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.642 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783250326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.739 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.744 243456 DEBUG nova.compute.provider_tree [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.768 243456 DEBUG nova.scheduler.client.report [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.790 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.791 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.838 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.839 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.858 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.875 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.954 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.956 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.956 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Creating image(s)#033[00m
Feb 28 05:23:19 np0005634017 nova_compute[243452]: 2026-02-28 10:23:19.980 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.006 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.037 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.042 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.112 243456 DEBUG nova.policy [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.123 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.124 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.124 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.124 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4134194997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.150 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.156 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.198 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.201 243456 DEBUG nova.virt.libvirt.vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=109,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9wU8YY0rYLr+FY6zo9NLT62DKW4Etx3PelXGTWlh9GbKMYWy7V2liaoU6eFO4swsQU7hFfqtc0g/T8jGlbB4RbgIdTi8pbxGSqSx76HlaAvQoV4NMlMuhy55HuiLv7yw==',key_name='tempest-TestSecurityGroupsBasicOps-1207522769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-ag0oqgtp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:14Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=027ce924-b530-4917-956c-ab66555058b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.202 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.203 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.204 243456 DEBUG nova.objects.instance [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 027ce924-b530-4917-956c-ab66555058b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.221 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <uuid>027ce924-b530-4917-956c-ab66555058b0</uuid>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <name>instance-0000006d</name>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240</nova:name>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:23:18</nova:creationTime>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <nova:port uuid="415ef63e-f355-4d3a-a625-bee99de661ad">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <entry name="serial">027ce924-b530-4917-956c-ab66555058b0</entry>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <entry name="uuid">027ce924-b530-4917-956c-ab66555058b0</entry>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/027ce924-b530-4917-956c-ab66555058b0_disk">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/027ce924-b530-4917-956c-ab66555058b0_disk.config">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:01:49:3b"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <target dev="tap415ef63e-f3"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/console.log" append="off"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:23:20 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:23:20 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:23:20 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:23:20 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.222 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Preparing to wait for external event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.222 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.222 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.223 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.224 243456 DEBUG nova.virt.libvirt.vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=109,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9wU8YY0rYLr+FY6zo9NLT62DKW4Etx3PelXGTWlh9GbKMYWy7V2liaoU6eFO4swsQU7hFfqtc0g/T8jGlbB4RbgIdTi8pbxGSqSx76HlaAvQoV4NMlMuhy55HuiLv7yw==',key_name='tempest-TestSecurityGroupsBasicOps-1207522769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-ag0oqgtp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:14Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=027ce924-b530-4917-956c-ab66555058b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.224 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.225 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.225 243456 DEBUG os_vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.226 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.227 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.231 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.231 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap415ef63e-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.232 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap415ef63e-f3, col_values=(('external_ids', {'iface-id': '415ef63e-f355-4d3a-a625-bee99de661ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:49:3b', 'vm-uuid': '027ce924-b530-4917-956c-ab66555058b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:20 np0005634017 NetworkManager[49805]: <info>  [1772274200.2348] manager: (tap415ef63e-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.237 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.238 243456 INFO os_vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3')#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.396 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 292 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 128 op/s
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.460 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.498 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.499 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.500 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:01:49:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.501 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Using config drive#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.522 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.571 243456 DEBUG nova.objects.instance [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.583 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.583 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Ensure instance console log exists: /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.583 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.584 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:20 np0005634017 nova_compute[243452]: 2026-02-28 10:23:20.584 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.103 243456 DEBUG nova.network.neutron [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updated VIF entry in instance network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.103 243456 DEBUG nova.network.neutron [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.137 243456 DEBUG oslo_concurrency.lockutils [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.387 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Creating config drive at /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.391 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6_kp8k_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.511 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Successfully created port: 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.530 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6_kp8k_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.566 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.571 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config 027ce924-b530-4917-956c-ab66555058b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.725 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config 027ce924-b530-4917-956c-ab66555058b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.727 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deleting local config drive /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config because it was imported into RBD.#033[00m
Feb 28 05:23:21 np0005634017 kernel: tap415ef63e-f3: entered promiscuous mode
Feb 28 05:23:21 np0005634017 NetworkManager[49805]: <info>  [1772274201.7584] manager: (tap415ef63e-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/461)
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:21Z|01104|binding|INFO|Claiming lport 415ef63e-f355-4d3a-a625-bee99de661ad for this chassis.
Feb 28 05:23:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:21Z|01105|binding|INFO|415ef63e-f355-4d3a-a625-bee99de661ad: Claiming fa:16:3e:01:49:3b 10.100.0.8
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.774 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:49:3b 10.100.0.8'], port_security=['fa:16:3e:01:49:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '027ce924-b530-4917-956c-ab66555058b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ef1d2e7-20e3-4f13-87f1-05e5f962f01b 2651d5ee-6558-4826-a4a8-35378d050e16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1398b86-8c3c-44c8-8c2b-3601475652eb, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=415ef63e-f355-4d3a-a625-bee99de661ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.775 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 415ef63e-f355-4d3a-a625-bee99de661ad in datapath 17bc494c-7a5d-47b4-92b7-06dd91f131ca bound to our chassis#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.776 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bc494c-7a5d-47b4-92b7-06dd91f131ca#033[00m
Feb 28 05:23:21 np0005634017 systemd-machined[209480]: New machine qemu-139-instance-0000006d.
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.786 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3a920a-b876-49dd-acbf-f8949122ff16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.786 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17bc494c-71 in ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.788 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17bc494c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e6337f9f-4c19-4bfc-90d0-a4670d41be68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[36689852-5386-4872-b819-abded53b4834]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.795 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a1985c2f-5d3a-4a33-a69f-bddae5b39d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 systemd[1]: Started Virtual Machine qemu-139-instance-0000006d.
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:21Z|01106|binding|INFO|Setting lport 415ef63e-f355-4d3a-a625-bee99de661ad ovn-installed in OVS
Feb 28 05:23:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:21Z|01107|binding|INFO|Setting lport 415ef63e-f355-4d3a-a625-bee99de661ad up in Southbound
Feb 28 05:23:21 np0005634017 nova_compute[243452]: 2026-02-28 10:23:21.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:21 np0005634017 systemd-udevd[337987]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.819 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc40d14-2437-4785-bae2-7f166d0803b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 NetworkManager[49805]: <info>  [1772274201.8284] device (tap415ef63e-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:23:21 np0005634017 NetworkManager[49805]: <info>  [1772274201.8291] device (tap415ef63e-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.849 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65cf5616-cde0-4bfa-8bcc-8ae487064ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 NetworkManager[49805]: <info>  [1772274201.8565] manager: (tap17bc494c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/462)
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e716caf-4c90-4477-9375-12a8edf34960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.885 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fb19a366-e486-4a7e-b36c-d7879bfb1541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.888 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5f97c1c4-1de4-47ab-a9c1-ddde9f546814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 NetworkManager[49805]: <info>  [1772274201.9056] device (tap17bc494c-70): carrier: link connected
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.910 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e131ef-b6f0-437b-bc59-f9cb5cc7a508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.924 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e88270db-4f77-43e1-8a65-02887e6bf48b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bc494c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:31:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570919, 'reachable_time': 28744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338017, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b75dc445-888b-4f27-8d5b-4cc3bbd75766]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:31ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570919, 'tstamp': 570919}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338018, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.951 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[700ce6f5-e1e9-4d21-b20f-ab7ed1ba6d7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bc494c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:31:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570919, 'reachable_time': 28744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338019, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.976 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c83b68dd-ac48-4f20-b389-91bb34d12312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[09fdd8dd-9cd0-4b73-b6c0-560bf06ecbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.032 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bc494c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bc494c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:22 np0005634017 NetworkManager[49805]: <info>  [1772274202.0364] manager: (tap17bc494c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Feb 28 05:23:22 np0005634017 kernel: tap17bc494c-70: entered promiscuous mode
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.040 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bc494c-70, col_values=(('external_ids', {'iface-id': 'c07252a3-f61e-4ad2-8106-a9d8b025e5a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:22Z|01108|binding|INFO|Releasing lport c07252a3-f61e-4ad2-8106-a9d8b025e5a7 from this chassis (sb_readonly=0)
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.057 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17bc494c-7a5d-47b4-92b7-06dd91f131ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17bc494c-7a5d-47b4-92b7-06dd91f131ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd48361-3998-4b2f-b301-8b70746e1952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.059 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-17bc494c-7a5d-47b4-92b7-06dd91f131ca
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/17bc494c-7a5d-47b4-92b7-06dd91f131ca.pid.haproxy
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 17bc494c-7a5d-47b4-92b7-06dd91f131ca
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:23:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.060 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'env', 'PROCESS_TAG=haproxy-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17bc494c-7a5d-47b4-92b7-06dd91f131ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.088 243456 DEBUG nova.compute.manager [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.088 243456 DEBUG oslo_concurrency.lockutils [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.089 243456 DEBUG oslo_concurrency.lockutils [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.089 243456 DEBUG oslo_concurrency.lockutils [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.089 243456 DEBUG nova.compute.manager [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Processing event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.191 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274202.1907961, 027ce924-b530-4917-956c-ab66555058b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.191 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Started (Lifecycle Event)#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.194 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.198 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.202 243456 INFO nova.virt.libvirt.driver [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance spawned successfully.#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.202 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.213 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.222 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.226 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.227 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.228 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.228 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.229 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.230 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.238 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.238 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274202.1910398, 027ce924-b530-4917-956c-ab66555058b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.238 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.257 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.261 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274202.1972303, 027ce924-b530-4917-956c-ab66555058b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.276 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.280 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.287 243456 INFO nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 7.72 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.288 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.355 243456 INFO nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 8.85 seconds to build instance.#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.372 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:22 np0005634017 podman[338093]: 2026-02-28 10:23:22.397671009 +0000 UTC m=+0.056807841 container create 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:23:22 np0005634017 systemd[1]: Started libpod-conmon-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df.scope.
Feb 28 05:23:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 312 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 130 op/s
Feb 28 05:23:22 np0005634017 podman[338093]: 2026-02-28 10:23:22.365890692 +0000 UTC m=+0.025027394 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:23:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba657c3e9cfc3e8a46c61136017aeb0fbea7cdb4ea5ad38579fb28430e14c1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:22 np0005634017 podman[338093]: 2026-02-28 10:23:22.484821494 +0000 UTC m=+0.143958216 container init 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:23:22 np0005634017 podman[338093]: 2026-02-28 10:23:22.48985416 +0000 UTC m=+0.148990882 container start 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:23:22 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : New worker (338115) forked
Feb 28 05:23:22 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : Loading success.
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.599 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Successfully updated port: 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.618 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.618 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.619 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.851 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.988 243456 DEBUG nova.compute.manager [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-changed-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.989 243456 DEBUG nova.compute.manager [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Refreshing instance network info cache due to event network-changed-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:22 np0005634017 nova_compute[243452]: 2026-02-28 10:23:22.989 243456 DEBUG oslo_concurrency.lockutils [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb 28 05:23:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb 28 05:23:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb 28 05:23:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.804 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updating instance_info_cache with network_info: [{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.828 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.828 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance network_info: |[{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.829 243456 DEBUG oslo_concurrency.lockutils [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.829 243456 DEBUG nova.network.neutron [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Refreshing network info cache for port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.833 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start _get_guest_xml network_info=[{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.840 243456 WARNING nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.859 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.861 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.881 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.882 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.883 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.884 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.885 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.886 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.886 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.887 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.887 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.888 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.888 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.889 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.889 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.890 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:23:23 np0005634017 nova_compute[243452]: 2026-02-28 10:23:23.897 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.183 243456 DEBUG nova.compute.manager [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.184 243456 DEBUG oslo_concurrency.lockutils [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.184 243456 DEBUG oslo_concurrency.lockutils [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.184 243456 DEBUG oslo_concurrency.lockutils [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.185 243456 DEBUG nova.compute.manager [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] No waiting events found dispatching network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.185 243456 WARNING nova.compute.manager [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received unexpected event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad for instance with vm_state active and task_state None.#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 312 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 3.6 MiB/s wr, 106 op/s
Feb 28 05:23:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76998934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.527 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.574 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:24 np0005634017 nova_compute[243452]: 2026-02-28 10:23:24.579 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109741320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.114 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.117 243456 DEBUG nova.virt.libvirt.vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=110,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-6jiu7deq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:19Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.117 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.118 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.120 243456 DEBUG nova.objects.instance [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.137 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <uuid>c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4</uuid>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <name>instance-0000006e</name>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-426854263</nova:name>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:23:23</nova:creationTime>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <nova:port uuid="7d930aa0-d51d-4c56-9da2-7f8c1faf8b99">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <entry name="serial">c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4</entry>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <entry name="uuid">c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4</entry>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:3f:12:78"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <target dev="tap7d930aa0-d5"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/console.log" append="off"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:23:25 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:23:25 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:23:25 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:23:25 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.139 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Preparing to wait for external event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.139 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.140 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.140 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.140 243456 DEBUG nova.virt.libvirt.vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=110,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-6jiu7deq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:19Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.141 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.141 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.142 243456 DEBUG os_vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.142 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.143 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.146 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d930aa0-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.147 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d930aa0-d5, col_values=(('external_ids', {'iface-id': '7d930aa0-d51d-4c56-9da2-7f8c1faf8b99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:12:78', 'vm-uuid': 'c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 NetworkManager[49805]: <info>  [1772274205.1495] manager: (tap7d930aa0-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.154 243456 INFO os_vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5')#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.211 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.211 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.212 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:3f:12:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.212 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Using config drive#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.234 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb 28 05:23:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb 28 05:23:25 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb 28 05:23:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:25Z|01109|binding|INFO|Releasing lport c07252a3-f61e-4ad2-8106-a9d8b025e5a7 from this chassis (sb_readonly=0)
Feb 28 05:23:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:25Z|01110|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:23:25 np0005634017 NetworkManager[49805]: <info>  [1772274205.6119] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 NetworkManager[49805]: <info>  [1772274205.6125] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Feb 28 05:23:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:25Z|01111|binding|INFO|Releasing lport c07252a3-f61e-4ad2-8106-a9d8b025e5a7 from this chassis (sb_readonly=0)
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:25Z|01112|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.644 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.788 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Creating config drive at /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.793 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9frouqyp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.867 243456 DEBUG nova.network.neutron [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updated VIF entry in instance network info cache for port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.870 243456 DEBUG nova.network.neutron [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updating instance_info_cache with network_info: [{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.888 243456 DEBUG oslo_concurrency.lockutils [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.931 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9frouqyp" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.963 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:25 np0005634017 nova_compute[243452]: 2026-02-28 10:23:25.970 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.110 243456 DEBUG nova.compute.manager [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.111 243456 DEBUG nova.compute.manager [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing instance network info cache due to event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.112 243456 DEBUG oslo_concurrency.lockutils [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.112 243456 DEBUG oslo_concurrency.lockutils [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.113 243456 DEBUG nova.network.neutron [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.125 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.129 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deleting local config drive /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config because it was imported into RBD.#033[00m
Feb 28 05:23:26 np0005634017 kernel: tap7d930aa0-d5: entered promiscuous mode
Feb 28 05:23:26 np0005634017 NetworkManager[49805]: <info>  [1772274206.1931] manager: (tap7d930aa0-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Feb 28 05:23:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:26Z|01113|binding|INFO|Claiming lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for this chassis.
Feb 28 05:23:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:26Z|01114|binding|INFO|7d930aa0-d51d-4c56-9da2-7f8c1faf8b99: Claiming fa:16:3e:3f:12:78 10.100.0.6
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.213 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:12:78 10.100.0.6'], port_security=['fa:16:3e:3f:12:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.214 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.216 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:23:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:26Z|01115|binding|INFO|Setting lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 ovn-installed in OVS
Feb 28 05:23:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:26Z|01116|binding|INFO|Setting lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 up in Southbound
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca85fa0f-25d8-429d-8d5c-fe84c7cb9b0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:26 np0005634017 systemd-udevd[338261]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:23:26 np0005634017 systemd-machined[209480]: New machine qemu-140-instance-0000006e.
Feb 28 05:23:26 np0005634017 NetworkManager[49805]: <info>  [1772274206.2541] device (tap7d930aa0-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:23:26 np0005634017 systemd[1]: Started Virtual Machine qemu-140-instance-0000006e.
Feb 28 05:23:26 np0005634017 NetworkManager[49805]: <info>  [1772274206.2575] device (tap7d930aa0-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.263 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[21340cc7-d1ed-4596-b778-0420e4e07727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.268 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b691b1c-4926-46eb-936c-1885babf8c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[44300fb3-dfb7-47d7-806c-1ee39122064d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.317 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45b474b8-002d-45d3-a059-99c9be12414c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338273, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8918b9-c17b-4690-8caa-4ab5fded5a6f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338274, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338274, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.334 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.338 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.338 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.339 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.340 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.797 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274206.797625, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.798 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Started (Lifecycle Event)#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.826 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.830 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274206.7983136, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.830 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.858 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.862 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:26 np0005634017 nova_compute[243452]: 2026-02-28 10:23:26.894 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb 28 05:23:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb 28 05:23:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb 28 05:23:27 np0005634017 nova_compute[243452]: 2026-02-28 10:23:27.781 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274192.7799156, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:27 np0005634017 nova_compute[243452]: 2026-02-28 10:23:27.782 243456 INFO nova.compute.manager [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:23:27 np0005634017 nova_compute[243452]: 2026-02-28 10:23:27.818 243456 DEBUG nova.compute.manager [None req-568bc9d8-1746-432a-bc2e-f913d7f213bb - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.250 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Processing event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.252 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.252 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.252 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.253 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.253 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] No waiting events found dispatching network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.253 243456 WARNING nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received unexpected event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.254 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.263 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.264 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274208.2630408, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.264 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.270 243456 INFO nova.virt.libvirt.driver [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance spawned successfully.#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.270 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.295 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.299 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.300 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.300 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.301 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.301 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.302 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.368 243456 INFO nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 8.41 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.368 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.441 243456 INFO nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 9.53 seconds to build instance.#033[00m
Feb 28 05:23:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 267 op/s
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.461 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.679 243456 DEBUG nova.network.neutron [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updated VIF entry in instance network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.680 243456 DEBUG nova.network.neutron [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:28 np0005634017 nova_compute[243452]: 2026-02-28 10:23:28.697 243456 DEBUG oslo_concurrency.lockutils [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:29 np0005634017 nova_compute[243452]: 2026-02-28 10:23:29.032 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:23:29
Feb 28 05:23:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:23:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:23:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'vms']
Feb 28 05:23:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:23:29 np0005634017 nova_compute[243452]: 2026-02-28 10:23:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:29 np0005634017 nova_compute[243452]: 2026-02-28 10:23:29.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb 28 05:23:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb 28 05:23:29 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb 28 05:23:30 np0005634017 nova_compute[243452]: 2026-02-28 10:23:30.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:30 np0005634017 nova_compute[243452]: 2026-02-28 10:23:30.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:30 np0005634017 nova_compute[243452]: 2026-02-28 10:23:30.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.1 MiB/s wr, 399 op/s
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:23:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:23:30 np0005634017 nova_compute[243452]: 2026-02-28 10:23:30.706 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:30 np0005634017 nova_compute[243452]: 2026-02-28 10:23:30.708 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:30 np0005634017 nova_compute[243452]: 2026-02-28 10:23:30.708 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:23:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 29 KiB/s wr, 266 op/s
Feb 28 05:23:32 np0005634017 nova_compute[243452]: 2026-02-28 10:23:32.960 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:32 np0005634017 nova_compute[243452]: 2026-02-28 10:23:32.984 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:32 np0005634017 nova_compute[243452]: 2026-02-28 10:23:32.985 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:23:32 np0005634017 nova_compute[243452]: 2026-02-28 10:23:32.986 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:32 np0005634017 nova_compute[243452]: 2026-02-28 10:23:32.986 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:23:32 np0005634017 nova_compute[243452]: 2026-02-28 10:23:32.987 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.019 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.020 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.020 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.021 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.021 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846847650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.605 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb 28 05:23:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.691 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.692 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:23:33 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.699 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.700 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.704 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.705 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.747 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.748 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.773 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.851 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.852 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.857 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:23:33 np0005634017 nova_compute[243452]: 2026-02-28 10:23:33.858 243456 INFO nova.compute.claims [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:23:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:33Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:49:3b 10.100.0.8
Feb 28 05:23:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:33Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:49:3b 10.100.0.8
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.016 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.017 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3192MB free_disk=59.90021583996713GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.017 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.039 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 26 KiB/s wr, 187 op/s
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479048948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.565 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.573 243456 DEBUG nova.compute.provider_tree [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.593 243456 DEBUG nova.scheduler.client.report [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.629 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.630 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.634 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:23:34 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.737 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.737 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.757 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.761 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 027ce924-b530-4917-956c-ab66555058b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 287f71c6-9068-476b-81e7-da5069ee831f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.763 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.788 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.857 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:34 np0005634017 podman[338509]: 2026-02-28 10:23:34.876968863 +0000 UTC m=+0.062486144 container create dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.894 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.896 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.897 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Creating image(s)#033[00m
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.918 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:34 np0005634017 systemd[1]: Started libpod-conmon-dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f.scope.
Feb 28 05:23:34 np0005634017 podman[338509]: 2026-02-28 10:23:34.848818841 +0000 UTC m=+0.034336132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:23:34 np0005634017 nova_compute[243452]: 2026-02-28 10:23:34.953 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:34 np0005634017 podman[338509]: 2026-02-28 10:23:34.976460745 +0000 UTC m=+0.161978076 container init dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:23:34 np0005634017 podman[338509]: 2026-02-28 10:23:34.98666767 +0000 UTC m=+0.172184921 container start dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:23:34 np0005634017 podman[338509]: 2026-02-28 10:23:34.992316163 +0000 UTC m=+0.177833494 container attach dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:23:34 np0005634017 bold_chatelet[338544]: 167 167
Feb 28 05:23:34 np0005634017 systemd[1]: libpod-dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f.scope: Deactivated successfully.
Feb 28 05:23:34 np0005634017 podman[338509]: 2026-02-28 10:23:34.994516696 +0000 UTC m=+0.180033977 container died dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.010 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.017 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-aad993d6cb4ae4baaa8e7875fd6a24ebcea24ce07ebc9713469f0bc33934587c-merged.mount: Deactivated successfully.
Feb 28 05:23:35 np0005634017 podman[338509]: 2026-02-28 10:23:35.051569983 +0000 UTC m=+0.237087234 container remove dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:23:35 np0005634017 systemd[1]: libpod-conmon-dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f.scope: Deactivated successfully.
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.064 243456 DEBUG nova.policy [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.108 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.109 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.110 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.110 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.140 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.149 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 287f71c6-9068-476b-81e7-da5069ee831f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.202330425 +0000 UTC m=+0.037732881 container create 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:23:35 np0005634017 systemd[1]: Started libpod-conmon-769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82.scope.
Feb 28 05:23:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.1852036 +0000 UTC m=+0.020606076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.290819909 +0000 UTC m=+0.126222415 container init 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.299212001 +0000 UTC m=+0.134614467 container start 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.307290844 +0000 UTC m=+0.142693300 container attach 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.428 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 287f71c6-9068-476b-81e7-da5069ee831f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935840045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.483 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.490 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.522 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.542 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.575 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.575 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.580 243456 DEBUG nova.objects.instance [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 287f71c6-9068-476b-81e7-da5069ee831f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.596 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.597 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Ensure instance console log exists: /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.598 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.598 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.599 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:35 np0005634017 unruffled_montalcini[338677]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:23:35 np0005634017 unruffled_montalcini[338677]: --> All data devices are unavailable
Feb 28 05:23:35 np0005634017 systemd[1]: libpod-769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82.scope: Deactivated successfully.
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.763809291 +0000 UTC m=+0.599211747 container died 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 05:23:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3-merged.mount: Deactivated successfully.
Feb 28 05:23:35 np0005634017 podman[338645]: 2026-02-28 10:23:35.805542375 +0000 UTC m=+0.640944831 container remove 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:23:35 np0005634017 nova_compute[243452]: 2026-02-28 10:23:35.811 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Successfully created port: caccf6a2-afd7-48b7-b262-bb7a3178d25c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:23:35 np0005634017 systemd[1]: libpod-conmon-769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82.scope: Deactivated successfully.
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.194262575 +0000 UTC m=+0.047582524 container create 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:23:36 np0005634017 systemd[1]: Started libpod-conmon-65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807.scope.
Feb 28 05:23:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.270971859 +0000 UTC m=+0.124291768 container init 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.176870693 +0000 UTC m=+0.030190582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.280261727 +0000 UTC m=+0.133581586 container start 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.284100548 +0000 UTC m=+0.137420417 container attach 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:23:36 np0005634017 magical_wilbur[338865]: 167 167
Feb 28 05:23:36 np0005634017 systemd[1]: libpod-65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807.scope: Deactivated successfully.
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.285554 +0000 UTC m=+0.138873919 container died 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 28 05:23:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3c9d92d27e23debae8679b44e4517355e5ef78bcab6f263399df7c2e0696d69b-merged.mount: Deactivated successfully.
Feb 28 05:23:36 np0005634017 podman[338849]: 2026-02-28 10:23:36.328688435 +0000 UTC m=+0.182008304 container remove 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:23:36 np0005634017 systemd[1]: libpod-conmon-65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807.scope: Deactivated successfully.
Feb 28 05:23:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 345 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 228 op/s
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.525195857 +0000 UTC m=+0.062792413 container create b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:23:36 np0005634017 systemd[1]: Started libpod-conmon-b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e.scope.
Feb 28 05:23:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.498488526 +0000 UTC m=+0.036085152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.619049026 +0000 UTC m=+0.156645622 container init b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.626850351 +0000 UTC m=+0.164446907 container start b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.63061567 +0000 UTC m=+0.168212246 container attach b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.675 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Successfully updated port: caccf6a2-afd7-48b7-b262-bb7a3178d25c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.692 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.693 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.693 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.860 243456 DEBUG nova.compute.manager [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-changed-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.861 243456 DEBUG nova.compute.manager [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Refreshing instance network info cache due to event network-changed-caccf6a2-afd7-48b7-b262-bb7a3178d25c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.861 243456 DEBUG oslo_concurrency.lockutils [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:36 np0005634017 happy_liskov[338906]: {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:    "0": [
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:        {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "devices": [
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "/dev/loop3"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            ],
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_name": "ceph_lv0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_size": "21470642176",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "name": "ceph_lv0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "tags": {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cluster_name": "ceph",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.crush_device_class": "",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.encrypted": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.objectstore": "bluestore",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osd_id": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.type": "block",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.vdo": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.with_tpm": "0"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            },
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "type": "block",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "vg_name": "ceph_vg0"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:        }
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:    ],
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:    "1": [
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:        {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "devices": [
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "/dev/loop4"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            ],
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_name": "ceph_lv1",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_size": "21470642176",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "name": "ceph_lv1",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "tags": {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cluster_name": "ceph",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.crush_device_class": "",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.encrypted": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.objectstore": "bluestore",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osd_id": "1",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.type": "block",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.vdo": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.with_tpm": "0"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            },
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "type": "block",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "vg_name": "ceph_vg1"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:        }
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:    ],
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:    "2": [
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:        {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "devices": [
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "/dev/loop5"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            ],
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_name": "ceph_lv2",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_size": "21470642176",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "name": "ceph_lv2",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "tags": {
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.cluster_name": "ceph",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.crush_device_class": "",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.encrypted": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.objectstore": "bluestore",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osd_id": "2",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.type": "block",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.vdo": "0",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:                "ceph.with_tpm": "0"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            },
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "type": "block",
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:            "vg_name": "ceph_vg2"
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:        }
Feb 28 05:23:36 np0005634017 happy_liskov[338906]:    ]
Feb 28 05:23:36 np0005634017 happy_liskov[338906]: }
Feb 28 05:23:36 np0005634017 systemd[1]: libpod-b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e.scope: Deactivated successfully.
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.943743738 +0000 UTC m=+0.481340294 container died b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:23:36 np0005634017 nova_compute[243452]: 2026-02-28 10:23:36.952 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:23:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f-merged.mount: Deactivated successfully.
Feb 28 05:23:36 np0005634017 podman[338889]: 2026-02-28 10:23:36.997078547 +0000 UTC m=+0.534675103 container remove b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:23:37 np0005634017 systemd[1]: libpod-conmon-b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e.scope: Deactivated successfully.
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.529436762 +0000 UTC m=+0.069848167 container create be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 28 05:23:37 np0005634017 systemd[1]: Started libpod-conmon-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope.
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.497923793 +0000 UTC m=+0.038335218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:23:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.627912835 +0000 UTC m=+0.168324250 container init be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.639835549 +0000 UTC m=+0.180246974 container start be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.643439243 +0000 UTC m=+0.183850698 container attach be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:23:37 np0005634017 nervous_poitras[339007]: 167 167
Feb 28 05:23:37 np0005634017 systemd[1]: libpod-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope: Deactivated successfully.
Feb 28 05:23:37 np0005634017 conmon[339007]: conmon be3494a5f590ce0063e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope/container/memory.events
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.64716343 +0000 UTC m=+0.187574845 container died be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:23:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ee0f9542274fae20aa7371279d0cb5c4315750a159557935074f32c7fa1b498a-merged.mount: Deactivated successfully.
Feb 28 05:23:37 np0005634017 podman[338990]: 2026-02-28 10:23:37.683147699 +0000 UTC m=+0.223559134 container remove be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:23:37 np0005634017 systemd[1]: libpod-conmon-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope: Deactivated successfully.
Feb 28 05:23:37 np0005634017 podman[339029]: 2026-02-28 10:23:37.845611188 +0000 UTC m=+0.051662442 container create e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:23:37 np0005634017 systemd[1]: Started libpod-conmon-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope.
Feb 28 05:23:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:23:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:23:37 np0005634017 podman[339029]: 2026-02-28 10:23:37.827203327 +0000 UTC m=+0.033254531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:23:37 np0005634017 podman[339029]: 2026-02-28 10:23:37.941587478 +0000 UTC m=+0.147638652 container init e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:23:37 np0005634017 podman[339029]: 2026-02-28 10:23:37.955518771 +0000 UTC m=+0.161569925 container start e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:23:37 np0005634017 podman[339029]: 2026-02-28 10:23:37.959766303 +0000 UTC m=+0.165817487 container attach e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.011 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updating instance_info_cache with network_info: [{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.034 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.035 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance network_info: |[{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.036 243456 DEBUG oslo_concurrency.lockutils [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.037 243456 DEBUG nova.network.neutron [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Refreshing network info cache for port caccf6a2-afd7-48b7-b262-bb7a3178d25c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.042 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start _get_guest_xml network_info=[{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.049 243456 WARNING nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.055 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.058 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.068 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.070 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.071 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.071 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.072 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.073 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.074 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.074 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.075 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.075 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.076 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.076 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.077 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.078 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.083 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 404 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 227 op/s
Feb 28 05:23:38 np0005634017 lvm[339145]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:23:38 np0005634017 lvm[339144]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:23:38 np0005634017 lvm[339145]: VG ceph_vg1 finished
Feb 28 05:23:38 np0005634017 lvm[339144]: VG ceph_vg0 finished
Feb 28 05:23:38 np0005634017 lvm[339147]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:23:38 np0005634017 lvm[339147]: VG ceph_vg2 finished
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806269009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.635 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.660 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:38 np0005634017 musing_perlman[339046]: {}
Feb 28 05:23:38 np0005634017 nova_compute[243452]: 2026-02-28 10:23:38.668 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb 28 05:23:38 np0005634017 systemd[1]: libpod-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope: Deactivated successfully.
Feb 28 05:23:38 np0005634017 systemd[1]: libpod-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope: Consumed 1.046s CPU time.
Feb 28 05:23:38 np0005634017 podman[339029]: 2026-02-28 10:23:38.694401787 +0000 UTC m=+0.900452941 container died e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb 28 05:23:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b-merged.mount: Deactivated successfully.
Feb 28 05:23:38 np0005634017 podman[339029]: 2026-02-28 10:23:38.77347756 +0000 UTC m=+0.979528714 container remove e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:23:38 np0005634017 systemd[1]: libpod-conmon-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope: Deactivated successfully.
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:23:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23880949' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.243 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.246 243456 DEBUG nova.virt.libvirt.vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=111,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-20wtorxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:34Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=287f71c6-9068-476b-81e7-da5069ee831f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.246 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.248 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.250 243456 DEBUG nova.objects.instance [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 287f71c6-9068-476b-81e7-da5069ee831f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.274 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <uuid>287f71c6-9068-476b-81e7-da5069ee831f</uuid>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <name>instance-0000006f</name>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-426854263</nova:name>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:23:38</nova:creationTime>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <nova:port uuid="caccf6a2-afd7-48b7-b262-bb7a3178d25c">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <entry name="serial">287f71c6-9068-476b-81e7-da5069ee831f</entry>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <entry name="uuid">287f71c6-9068-476b-81e7-da5069ee831f</entry>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/287f71c6-9068-476b-81e7-da5069ee831f_disk">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/287f71c6-9068-476b-81e7-da5069ee831f_disk.config">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:08:b6:0f"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <target dev="tapcaccf6a2-af"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/console.log" append="off"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:23:39 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:23:39 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:23:39 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:23:39 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.275 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Preparing to wait for external event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.276 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.277 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.278 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.280 243456 DEBUG nova.virt.libvirt.vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=111,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-20wtorxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:34Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=287f71c6-9068-476b-81e7-da5069ee831f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.280 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.282 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.283 243456 DEBUG os_vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.285 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.286 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaccf6a2-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.293 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcaccf6a2-af, col_values=(('external_ids', {'iface-id': 'caccf6a2-afd7-48b7-b262-bb7a3178d25c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:b6:0f', 'vm-uuid': '287f71c6-9068-476b-81e7-da5069ee831f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:39 np0005634017 NetworkManager[49805]: <info>  [1772274219.2974] manager: (tapcaccf6a2-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.295 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.304 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.305 243456 INFO os_vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af')#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.358 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.359 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.359 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:08:b6:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.360 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Using config drive#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.391 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:23:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.570 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.944 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Creating config drive at /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config#033[00m
Feb 28 05:23:39 np0005634017 nova_compute[243452]: 2026-02-28 10:23:39.948 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpti1l9q65 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.084 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpti1l9q65" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.122 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.126 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config 287f71c6-9068-476b-81e7-da5069ee831f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:40Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:12:78 10.100.0.6
Feb 28 05:23:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:40Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:12:78 10.100.0.6
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.265 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config 287f71c6-9068-476b-81e7-da5069ee831f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.266 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deleting local config drive /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config because it was imported into RBD.#033[00m
Feb 28 05:23:40 np0005634017 kernel: tapcaccf6a2-af: entered promiscuous mode
Feb 28 05:23:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:40Z|01117|binding|INFO|Claiming lport caccf6a2-afd7-48b7-b262-bb7a3178d25c for this chassis.
Feb 28 05:23:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:40Z|01118|binding|INFO|caccf6a2-afd7-48b7-b262-bb7a3178d25c: Claiming fa:16:3e:08:b6:0f 10.100.0.3
Feb 28 05:23:40 np0005634017 systemd-udevd[339142]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:40 np0005634017 NetworkManager[49805]: <info>  [1772274220.3007] manager: (tapcaccf6a2-af): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.308 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b6:0f 10.100.0.3'], port_security=['fa:16:3e:08:b6:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '287f71c6-9068-476b-81e7-da5069ee831f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=caccf6a2-afd7-48b7-b262-bb7a3178d25c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:40Z|01119|binding|INFO|Setting lport caccf6a2-afd7-48b7-b262-bb7a3178d25c ovn-installed in OVS
Feb 28 05:23:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:40Z|01120|binding|INFO|Setting lport caccf6a2-afd7-48b7-b262-bb7a3178d25c up in Southbound
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.311 156681 INFO neutron.agent.ovn.metadata.agent [-] Port caccf6a2-afd7-48b7-b262-bb7a3178d25c in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.316 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:23:40 np0005634017 NetworkManager[49805]: <info>  [1772274220.3206] device (tapcaccf6a2-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:23:40 np0005634017 NetworkManager[49805]: <info>  [1772274220.3232] device (tapcaccf6a2-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.339 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[886d0297-a90c-4a31-9607-7ad7b7e7167f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:40 np0005634017 systemd-machined[209480]: New machine qemu-141-instance-0000006f.
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.355 243456 DEBUG nova.network.neutron [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updated VIF entry in instance network info cache for port caccf6a2-afd7-48b7-b262-bb7a3178d25c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.356 243456 DEBUG nova.network.neutron [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updating instance_info_cache with network_info: [{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:40 np0005634017 systemd[1]: Started Virtual Machine qemu-141-instance-0000006f.
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.375 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9be823-b867-496b-9571-5e7f60473cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.376 243456 DEBUG oslo_concurrency.lockutils [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.378 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[05c0af0e-eb22-497b-a479-a901e19930fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.402 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[33dd5142-de2f-4db3-8ddb-5fbe4df6c7f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23bdddf5-c206-4fd9-a83d-0145bf604361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339312, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.429 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c57fa506-8838-403b-b6cb-9a24e44ec30a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339313, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339313, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.431 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.436 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.436 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.437 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.438 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 417 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 818 KiB/s rd, 7.6 MiB/s wr, 187 op/s
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.738 243456 DEBUG nova.compute.manager [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG oslo_concurrency.lockutils [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG oslo_concurrency.lockutils [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG oslo_concurrency.lockutils [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG nova.compute.manager [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Processing event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.991 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274220.9909859, 287f71c6-9068-476b-81e7-da5069ee831f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.992 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Started (Lifecycle Event)#033[00m
Feb 28 05:23:40 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.995 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:40.999 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.003 243456 INFO nova.virt.libvirt.driver [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance spawned successfully.#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.004 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024429714104513985 of space, bias 1.0, pg target 0.7328914231354196 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024929289132533243 of space, bias 1.0, pg target 0.7478786739759973 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.388228013846623e-07 of space, bias 4.0, pg target 0.0008865873616615948 quantized to 16 (current 16)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:23:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.030 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.036 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.040 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.040 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.041 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.041 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.042 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.042 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.066 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.066 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274220.991178, 287f71c6-9068-476b-81e7-da5069ee831f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.067 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.092 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.097 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274220.999, 287f71c6-9068-476b-81e7-da5069ee831f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.097 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.105 243456 INFO nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 6.21 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.105 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.114 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.117 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.139 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.177 243456 INFO nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 7.36 seconds to build instance.#033[00m
Feb 28 05:23:41 np0005634017 nova_compute[243452]: 2026-02-28 10:23:41.193 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 430 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Feb 28 05:23:42 np0005634017 nova_compute[243452]: 2026-02-28 10:23:42.948 243456 DEBUG nova.compute.manager [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:42 np0005634017 nova_compute[243452]: 2026-02-28 10:23:42.949 243456 DEBUG oslo_concurrency.lockutils [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:42 np0005634017 nova_compute[243452]: 2026-02-28 10:23:42.949 243456 DEBUG oslo_concurrency.lockutils [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:42 np0005634017 nova_compute[243452]: 2026-02-28 10:23:42.950 243456 DEBUG oslo_concurrency.lockutils [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:42 np0005634017 nova_compute[243452]: 2026-02-28 10:23:42.951 243456 DEBUG nova.compute.manager [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] No waiting events found dispatching network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:42 np0005634017 nova_compute[243452]: 2026-02-28 10:23:42.951 243456 WARNING nova.compute.manager [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received unexpected event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c for instance with vm_state active and task_state None.#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.634 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.636 243456 INFO nova.compute.manager [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Terminating instance#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.637 243456 DEBUG nova.compute.manager [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:23:43 np0005634017 kernel: tapcaccf6a2-af (unregistering): left promiscuous mode
Feb 28 05:23:43 np0005634017 NetworkManager[49805]: <info>  [1772274223.6798] device (tapcaccf6a2-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:43Z|01121|binding|INFO|Releasing lport caccf6a2-afd7-48b7-b262-bb7a3178d25c from this chassis (sb_readonly=0)
Feb 28 05:23:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:43Z|01122|binding|INFO|Setting lport caccf6a2-afd7-48b7-b262-bb7a3178d25c down in Southbound
Feb 28 05:23:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:43Z|01123|binding|INFO|Removing iface tapcaccf6a2-af ovn-installed in OVS
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.703 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b6:0f 10.100.0.3'], port_security=['fa:16:3e:08:b6:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '287f71c6-9068-476b-81e7-da5069ee831f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=caccf6a2-afd7-48b7-b262-bb7a3178d25c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.705 156681 INFO neutron.agent.ovn.metadata.agent [-] Port caccf6a2-afd7-48b7-b262-bb7a3178d25c in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.718793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223718826, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2027, "num_deletes": 257, "total_data_size": 3076728, "memory_usage": 3117272, "flush_reason": "Manual Compaction"}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.720 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d49c28fa-d0a7-46b5-97f0-23d5951e5a22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223732032, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3020885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35417, "largest_seqno": 37443, "table_properties": {"data_size": 3011826, "index_size": 5553, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19289, "raw_average_key_size": 20, "raw_value_size": 2993322, "raw_average_value_size": 3131, "num_data_blocks": 245, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274031, "oldest_key_time": 1772274031, "file_creation_time": 1772274223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13319 microseconds, and 4726 cpu microseconds.
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:23:43 np0005634017 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Feb 28 05:23:43 np0005634017 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Consumed 3.347s CPU time.
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.732107) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3020885 bytes OK
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.732130) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736260) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736289) EVENT_LOG_v1 {"time_micros": 1772274223736281, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736315) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3068114, prev total WAL file size 3068114, number of live WAL files 2.
Feb 28 05:23:43 np0005634017 systemd-machined[209480]: Machine qemu-141-instance-0000006f terminated.
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.737044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323531' seq:72057594037927935, type:22 .. '6C6F676D0031353033' seq:0, type:0; will stop at (end)
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2950KB)], [77(9523KB)]
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223737090, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12772448, "oldest_snapshot_seqno": -1}
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.741 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eba1a2bf-5f43-4815-964f-2516798addb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.745 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ca645480-7fc0-46b3-ae1b-3d820faf6e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.768 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d9e78c-c270-4670-b24d-49a1e270240c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71597038-8ffb-4074-9cc4-ebfc28f98e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339369, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6548 keys, 12639637 bytes, temperature: kUnknown
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223784919, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12639637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12591031, "index_size": 31150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 164528, "raw_average_key_size": 25, "raw_value_size": 12469170, "raw_average_value_size": 1904, "num_data_blocks": 1268, "num_entries": 6548, "num_filter_entries": 6548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.785195) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12639637 bytes
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.786612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 266.6 rd, 263.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.3 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 7078, records dropped: 530 output_compression: NoCompression
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.786628) EVENT_LOG_v1 {"time_micros": 1772274223786620, "job": 44, "event": "compaction_finished", "compaction_time_micros": 47908, "compaction_time_cpu_micros": 24696, "output_level": 6, "num_output_files": 1, "total_output_size": 12639637, "num_input_records": 7078, "num_output_records": 6548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223786927, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223788346, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c62c3965-3e98-44ce-9159-f8133fad299f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339370, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339370, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.798 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.805 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.805 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.806 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.806 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.870 243456 INFO nova.virt.libvirt.driver [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance destroyed successfully.#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.870 243456 DEBUG nova.objects.instance [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 287f71c6-9068-476b-81e7-da5069ee831f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.885 243456 DEBUG nova.virt.libvirt.vif [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=111,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-20wtorxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:41Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=287f71c6-9068-476b-81e7-da5069ee831f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.885 243456 DEBUG nova.network.os_vif_util [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.886 243456 DEBUG nova.network.os_vif_util [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.886 243456 DEBUG os_vif [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.888 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaccf6a2-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:43 np0005634017 nova_compute[243452]: 2026-02-28 10:23:43.894 243456 INFO os_vif [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af')#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.320 243456 INFO nova.virt.libvirt.driver [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deleting instance files /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f_del#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.321 243456 INFO nova.virt.libvirt.driver [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deletion of /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f_del complete#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.380 243456 INFO nova.compute.manager [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.380 243456 DEBUG oslo.service.loopingcall [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.381 243456 DEBUG nova.compute.manager [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:23:44 np0005634017 nova_compute[243452]: 2026-02-28 10:23:44.381 243456 DEBUG nova.network.neutron [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:23:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 430 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 256 op/s
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.110 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-unplugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.110 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.110 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.111 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.111 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] No waiting events found dispatching network-vif-unplugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-unplugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.113 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] No waiting events found dispatching network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:23:45 np0005634017 nova_compute[243452]: 2026-02-28 10:23:45.113 243456 WARNING nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received unexpected event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c for instance with vm_state active and task_state deleting.
Feb 28 05:23:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:23:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2259821455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:23:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:23:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2259821455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.004 243456 DEBUG nova.network.neutron [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.033 243456 INFO nova.compute.manager [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 1.65 seconds to deallocate network for instance.
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.080 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.081 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.146 243456 DEBUG nova.compute.manager [req-f649b28a-0b9c-409b-9ac6-961d1f40dcb5 req-f5227968-cda9-4847-9f9c-0b1879628298 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-deleted-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.205 243456 DEBUG oslo_concurrency.processutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:23:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 429 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.2 MiB/s wr, 239 op/s
Feb 28 05:23:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083545794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.799 243456 DEBUG oslo_concurrency.processutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.808 243456 DEBUG nova.compute.provider_tree [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.840 243456 DEBUG nova.scheduler.client.report [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.880 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.882 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.884 243456 INFO nova.compute.manager [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Terminating instance
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.885 243456 DEBUG nova.compute.manager [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 05:23:46 np0005634017 kernel: tap415ef63e-f3 (unregistering): left promiscuous mode
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.936 243456 INFO nova.scheduler.client.report [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 287f71c6-9068-476b-81e7-da5069ee831f
Feb 28 05:23:46 np0005634017 NetworkManager[49805]: <info>  [1772274226.9436] device (tap415ef63e-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:23:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:46Z|01124|binding|INFO|Releasing lport 415ef63e-f355-4d3a-a625-bee99de661ad from this chassis (sb_readonly=0)
Feb 28 05:23:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:46Z|01125|binding|INFO|Setting lport 415ef63e-f355-4d3a-a625-bee99de661ad down in Southbound
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:46Z|01126|binding|INFO|Removing iface tap415ef63e-f3 ovn-installed in OVS
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.965 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:49:3b 10.100.0.8'], port_security=['fa:16:3e:01:49:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '027ce924-b530-4917-956c-ab66555058b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ef1d2e7-20e3-4f13-87f1-05e5f962f01b 2651d5ee-6558-4826-a4a8-35378d050e16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1398b86-8c3c-44c8-8c2b-3601475652eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=415ef63e-f355-4d3a-a625-bee99de661ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:23:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.967 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 415ef63e-f355-4d3a-a625-bee99de661ad in datapath 17bc494c-7a5d-47b4-92b7-06dd91f131ca unbound from our chassis
Feb 28 05:23:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.968 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17bc494c-7a5d-47b4-92b7-06dd91f131ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 05:23:46 np0005634017 nova_compute[243452]: 2026-02-28 10:23:46.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2be31753-85ab-475a-a57f-c53ea3157709]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:23:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.969 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca namespace which is not needed anymore
Feb 28 05:23:47 np0005634017 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Feb 28 05:23:47 np0005634017 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Consumed 12.305s CPU time.
Feb 28 05:23:47 np0005634017 systemd-machined[209480]: Machine qemu-139-instance-0000006d terminated.
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.015 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.119 243456 INFO nova.virt.libvirt.driver [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance destroyed successfully.
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.119 243456 DEBUG nova.objects.instance [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 027ce924-b530-4917-956c-ab66555058b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.136 243456 DEBUG nova.virt.libvirt.vif [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=109,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9wU8YY0rYLr+FY6zo9NLT62DKW4Etx3PelXGTWlh9GbKMYWy7V2liaoU6eFO4swsQU7hFfqtc0g/T8jGlbB4RbgIdTi8pbxGSqSx76HlaAvQoV4NMlMuhy55HuiLv7yw==',key_name='tempest-TestSecurityGroupsBasicOps-1207522769',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-ag0oqgtp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:22Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=027ce924-b530-4917-956c-ab66555058b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.137 243456 DEBUG nova.network.os_vif_util [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.138 243456 DEBUG nova.network.os_vif_util [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.139 243456 DEBUG os_vif [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 05:23:47 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : haproxy version is 2.8.14-c23fe91
Feb 28 05:23:47 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : path to executable is /usr/sbin/haproxy
Feb 28 05:23:47 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [WARNING]  (338113) : Exiting Master process...
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:47 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [ALERT]    (338113) : Current worker (338115) exited with code 143 (Terminated)
Feb 28 05:23:47 np0005634017 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [WARNING]  (338113) : All workers exited. Exiting... (0)
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.142 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap415ef63e-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:23:47 np0005634017 systemd[1]: libpod-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df.scope: Deactivated successfully.
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 05:23:47 np0005634017 podman[339446]: 2026-02-28 10:23:47.152193879 +0000 UTC m=+0.067765787 container died 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.152 243456 INFO os_vif [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3')
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.206 243456 DEBUG nova.compute.manager [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-unplugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.207 243456 DEBUG oslo_concurrency.lockutils [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.207 243456 DEBUG oslo_concurrency.lockutils [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.207 243456 DEBUG oslo_concurrency.lockutils [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.208 243456 DEBUG nova.compute.manager [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] No waiting events found dispatching network-vif-unplugged-415ef63e-f355-4d3a-a625-bee99de661ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.208 243456 DEBUG nova.compute.manager [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-unplugged-415ef63e-f355-4d3a-a625-bee99de661ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 05:23:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fba657c3e9cfc3e8a46c61136017aeb0fbea7cdb4ea5ad38579fb28430e14c1d-merged.mount: Deactivated successfully.
Feb 28 05:23:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df-userdata-shm.mount: Deactivated successfully.
Feb 28 05:23:47 np0005634017 podman[339446]: 2026-02-28 10:23:47.242555267 +0000 UTC m=+0.158127165 container cleanup 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:23:47 np0005634017 systemd[1]: libpod-conmon-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df.scope: Deactivated successfully.
Feb 28 05:23:47 np0005634017 podman[339503]: 2026-02-28 10:23:47.314466483 +0000 UTC m=+0.053565597 container remove 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba5b8f7-acb9-48bf-b990-5cf4875f3797]: (4, ('Sat Feb 28 10:23:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca (6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df)\n6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df\nSat Feb 28 10:23:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca (6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df)\n6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.323 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42c0b989-a549-41a1-8c7e-2328881d8cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.325 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bc494c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:47 np0005634017 kernel: tap17bc494c-70: left promiscuous mode
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.328 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.335 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[21e46b6e-55c4-4026-9912-df01b6fd195c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.357 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04124746-b45b-4c7c-a524-d59d58419e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a01b9-f84a-4097-972f-9c3ea88a6353]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.375 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[080919f1-aa1c-493a-9e59-6dc155bb73b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570913, 'reachable_time': 39527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339519, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 systemd[1]: run-netns-ovnmeta\x2d17bc494c\x2d7a5d\x2d47b4\x2d92b7\x2d06dd91f131ca.mount: Deactivated successfully.
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.379 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:23:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.380 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8f711cb3-b74d-4e0d-98ef-ff4cde7775be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.496 243456 INFO nova.virt.libvirt.driver [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deleting instance files /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0_del#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.497 243456 INFO nova.virt.libvirt.driver [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deletion of /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0_del complete#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.575 243456 INFO nova.compute.manager [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.575 243456 DEBUG oslo.service.loopingcall [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.575 243456 DEBUG nova.compute.manager [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:23:47 np0005634017 nova_compute[243452]: 2026-02-28 10:23:47.576 243456 DEBUG nova.network.neutron [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.359 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.360 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.360 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.360 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.361 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.362 243456 INFO nova.compute.manager [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Terminating instance#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.363 243456 DEBUG nova.compute.manager [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:23:48 np0005634017 kernel: tap7d930aa0-d5 (unregistering): left promiscuous mode
Feb 28 05:23:48 np0005634017 NetworkManager[49805]: <info>  [1772274228.4154] device (tap7d930aa0-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:23:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:48Z|01127|binding|INFO|Releasing lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 from this chassis (sb_readonly=0)
Feb 28 05:23:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:48Z|01128|binding|INFO|Setting lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 down in Southbound
Feb 28 05:23:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:48Z|01129|binding|INFO|Removing iface tap7d930aa0-d5 ovn-installed in OVS
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.431 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:12:78 10.100.0.6'], port_security=['fa:16:3e:3f:12:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.432 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.434 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Feb 28 05:23:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 354 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Feb 28 05:23:48 np0005634017 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Consumed 12.274s CPU time.
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d3d1b2-c378-49d5-b065-074a62a6ef89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:48 np0005634017 systemd-machined[209480]: Machine qemu-140-instance-0000006e terminated.
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.473 243456 DEBUG nova.compute.manager [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.473 243456 DEBUG nova.compute.manager [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing instance network info cache due to event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.474 243456 DEBUG oslo_concurrency.lockutils [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.474 243456 DEBUG oslo_concurrency.lockutils [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.474 243456 DEBUG nova.network.neutron [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.484 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[27a5f58a-c400-4183-8f74-1a50b1085339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.489 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[81c29694-abde-492d-927c-025c4674a865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:48 np0005634017 podman[339525]: 2026-02-28 10:23:48.506485888 +0000 UTC m=+0.059598641 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.518 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3580e880-dea3-4876-b944-ccd40326fcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:48 np0005634017 podman[339520]: 2026-02-28 10:23:48.539764618 +0000 UTC m=+0.094859769 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.538 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b665ad14-475d-4e69-b6c1-e47d9e220b38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339568, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.554 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de678895-58b0-460f-adc4-ec5c68d03a25]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339570, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339570, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.557 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.566 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.566 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.603 243456 INFO nova.virt.libvirt.driver [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance destroyed successfully.#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.604 243456 DEBUG nova.objects.instance [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.634 243456 DEBUG nova.virt.libvirt.vif [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=110,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-6jiu7deq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:28Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.634 243456 DEBUG nova.network.os_vif_util [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.635 243456 DEBUG nova.network.os_vif_util [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.635 243456 DEBUG os_vif [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.637 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d930aa0-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.641 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.644 243456 INFO os_vif [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5')#033[00m
Feb 28 05:23:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.908 243456 INFO nova.virt.libvirt.driver [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deleting instance files /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_del#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.909 243456 INFO nova.virt.libvirt.driver [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deletion of /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_del complete#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.965 243456 INFO nova.compute.manager [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.967 243456 DEBUG oslo.service.loopingcall [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.967 243456 DEBUG nova.compute.manager [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:23:48 np0005634017 nova_compute[243452]: 2026-02-28 10:23:48.967 243456 DEBUG nova.network.neutron [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.516 243456 DEBUG nova.compute.manager [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.517 243456 DEBUG oslo_concurrency.lockutils [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.517 243456 DEBUG oslo_concurrency.lockutils [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.518 243456 DEBUG oslo_concurrency.lockutils [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.518 243456 DEBUG nova.compute.manager [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] No waiting events found dispatching network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.518 243456 WARNING nova.compute.manager [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received unexpected event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.677 243456 DEBUG nova.network.neutron [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.679 243456 DEBUG nova.network.neutron [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.708 243456 INFO nova.compute.manager [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 0.74 seconds to deallocate network for instance.#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.718 243456 INFO nova.compute.manager [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 2.14 seconds to deallocate network for instance.#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.811 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.811 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.844 243456 DEBUG nova.network.neutron [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updated VIF entry in instance network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.845 243456 DEBUG nova.network.neutron [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.891 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:49 np0005634017 nova_compute[243452]: 2026-02-28 10:23:49.892 243456 DEBUG oslo_concurrency.processutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.045 243456 DEBUG oslo_concurrency.lockutils [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139069359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.445 243456 DEBUG oslo_concurrency.processutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.451 243456 DEBUG nova.compute.provider_tree [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 265 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.498 243456 DEBUG nova.scheduler.client.report [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.521094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230521165, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 327, "num_deletes": 251, "total_data_size": 130205, "memory_usage": 136312, "flush_reason": "Manual Compaction"}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230524670, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 129028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37444, "largest_seqno": 37770, "table_properties": {"data_size": 126938, "index_size": 254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5276, "raw_average_key_size": 18, "raw_value_size": 122872, "raw_average_value_size": 429, "num_data_blocks": 12, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274224, "oldest_key_time": 1772274224, "file_creation_time": 1772274230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 3607 microseconds, and 1156 cpu microseconds.
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.524717) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 129028 bytes OK
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.524734) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526286) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526302) EVENT_LOG_v1 {"time_micros": 1772274230526297, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526320) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 127929, prev total WAL file size 127929, number of live WAL files 2.
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(126KB)], [80(12MB)]
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230526863, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12768665, "oldest_snapshot_seqno": -1}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6325 keys, 11164506 bytes, temperature: kUnknown
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230583182, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11164506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11118662, "index_size": 28902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 160585, "raw_average_key_size": 25, "raw_value_size": 11001969, "raw_average_value_size": 1739, "num_data_blocks": 1165, "num_entries": 6325, "num_filter_entries": 6325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.583499) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11164506 bytes
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.584984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.4 rd, 197.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.1 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(185.5) write-amplify(86.5) OK, records in: 6834, records dropped: 509 output_compression: NoCompression
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.585014) EVENT_LOG_v1 {"time_micros": 1772274230585000, "job": 46, "event": "compaction_finished", "compaction_time_micros": 56406, "compaction_time_cpu_micros": 34816, "output_level": 6, "num_output_files": 1, "total_output_size": 11164506, "num_input_records": 6834, "num_output_records": 6325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230585214, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230586860, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.586997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:50 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.719 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.723 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.765 243456 INFO nova.scheduler.client.report [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4#033[00m
Feb 28 05:23:50 np0005634017 nova_compute[243452]: 2026-02-28 10:23:50.789 243456 DEBUG oslo_concurrency.processutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.153 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.201 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-unplugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.202 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.202 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.203 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.203 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] No waiting events found dispatching network-vif-unplugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.204 243456 WARNING nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received unexpected event network-vif-unplugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.204 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.204 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.205 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.205 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.206 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] No waiting events found dispatching network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.207 243456 WARNING nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received unexpected event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.207 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-deleted-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.208 243456 INFO nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Neutron deleted interface 415ef63e-f355-4d3a-a625-bee99de661ad; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.208 243456 DEBUG nova.network.neutron [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.235 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Detach interface failed, port_id=415ef63e-f355-4d3a-a625-bee99de661ad, reason: Instance 027ce924-b530-4917-956c-ab66555058b0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:23:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397732749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.419 243456 DEBUG oslo_concurrency.processutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.426 243456 DEBUG nova.compute.provider_tree [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.445 243456 DEBUG nova.scheduler.client.report [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.474 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.517 243456 INFO nova.scheduler.client.report [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 027ce924-b530-4917-956c-ab66555058b0#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.608 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:51 np0005634017 nova_compute[243452]: 2026-02-28 10:23:51.681 243456 DEBUG nova.compute.manager [req-cc62da77-a5d3-48ab-bf0b-7688e55ab0a4 req-e5a4d357-2c71-49a9-b878-70a92999b43d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-deleted-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 187 op/s
Feb 28 05:23:53 np0005634017 nova_compute[243452]: 2026-02-28 10:23:53.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 35 KiB/s wr, 99 op/s
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.614 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.615 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.636 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.718 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.719 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.729 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.730 243456 INFO nova.compute.claims [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:23:54 np0005634017 nova_compute[243452]: 2026-02-28 10:23:54.874 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:23:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660935122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.427 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.436 243456 DEBUG nova.compute.provider_tree [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.456 243456 DEBUG nova.scheduler.client.report [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.484 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.485 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.545 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.546 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.570 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.590 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.682 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.685 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.687 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Creating image(s)#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.721 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.752 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.782 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.787 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.842 243456 DEBUG nova.policy [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.878 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.880 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.881 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.881 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.908 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:55 np0005634017 nova_compute[243452]: 2026-02-28 10:23:55.913 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.201 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.261 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.337 243456 DEBUG nova.objects.instance [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.356 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.356 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Ensure instance console log exists: /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.357 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.357 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.357 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:56 np0005634017 nova_compute[243452]: 2026-02-28 10:23:56.407 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Successfully created port: bf4d947d-49ab-4681-beda-1384a9a5c61b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:23:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 35 KiB/s wr, 99 op/s
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.114 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Successfully updated port: bf4d947d-49ab-4681-beda-1384a9a5c61b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.139 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.139 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.139 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.240 243456 DEBUG nova.compute.manager [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-changed-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.240 243456 DEBUG nova.compute.manager [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Refreshing instance network info cache due to event network-changed-bf4d947d-49ab-4681-beda-1384a9a5c61b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.241 243456 DEBUG oslo_concurrency.lockutils [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.317 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:23:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:57Z|01130|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:23:57Z|01131|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:23:57 np0005634017 nova_compute[243452]: 2026-02-28 10:23:57.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:23:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.142 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updating instance_info_cache with network_info: [{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.160 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.161 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance network_info: |[{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.161 243456 DEBUG oslo_concurrency.lockutils [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.161 243456 DEBUG nova.network.neutron [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Refreshing network info cache for port bf4d947d-49ab-4681-beda-1384a9a5c61b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.165 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start _get_guest_xml network_info=[{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.171 243456 WARNING nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.178 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.179 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.189 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.190 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.191 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.191 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.193 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.193 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.193 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.194 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.194 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.194 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.198 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 251 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 718 KiB/s wr, 71 op/s
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:23:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972460946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.818 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.855 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.863 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.906 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274223.8678675, 287f71c6-9068-476b-81e7-da5069ee831f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.908 243456 INFO nova.compute.manager [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:23:58 np0005634017 nova_compute[243452]: 2026-02-28 10:23:58.939 243456 DEBUG nova.compute.manager [None req-051dd1fe-2a93-45c3-aa29-598995f8b201 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:23:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255167679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.447 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.449 243456 DEBUG nova.virt.libvirt.vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1318487154',display_name='tempest-ServersTestJSON-server-1318487154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1318487154',id=112,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-puzx8x3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:55Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=bf3b0f76-98db-4eb3-8e3c-ab7b525cd129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.450 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.451 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.453 243456 DEBUG nova.objects.instance [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.478 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <uuid>bf3b0f76-98db-4eb3-8e3c-ab7b525cd129</uuid>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <name>instance-00000070</name>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-1318487154</nova:name>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:23:58</nova:creationTime>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <nova:port uuid="bf4d947d-49ab-4681-beda-1384a9a5c61b">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <entry name="serial">bf3b0f76-98db-4eb3-8e3c-ab7b525cd129</entry>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <entry name="uuid">bf3b0f76-98db-4eb3-8e3c-ab7b525cd129</entry>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:fc:bc:6a"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <target dev="tapbf4d947d-49"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/console.log" append="off"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:23:59 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:23:59 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:23:59 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:23:59 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.480 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Preparing to wait for external event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.481 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.482 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.482 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.484 243456 DEBUG nova.virt.libvirt.vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1318487154',display_name='tempest-ServersTestJSON-server-1318487154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1318487154',id=112,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-puzx8x3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:55Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=bf3b0f76-98db-4eb3-8e3c-ab7b525cd129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.484 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.485 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.487 243456 DEBUG os_vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.488 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.489 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.489 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.498 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf4d947d-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.499 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf4d947d-49, col_values=(('external_ids', {'iface-id': 'bf4d947d-49ab-4681-beda-1384a9a5c61b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:bc:6a', 'vm-uuid': 'bf3b0f76-98db-4eb3-8e3c-ab7b525cd129'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:59 np0005634017 NetworkManager[49805]: <info>  [1772274239.5024] manager: (tapbf4d947d-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.508 243456 INFO os_vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49')#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.590 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.590 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.591 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:fc:bc:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.591 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Using config drive#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.619 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.909 243456 DEBUG nova.network.neutron [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updated VIF entry in instance network info cache for port bf4d947d-49ab-4681-beda-1384a9a5c61b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.910 243456 DEBUG nova.network.neutron [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updating instance_info_cache with network_info: [{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:23:59 np0005634017 nova_compute[243452]: 2026-02-28 10:23:59.936 243456 DEBUG oslo_concurrency.lockutils [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.415 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Creating config drive at /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config#033[00m
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.420 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9_7ry731 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.577 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9_7ry731" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.625 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.630 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.773 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.774 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deleting local config drive /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config because it was imported into RBD.
Feb 28 05:24:00 np0005634017 kernel: tapbf4d947d-49: entered promiscuous mode
Feb 28 05:24:00 np0005634017 NetworkManager[49805]: <info>  [1772274240.8208] manager: (tapbf4d947d-49): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Feb 28 05:24:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:00Z|01132|binding|INFO|Claiming lport bf4d947d-49ab-4681-beda-1384a9a5c61b for this chassis.
Feb 28 05:24:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:00Z|01133|binding|INFO|bf4d947d-49ab-4681-beda-1384a9a5c61b: Claiming fa:16:3e:fc:bc:6a 10.100.0.7
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.832 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:bc:6a 10.100.0.7'], port_security=['fa:16:3e:fc:bc:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf3b0f76-98db-4eb3-8e3c-ab7b525cd129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=bf4d947d-49ab-4681-beda-1384a9a5c61b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.833 156681 INFO neutron.agent.ovn.metadata.agent [-] Port bf4d947d-49ab-4681-beda-1384a9a5c61b in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.834 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:00Z|01134|binding|INFO|Setting lport bf4d947d-49ab-4681-beda-1384a9a5c61b ovn-installed in OVS
Feb 28 05:24:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:00Z|01135|binding|INFO|Setting lport bf4d947d-49ab-4681-beda-1384a9a5c61b up in Southbound
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1134ae-9b5c-464e-bd1f-53f65f65fd38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:24:00 np0005634017 systemd-machined[209480]: New machine qemu-142-instance-00000070.
Feb 28 05:24:00 np0005634017 systemd[1]: Started Virtual Machine qemu-142-instance-00000070.
Feb 28 05:24:00 np0005634017 systemd-udevd[339973]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.886 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0852f699-1619-4883-b230-93a09a460a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.890 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ea57b2-2d29-4cc6-a1a4-82edcce90df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:24:00 np0005634017 NetworkManager[49805]: <info>  [1772274240.8969] device (tapbf4d947d-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:24:00 np0005634017 NetworkManager[49805]: <info>  [1772274240.8975] device (tapbf4d947d-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.914 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bd90e1-7b7f-410f-b07b-93c26e91f86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.927 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6209bf0f-1325-4b9f-9c0c-790feebfb45c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339980, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.948 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f5437c4d-1971-4d98-9ece-3ff8cd030bb6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339984, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339984, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.951 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.954 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.956 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:24:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:00.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.958 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.960 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:24:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.960 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.227 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274241.22599, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.228 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Started (Lifecycle Event)
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.254 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.261 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274241.2261636, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Paused (Lifecycle Event)
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.286 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:24:01 np0005634017 nova_compute[243452]: 2026-02-28 10:24:01.321 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:24:02 np0005634017 nova_compute[243452]: 2026-02-28 10:24:02.116 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274227.115624, 027ce924-b530-4917-956c-ab66555058b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:24:02 np0005634017 nova_compute[243452]: 2026-02-28 10:24:02.116 243456 INFO nova.compute.manager [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Stopped (Lifecycle Event)
Feb 28 05:24:02 np0005634017 nova_compute[243452]: 2026-02-28 10:24:02.140 243456 DEBUG nova.compute.manager [None req-2067aee3-e77a-4d6d-88e5-ca258b2566db - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.074 243456 DEBUG nova.compute.manager [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.074 243456 DEBUG oslo_concurrency.lockutils [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.075 243456 DEBUG oslo_concurrency.lockutils [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.075 243456 DEBUG oslo_concurrency.lockutils [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.076 243456 DEBUG nova.compute.manager [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Processing event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.077 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.083 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274243.0829666, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.083 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Resumed (Lifecycle Event)
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.087 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.091 243456 INFO nova.virt.libvirt.driver [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance spawned successfully.
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.092 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.109 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.118 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.127 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.128 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.129 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.130 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.131 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.132 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.139 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.182 243456 INFO nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 7.50 seconds to spawn the instance on the hypervisor.
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.183 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.256 243456 INFO nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 8.56 seconds to build instance.
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.276 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.602 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274228.6016119, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.603 243456 INFO nova.compute.manager [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Stopped (Lifecycle Event)
Feb 28 05:24:03 np0005634017 nova_compute[243452]: 2026-02-28 10:24:03.629 243456 DEBUG nova.compute.manager [None req-11772288-6a13-4928-af58-e83dff88a8e5 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.720304) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243720349, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 362, "num_deletes": 250, "total_data_size": 189790, "memory_usage": 196432, "flush_reason": "Manual Compaction"}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243724053, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 187401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37771, "largest_seqno": 38132, "table_properties": {"data_size": 185216, "index_size": 349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6057, "raw_average_key_size": 20, "raw_value_size": 180852, "raw_average_value_size": 604, "num_data_blocks": 16, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274231, "oldest_key_time": 1772274231, "file_creation_time": 1772274243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3879 microseconds, and 1962 cpu microseconds.
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.724178) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 187401 bytes OK
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.724209) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726029) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726062) EVENT_LOG_v1 {"time_micros": 1772274243726051, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 187390, prev total WAL file size 187390, number of live WAL files 2.
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(183KB)], [83(10MB)]
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243726919, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11351907, "oldest_snapshot_seqno": -1}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6117 keys, 8012493 bytes, temperature: kUnknown
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243790619, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8012493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7972864, "index_size": 23263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156457, "raw_average_key_size": 25, "raw_value_size": 7864583, "raw_average_value_size": 1285, "num_data_blocks": 929, "num_entries": 6117, "num_filter_entries": 6117, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.790943) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8012493 bytes
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.792145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.9 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.6 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(103.3) write-amplify(42.8) OK, records in: 6624, records dropped: 507 output_compression: NoCompression
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.792171) EVENT_LOG_v1 {"time_micros": 1772274243792159, "job": 48, "event": "compaction_finished", "compaction_time_micros": 63805, "compaction_time_cpu_micros": 36212, "output_level": 6, "num_output_files": 1, "total_output_size": 8012493, "num_input_records": 6624, "num_output_records": 6117, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243792334, "job": 48, "event": "table_file_deletion", "file_number": 85}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243793528, "job": 48, "event": "table_file_deletion", "file_number": 83}
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:24:03 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:24:04 np0005634017 nova_compute[243452]: 2026-02-28 10:24:04.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:24:04 np0005634017 nova_compute[243452]: 2026-02-28 10:24:04.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.156 243456 DEBUG nova.compute.manager [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.157 243456 DEBUG oslo_concurrency.lockutils [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.157 243456 DEBUG oslo_concurrency.lockutils [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.157 243456 DEBUG oslo_concurrency.lockutils [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.158 243456 DEBUG nova.compute.manager [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] No waiting events found dispatching network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.158 243456 WARNING nova.compute.manager [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received unexpected event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b for instance with vm_state active and task_state None.#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.707 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.709 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.710 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.710 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.710 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.712 243456 INFO nova.compute.manager [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Terminating instance#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.713 243456 DEBUG nova.compute.manager [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:24:05 np0005634017 kernel: tapbf4d947d-49 (unregistering): left promiscuous mode
Feb 28 05:24:05 np0005634017 NetworkManager[49805]: <info>  [1772274245.7588] device (tapbf4d947d-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:24:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:05Z|01136|binding|INFO|Releasing lport bf4d947d-49ab-4681-beda-1384a9a5c61b from this chassis (sb_readonly=0)
Feb 28 05:24:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:05Z|01137|binding|INFO|Setting lport bf4d947d-49ab-4681-beda-1384a9a5c61b down in Southbound
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:05Z|01138|binding|INFO|Removing iface tapbf4d947d-49 ovn-installed in OVS
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.769 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.778 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:bc:6a 10.100.0.7'], port_security=['fa:16:3e:fc:bc:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf3b0f76-98db-4eb3-8e3c-ab7b525cd129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=bf4d947d-49ab-4681-beda-1384a9a5c61b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.780 156681 INFO neutron.agent.ovn.metadata.agent [-] Port bf4d947d-49ab-4681-beda-1384a9a5c61b in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.781 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.794 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c18dae5-0664-47e3-8cc4-fc96b008adc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:05 np0005634017 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Deactivated successfully.
Feb 28 05:24:05 np0005634017 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Consumed 3.085s CPU time.
Feb 28 05:24:05 np0005634017 systemd-machined[209480]: Machine qemu-142-instance-00000070 terminated.
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.822 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9e26ac81-b46e-4aaf-a376-c0a222cf4450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.825 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[09922337-29e3-4e64-a879-ff3780009804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.847 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d227ad52-77c8-4d1e-90fe-6706d90f6702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[586e8d95-285b-4558-9715-993670ae4047]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340039, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d261842-e58a-4717-947a-dff33c1bfafb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340040, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340040, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.889 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.895 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.896 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.896 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.957 243456 INFO nova.virt.libvirt.driver [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance destroyed successfully.#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.958 243456 DEBUG nova.objects.instance [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.974 243456 DEBUG nova.virt.libvirt.vif [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1318487154',display_name='tempest-ServersTestJSON-server-1318487154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1318487154',id=112,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:24:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-puzx8x3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:24:04Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=bf3b0f76-98db-4eb3-8e3c-ab7b525cd129,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.975 243456 DEBUG nova.network.os_vif_util [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.977 243456 DEBUG nova.network.os_vif_util [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.977 243456 DEBUG os_vif [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.982 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf4d947d-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:05 np0005634017 nova_compute[243452]: 2026-02-28 10:24:05.989 243456 INFO os_vif [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49')#033[00m
Feb 28 05:24:06 np0005634017 nova_compute[243452]: 2026-02-28 10:24:06.300 243456 INFO nova.virt.libvirt.driver [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deleting instance files /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_del#033[00m
Feb 28 05:24:06 np0005634017 nova_compute[243452]: 2026-02-28 10:24:06.302 243456 INFO nova.virt.libvirt.driver [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deletion of /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_del complete#033[00m
Feb 28 05:24:06 np0005634017 nova_compute[243452]: 2026-02-28 10:24:06.358 243456 INFO nova.compute.manager [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:24:06 np0005634017 nova_compute[243452]: 2026-02-28 10:24:06.359 243456 DEBUG oslo.service.loopingcall [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:24:06 np0005634017 nova_compute[243452]: 2026-02-28 10:24:06.359 243456 DEBUG nova.compute.manager [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:24:06 np0005634017 nova_compute[243452]: 2026-02-28 10:24:06.360 243456 DEBUG nova.network.neutron [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:24:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.426 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-unplugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.426 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.426 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] No waiting events found dispatching network-vif-unplugged-bf4d947d-49ab-4681-beda-1384a9a5c61b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-unplugged-bf4d947d-49ab-4681-beda-1384a9a5c61b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] No waiting events found dispatching network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 WARNING nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received unexpected event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.664 243456 DEBUG nova.network.neutron [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.683 243456 INFO nova.compute.manager [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 1.32 seconds to deallocate network for instance.#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.723 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.724 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:07 np0005634017 nova_compute[243452]: 2026-02-28 10:24:07.789 243456 DEBUG oslo_concurrency.processutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104028041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:08 np0005634017 nova_compute[243452]: 2026-02-28 10:24:08.443 243456 DEBUG oslo_concurrency.processutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:08 np0005634017 nova_compute[243452]: 2026-02-28 10:24:08.450 243456 DEBUG nova.compute.provider_tree [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:24:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 260 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 28 05:24:08 np0005634017 nova_compute[243452]: 2026-02-28 10:24:08.475 243456 DEBUG nova.scheduler.client.report [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:24:08 np0005634017 nova_compute[243452]: 2026-02-28 10:24:08.518 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:08 np0005634017 nova_compute[243452]: 2026-02-28 10:24:08.569 243456 INFO nova.scheduler.client.report [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance bf3b0f76-98db-4eb3-8e3c-ab7b525cd129#033[00m
Feb 28 05:24:08 np0005634017 nova_compute[243452]: 2026-02-28 10:24:08.636 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:09 np0005634017 nova_compute[243452]: 2026-02-28 10:24:09.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Feb 28 05:24:10 np0005634017 nova_compute[243452]: 2026-02-28 10:24:10.687 243456 DEBUG nova.compute.manager [req-3626c2c9-d809-47f5-989b-c65d43e82340 req-d0223bdf-db5a-40be-bf71-afea8bbe6e06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-deleted-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:10 np0005634017 nova_compute[243452]: 2026-02-28 10:24:10.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.345 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.363 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.427 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.427 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.437 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.437 243456 INFO nova.compute.claims [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.582 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.927 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.927 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:11 np0005634017 nova_compute[243452]: 2026-02-28 10:24:11.945 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.007 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030360493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.134 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.142 243456 DEBUG nova.compute.provider_tree [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.165 243456 DEBUG nova.scheduler.client.report [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.198 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.199 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.202 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.210 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.210 243456 INFO nova.compute.claims [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.274 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.275 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.317 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.350 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.437 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.482 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.485 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.486 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Creating image(s)#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.517 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.539 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.560 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.563 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.607 243456 DEBUG nova.policy [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.662 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.664 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.664 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.665 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.690 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.694 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:12 np0005634017 nova_compute[243452]: 2026-02-28 10:24:12.931 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330502058' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.014 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.047 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.053 243456 DEBUG nova.compute.provider_tree [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.095 243456 DEBUG nova.scheduler.client.report [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.103 243456 DEBUG nova.objects.instance [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.116 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.116 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Ensure instance console log exists: /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.117 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.117 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.117 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.118 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.119 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.171 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.172 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.196 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.217 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.310 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.313 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.313 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Creating image(s)#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.343 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.376 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.413 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.419 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.538 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.539 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.540 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.540 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.567 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.572 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.777 243456 DEBUG nova.policy [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.842 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:13 np0005634017 nova_compute[243452]: 2026-02-28 10:24:13.912 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.006 243456 DEBUG nova.objects.instance [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid a1bf329d-ed65-4cbc-99cb-e49716d1b24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.022 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.022 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Ensure instance console log exists: /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.023 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.023 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.023 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:14 np0005634017 nova_compute[243452]: 2026-02-28 10:24:14.073 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Successfully created port: c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:24:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.465 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Successfully updated port: c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.483 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.483 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.484 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.530 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Successfully created port: cc564e14-816b-4b92-877a-db0b2ddd0285 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.656 243456 DEBUG nova.compute.manager [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-changed-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.657 243456 DEBUG nova.compute.manager [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Refreshing instance network info cache due to event network-changed-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.657 243456 DEBUG oslo_concurrency.lockutils [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.751 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:24:15 np0005634017 nova_compute[243452]: 2026-02-28 10:24:15.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 287 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Feb 28 05:24:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:24:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 34K writes, 134K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.76 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 40.78 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4124 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:24:16 np0005634017 nova_compute[243452]: 2026-02-28 10:24:16.973 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updating instance_info_cache with network_info: [{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:16 np0005634017 nova_compute[243452]: 2026-02-28 10:24:16.998 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:16 np0005634017 nova_compute[243452]: 2026-02-28 10:24:16.999 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance network_info: |[{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.000 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Successfully updated port: cc564e14-816b-4b92-877a-db0b2ddd0285 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.001 243456 DEBUG oslo_concurrency.lockutils [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.001 243456 DEBUG nova.network.neutron [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Refreshing network info cache for port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.006 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start _get_guest_xml network_info=[{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.011 243456 WARNING nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.018 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.018 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.027 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.072 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.072 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.072 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.248 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:24:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075688166' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.620 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.645 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.651 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.857 243456 DEBUG nova.compute.manager [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.858 243456 DEBUG nova.compute.manager [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing instance network info cache due to event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:24:17 np0005634017 nova_compute[243452]: 2026-02-28 10:24:17.858 243456 DEBUG oslo_concurrency.lockutils [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:24:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306797905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.153 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.154 243456 DEBUG nova.virt.libvirt.vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1172761758',display_name='tempest-ServersTestJSON-server-1172761758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1172761758',id=113,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-0100k6jb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:12Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=08a4cd5e-f711-44d2-b17e-c1941be22e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.154 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.155 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.156 243456 DEBUG nova.objects.instance [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.192 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <uuid>08a4cd5e-f711-44d2-b17e-c1941be22e85</uuid>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <name>instance-00000071</name>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServersTestJSON-server-1172761758</nova:name>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:24:17</nova:creationTime>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <nova:port uuid="c6f8ec31-2557-49a7-9d41-fc0bc3a16a37">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <entry name="serial">08a4cd5e-f711-44d2-b17e-c1941be22e85</entry>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <entry name="uuid">08a4cd5e-f711-44d2-b17e-c1941be22e85</entry>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/08a4cd5e-f711-44d2-b17e-c1941be22e85_disk">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:98:3b:3a"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <target dev="tapc6f8ec31-25"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/console.log" append="off"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:24:18 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:24:18 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:24:18 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:24:18 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.193 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Preparing to wait for external event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.193 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.193 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.194 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.195 243456 DEBUG nova.virt.libvirt.vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1172761758',display_name='tempest-ServersTestJSON-server-1172761758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1172761758',id=113,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-0100k6jb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:12Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=08a4cd5e-f711-44d2-b17e-c1941be22e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.195 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.196 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.196 243456 DEBUG os_vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.206 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6f8ec31-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.207 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6f8ec31-25, col_values=(('external_ids', {'iface-id': 'c6f8ec31-2557-49a7-9d41-fc0bc3a16a37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:3b:3a', 'vm-uuid': '08a4cd5e-f711-44d2-b17e-c1941be22e85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:18 np0005634017 NetworkManager[49805]: <info>  [1772274258.2095] manager: (tapc6f8ec31-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.215 243456 INFO os_vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25')#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.283 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.284 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.284 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:98:3b:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.286 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Using config drive#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.313 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.329 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.359 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.360 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance network_info: |[{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.361 243456 DEBUG oslo_concurrency.lockutils [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.361 243456 DEBUG nova.network.neutron [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.364 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start _get_guest_xml network_info=[{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.370 243456 WARNING nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.376 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.376 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.384 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.385 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.385 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.386 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.386 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.386 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.388 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.388 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.388 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.391 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.432 243456 DEBUG nova.network.neutron [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updated VIF entry in instance network info cache for port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.433 243456 DEBUG nova.network.neutron [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updating instance_info_cache with network_info: [{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.449 243456 DEBUG oslo_concurrency.lockutils [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 325 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 803 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.665 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Creating config drive at /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.671 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq6ttu7xm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.819 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq6ttu7xm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.856 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.862 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389415991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.955 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.977 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:18 np0005634017 nova_compute[243452]: 2026-02-28 10:24:18.982 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.012 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.012 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deleting local config drive /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config because it was imported into RBD.#033[00m
Feb 28 05:24:19 np0005634017 kernel: tapc6f8ec31-25: entered promiscuous mode
Feb 28 05:24:19 np0005634017 NetworkManager[49805]: <info>  [1772274259.0604] manager: (tapc6f8ec31-25): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:19Z|01139|binding|INFO|Claiming lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for this chassis.
Feb 28 05:24:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:19Z|01140|binding|INFO|c6f8ec31-2557-49a7-9d41-fc0bc3a16a37: Claiming fa:16:3e:98:3b:3a 10.100.0.3
Feb 28 05:24:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:19Z|01141|binding|INFO|Setting lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 ovn-installed in OVS
Feb 28 05:24:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:19Z|01142|binding|INFO|Setting lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 up in Southbound
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.071 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:3b:3a 10.100.0.3'], port_security=['fa:16:3e:98:3b:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08a4cd5e-f711-44d2-b17e-c1941be22e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.074 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.077 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e4957d-a8ea-4310-9617-2c35c3edd25e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:19 np0005634017 systemd-udevd[340666]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:24:19 np0005634017 systemd-machined[209480]: New machine qemu-143-instance-00000071.
Feb 28 05:24:19 np0005634017 systemd[1]: Started Virtual Machine qemu-143-instance-00000071.
Feb 28 05:24:19 np0005634017 NetworkManager[49805]: <info>  [1772274259.1141] device (tapc6f8ec31-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:24:19 np0005634017 NetworkManager[49805]: <info>  [1772274259.1148] device (tapc6f8ec31-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.125 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17e38e74-8458-480a-8c55-64d68c745b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.128 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e83279e5-a4f8-4040-ac15-cc3ccffc71b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:19 np0005634017 podman[340640]: 2026-02-28 10:24:19.144107676 +0000 UTC m=+0.081391001 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.156 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[669a6933-b316-42ba-8685-42d80e8d65ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.173 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[464efda9-128b-4afd-a742-69592030530f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340714, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.186 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d58fa1b-7de8-4c78-8b8f-922adf0f837d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340719, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340719, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.191 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.191 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.192 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.192 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:19 np0005634017 podman[340637]: 2026-02-28 10:24:19.203939663 +0000 UTC m=+0.141174886 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260223)
Feb 28 05:24:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606098825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.529 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.531 243456 DEBUG nova.virt.libvirt.vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=114,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-t2on9vk5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:13Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=a1bf329d-ed65-4cbc-99cb-e49716d1b24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.532 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.534 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.536 243456 DEBUG nova.objects.instance [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1bf329d-ed65-4cbc-99cb-e49716d1b24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.553 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <uuid>a1bf329d-ed65-4cbc-99cb-e49716d1b24d</uuid>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <name>instance-00000072</name>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982</nova:name>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:24:18</nova:creationTime>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <nova:port uuid="cc564e14-816b-4b92-877a-db0b2ddd0285">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <entry name="serial">a1bf329d-ed65-4cbc-99cb-e49716d1b24d</entry>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <entry name="uuid">a1bf329d-ed65-4cbc-99cb-e49716d1b24d</entry>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:cc:45:f7"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <target dev="tapcc564e14-81"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/console.log" append="off"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:24:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:24:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:24:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:24:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.557 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Preparing to wait for external event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.558 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.558 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.558 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.559 243456 DEBUG nova.virt.libvirt.vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=114,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-t2on9vk5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:13Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=a1bf329d-ed65-4cbc-99cb-e49716d1b24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.559 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.560 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.560 243456 DEBUG os_vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.561 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.561 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.562 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.564 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc564e14-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.565 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc564e14-81, col_values=(('external_ids', {'iface-id': 'cc564e14-816b-4b92-877a-db0b2ddd0285', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:45:f7', 'vm-uuid': 'a1bf329d-ed65-4cbc-99cb-e49716d1b24d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 NetworkManager[49805]: <info>  [1772274259.5671] manager: (tapcc564e14-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.571 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.572 243456 INFO os_vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81')#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.621 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.621 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.621 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:cc:45:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.623 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Using config drive#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.642 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.945 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274259.945472, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Started (Lifecycle Event)#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.967 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.974 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274259.9456332, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.974 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.995 243456 DEBUG nova.compute.manager [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.996 243456 DEBUG oslo_concurrency.lockutils [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.996 243456 DEBUG oslo_concurrency.lockutils [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.996 243456 DEBUG oslo_concurrency.lockutils [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.997 243456 DEBUG nova.compute.manager [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Processing event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:24:19 np0005634017 nova_compute[243452]: 2026-02-28 10:24:19.997 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.001 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.003 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.005 243456 INFO nova.virt.libvirt.driver [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance spawned successfully.#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.006 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.008 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274260.0008912, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.008 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.033 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.038 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.038 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.038 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.039 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.039 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.040 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.046 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.079 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.133 243456 INFO nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 7.65 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.133 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.198 243456 INFO nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 8.79 seconds to build instance.#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.214 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.275 243456 DEBUG nova.network.neutron [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated VIF entry in instance network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.276 243456 DEBUG nova.network.neutron [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.282 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Creating config drive at /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.286 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeqjokvvf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.327 243456 DEBUG oslo_concurrency.lockutils [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.384 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.385 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.434 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeqjokvvf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.473 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.479 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.635 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.637 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deleting local config drive /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config because it was imported into RBD.#033[00m
Feb 28 05:24:20 np0005634017 NetworkManager[49805]: <info>  [1772274260.7021] manager: (tapcc564e14-81): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Feb 28 05:24:20 np0005634017 systemd-udevd[340688]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:24:20 np0005634017 kernel: tapcc564e14-81: entered promiscuous mode
Feb 28 05:24:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:20Z|01143|binding|INFO|Claiming lport cc564e14-816b-4b92-877a-db0b2ddd0285 for this chassis.
Feb 28 05:24:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:20Z|01144|binding|INFO|cc564e14-816b-4b92-877a-db0b2ddd0285: Claiming fa:16:3e:cc:45:f7 10.100.0.10
Feb 28 05:24:20 np0005634017 NetworkManager[49805]: <info>  [1772274260.7146] device (tapcc564e14-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:20 np0005634017 NetworkManager[49805]: <info>  [1772274260.7150] device (tapcc564e14-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.717 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:45:f7 10.100.0.10'], port_security=['fa:16:3e:cc:45:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a1bf329d-ed65-4cbc-99cb-e49716d1b24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b899b7a-4953-43a9-ae1c-b9897419d094 ec278221-1438-4a93-ade3-9d533e8b727e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cc564e14-816b-4b92-877a-db0b2ddd0285) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.719 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cc564e14-816b-4b92-877a-db0b2ddd0285 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c bound to our chassis#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.720 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ee12748-b368-477a-aacb-62375ce0b51c#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.728 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ecd4de0-a56d-401a-a0e8-bc097b7a01ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.729 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ee12748-b1 in ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.733 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ee12748-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.733 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4315042d-99c0-4090-9f42-8e211b0cc30d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.734 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb0b879-7c90-4a32-bb15-374e3c4ef6b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 systemd-machined[209480]: New machine qemu-144-instance-00000072.
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.744 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1b13acc8-6c71-4e89-b045-eb3846a3e20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:20Z|01145|binding|INFO|Setting lport cc564e14-816b-4b92-877a-db0b2ddd0285 ovn-installed in OVS
Feb 28 05:24:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:20Z|01146|binding|INFO|Setting lport cc564e14-816b-4b92-877a-db0b2ddd0285 up in Southbound
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:20 np0005634017 systemd[1]: Started Virtual Machine qemu-144-instance-00000072.
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3c26c8bc-1b4a-4d4f-93af-d0090d11cdaf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.789 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1c7eee-6fb8-434c-aeb0-3d64904421d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 NetworkManager[49805]: <info>  [1772274260.7965] manager: (tap9ee12748-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/476)
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.797 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42fa24d5-4910-4b3a-a1e2-ccd17e5f5229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[464404cb-ac0c-4861-bf7c-338f13735124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e801d697-cd6d-4846-80dc-81ff9263e1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 NetworkManager[49805]: <info>  [1772274260.8588] device (tap9ee12748-b0): carrier: link connected
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.865 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b38dcb64-fa94-4acf-a8a5-94c829e429df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a77de0-73c2-414b-92b2-c84398f896fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340870, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.902 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b06d8999-49bf-44e3-abe2-705d9b2ea764]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:c4ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576814, 'tstamp': 576814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340871, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.918 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49faae44-643e-4abc-9bb9-392d457d0c27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340872, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.949 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6537c8-79bd-4074-888d-c1270323149c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.955 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274245.9551432, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.957 243456 INFO nova.compute.manager [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:24:20 np0005634017 nova_compute[243452]: 2026-02-28 10:24:20.982 243456 DEBUG nova.compute.manager [None req-9ca06cca-9ff8-4fec-81da-028b9dfe4020 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.010 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01b0e9ca-71c7-4376-b28f-851b2d461146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.012 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.012 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.012 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ee12748-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:21 np0005634017 NetworkManager[49805]: <info>  [1772274261.0148] manager: (tap9ee12748-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Feb 28 05:24:21 np0005634017 kernel: tap9ee12748-b0: entered promiscuous mode
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.017 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ee12748-b0, col_values=(('external_ids', {'iface-id': '42d54653-4cb7-4be2-99ee-96f5681da7be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:21Z|01147|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.019 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ee12748-b368-477a-aacb-62375ce0b51c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ee12748-b368-477a-aacb-62375ce0b51c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a61e7b0f-4cde-4ed7-a99d-aebfda2379d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.021 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/9ee12748-b368-477a-aacb-62375ce0b51c.pid.haproxy
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:24:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.022 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'env', 'PROCESS_TAG=haproxy-9ee12748-b368-477a-aacb-62375ce0b51c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ee12748-b368-477a-aacb-62375ce0b51c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:24:21 np0005634017 podman[340904]: 2026-02-28 10:24:21.388136265 +0000 UTC m=+0.049487689 container create a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 05:24:21 np0005634017 systemd[1]: Started libpod-conmon-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662.scope.
Feb 28 05:24:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efddfc5a8b916af41ee280cb5601c71a26e54fba8c82938781e38e62b83cab90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:21 np0005634017 podman[340904]: 2026-02-28 10:24:21.359951412 +0000 UTC m=+0.021302696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:24:21 np0005634017 podman[340904]: 2026-02-28 10:24:21.459003121 +0000 UTC m=+0.120354415 container init a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:24:21 np0005634017 podman[340904]: 2026-02-28 10:24:21.463588233 +0000 UTC m=+0.124939527 container start a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:24:21 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : New worker (340967) forked
Feb 28 05:24:21 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : Loading success.
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.521 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274261.5207772, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.521 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Started (Lifecycle Event)#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.550 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.553 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274261.5212047, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.553 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.576 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.578 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:24:21 np0005634017 nova_compute[243452]: 2026-02-28 10:24:21.607 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.082 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.083 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.083 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.083 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.084 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] No waiting events found dispatching network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.084 243456 WARNING nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received unexpected event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.084 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.085 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.085 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.085 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.086 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Processing event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.086 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.086 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.087 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.087 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.088 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] No waiting events found dispatching network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.088 243456 WARNING nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received unexpected event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.089 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.093 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274262.093343, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.094 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.096 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.099 243456 INFO nova.virt.libvirt.driver [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance spawned successfully.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.100 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.126 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.132 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.137 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.138 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.139 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.139 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.140 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.140 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.172 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.209 243456 INFO nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 8.90 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.210 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.278 243456 INFO nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 10.28 seconds to build instance.#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.298 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.517 243456 DEBUG oslo_concurrency.lockutils [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.518 243456 DEBUG oslo_concurrency.lockutils [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.518 243456 DEBUG nova.compute.manager [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.522 243456 DEBUG nova.compute.manager [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.522 243456 DEBUG nova.objects.instance [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'flavor' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:22 np0005634017 nova_compute[243452]: 2026-02-28 10:24:22.547 243456 DEBUG nova.virt.libvirt.driver [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:24:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:24:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3001.6 total, 600.0 interval#012Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.76 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9816 writes, 37K keys, 9816 commit groups, 1.0 writes per commit group, ingest: 40.46 MB, 0.07 MB/s#012Interval WAL: 9816 writes, 3930 syncs, 2.50 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:24:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:24 np0005634017 nova_compute[243452]: 2026-02-28 10:24:24.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Feb 28 05:24:24 np0005634017 nova_compute[243452]: 2026-02-28 10:24:24.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:25 np0005634017 nova_compute[243452]: 2026-02-28 10:24:25.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:26 np0005634017 nova_compute[243452]: 2026-02-28 10:24:26.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Feb 28 05:24:27 np0005634017 nova_compute[243452]: 2026-02-28 10:24:27.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:27Z|01148|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 05:24:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:27Z|01149|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:24:27 np0005634017 NetworkManager[49805]: <info>  [1772274267.6843] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Feb 28 05:24:27 np0005634017 NetworkManager[49805]: <info>  [1772274267.6852] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Feb 28 05:24:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:27Z|01150|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 05:24:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:27Z|01151|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:24:27 np0005634017 nova_compute[243452]: 2026-02-28 10:24:27.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:27 np0005634017 nova_compute[243452]: 2026-02-28 10:24:27.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:28 np0005634017 nova_compute[243452]: 2026-02-28 10:24:28.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 151 op/s
Feb 28 05:24:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:24:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 28K writes, 112K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 28K writes, 10K syncs, 2.78 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7128 writes, 28K keys, 7128 commit groups, 1.0 writes per commit group, ingest: 30.63 MB, 0.05 MB/s#012Interval WAL: 7128 writes, 2882 syncs, 2.47 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:24:29
Feb 28 05:24:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:24:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:24:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups']
Feb 28 05:24:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG nova.compute.manager [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG nova.compute.manager [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing instance network info cache due to event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG oslo_concurrency.lockutils [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG oslo_concurrency.lockutils [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.465 243456 DEBUG nova.network.neutron [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:24:29 np0005634017 nova_compute[243452]: 2026-02-28 10:24:29.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:30 np0005634017 nova_compute[243452]: 2026-02-28 10:24:30.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:24:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:30.388 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:24:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:24:31 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.531 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.532 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.532 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.533 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.627 243456 DEBUG nova.network.neutron [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated VIF entry in instance network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.628 243456 DEBUG nova.network.neutron [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:31 np0005634017 nova_compute[243452]: 2026-02-28 10:24:31.649 243456 DEBUG oslo_concurrency.lockutils [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:31Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:3b:3a 10.100.0.3
Feb 28 05:24:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:31Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:3b:3a 10.100.0.3
Feb 28 05:24:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 337 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 155 op/s
Feb 28 05:24:32 np0005634017 nova_compute[243452]: 2026-02-28 10:24:32.594 243456 DEBUG nova.virt.libvirt.driver [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:24:32 np0005634017 nova_compute[243452]: 2026-02-28 10:24:32.677 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:32 np0005634017 nova_compute[243452]: 2026-02-28 10:24:32.697 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:32 np0005634017 nova_compute[243452]: 2026-02-28 10:24:32.697 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.337 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.337 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696210564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:33 np0005634017 nova_compute[243452]: 2026-02-28 10:24:33.937 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.055 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.056 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.061 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.062 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.067 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.068 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:34Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:45:f7 10.100.0.10
Feb 28 05:24:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:34Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:45:f7 10.100.0.10
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.253 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3121MB free_disk=59.886876408942044GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.254 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.254 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 08a4cd5e-f711-44d2-b17e-c1941be22e85 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a1bf329d-ed65-4cbc-99cb-e49716d1b24d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.361 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.381 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.381 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.400 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.438 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:24:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 337 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 141 op/s
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.514 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:34 np0005634017 kernel: tapc6f8ec31-25 (unregistering): left promiscuous mode
Feb 28 05:24:34 np0005634017 NetworkManager[49805]: <info>  [1772274274.9174] device (tapc6f8ec31-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:24:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:34Z|01152|binding|INFO|Releasing lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 from this chassis (sb_readonly=0)
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:34Z|01153|binding|INFO|Setting lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 down in Southbound
Feb 28 05:24:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:34Z|01154|binding|INFO|Removing iface tapc6f8ec31-25 ovn-installed in OVS
Feb 28 05:24:34 np0005634017 nova_compute[243452]: 2026-02-28 10:24:34.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.931 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:3b:3a 10.100.0.3'], port_security=['fa:16:3e:98:3b:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08a4cd5e-f711-44d2-b17e-c1941be22e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:24:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.935 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:24:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.939 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b#033[00m
Feb 28 05:24:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.958 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0ac927-be9e-4bfd-8864-1efd68ee58d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.987 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[25c41995-767b-404d-b5bf-5aa0768d9f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:34 np0005634017 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Deactivated successfully.
Feb 28 05:24:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.991 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[225a52a1-90e7-4ee1-bb6d-df583fac4a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:34 np0005634017 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Consumed 12.805s CPU time.
Feb 28 05:24:34 np0005634017 systemd-machined[209480]: Machine qemu-143-instance-00000071 terminated.
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.015 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[567b1769-2b61-401d-ad96-f64db1f36455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.034 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e52479-390e-401e-9d19-adca178a387f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341034, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8574ec-a36d-4472-9aa8-716e81633f8b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341035, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341035, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.051 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.058 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.058 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.059 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.059 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970285244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.087 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.094 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.112 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.137 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.138 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.162 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.610 243456 INFO nova.virt.libvirt.driver [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.618 243456 INFO nova.virt.libvirt.driver [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance destroyed successfully.#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.619 243456 DEBUG nova.objects.instance [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.642 243456 DEBUG nova.compute.manager [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:35 np0005634017 nova_compute[243452]: 2026-02-28 10:24:35.716 243456 DEBUG oslo_concurrency.lockutils [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 379 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.651 243456 DEBUG nova.compute.manager [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-unplugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.652 243456 DEBUG oslo_concurrency.lockutils [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.652 243456 DEBUG oslo_concurrency.lockutils [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.653 243456 DEBUG oslo_concurrency.lockutils [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.653 243456 DEBUG nova.compute.manager [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] No waiting events found dispatching network-vif-unplugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:36 np0005634017 nova_compute[243452]: 2026-02-28 10:24:36.653 243456 WARNING nova.compute.manager [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received unexpected event network-vif-unplugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for instance with vm_state stopped and task_state None.#033[00m
Feb 28 05:24:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:37Z|01155|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 05:24:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:37Z|01156|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:24:37 np0005634017 nova_compute[243452]: 2026-02-28 10:24:37.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.311 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.312 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.312 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.313 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.313 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.314 243456 INFO nova.compute.manager [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Terminating instance#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.315 243456 DEBUG nova.compute.manager [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.321 243456 INFO nova.virt.libvirt.driver [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance destroyed successfully.#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.322 243456 DEBUG nova.objects.instance [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.339 243456 DEBUG nova.virt.libvirt.vif [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1172761758',display_name='tempest-Íñstáñcé-1386512012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1172761758',id=113,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:24:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-0100k6jb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',
owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:24:36Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=08a4cd5e-f711-44d2-b17e-c1941be22e85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.339 243456 DEBUG nova.network.os_vif_util [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.340 243456 DEBUG nova.network.os_vif_util [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.340 243456 DEBUG os_vif [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.342 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.342 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6f8ec31-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.344 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.348 243456 INFO os_vif [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25')#033[00m
Feb 28 05:24:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 391 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.633 243456 INFO nova.virt.libvirt.driver [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deleting instance files /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85_del#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.634 243456 INFO nova.virt.libvirt.driver [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deletion of /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85_del complete#033[00m
Feb 28 05:24:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:38Z|01157|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 05:24:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:38Z|01158|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.685 243456 INFO nova.compute.manager [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.686 243456 DEBUG oslo.service.loopingcall [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.686 243456 DEBUG nova.compute.manager [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.686 243456 DEBUG nova.network.neutron [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:24:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.837 243456 DEBUG nova.compute.manager [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.837 243456 DEBUG oslo_concurrency.lockutils [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 DEBUG oslo_concurrency.lockutils [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 DEBUG oslo_concurrency.lockutils [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 DEBUG nova.compute.manager [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] No waiting events found dispatching network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:38 np0005634017 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 WARNING nova.compute.manager [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received unexpected event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for instance with vm_state stopped and task_state deleting.#033[00m
Feb 28 05:24:39 np0005634017 nova_compute[243452]: 2026-02-28 10:24:39.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:24:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:24:39 np0005634017 nova_compute[243452]: 2026-02-28 10:24:39.823 243456 DEBUG nova.network.neutron [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:39 np0005634017 nova_compute[243452]: 2026-02-28 10:24:39.843 243456 INFO nova.compute.manager [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 1.16 seconds to deallocate network for instance.#033[00m
Feb 28 05:24:39 np0005634017 nova_compute[243452]: 2026-02-28 10:24:39.893 243456 DEBUG nova.compute.manager [req-d5c130c9-c0e0-4854-ae24-ece0e3e6f26f req-f79b957a-711e-4431-8d76-2feb02b28690 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-deleted-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:39 np0005634017 nova_compute[243452]: 2026-02-28 10:24:39.899 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:39 np0005634017 nova_compute[243452]: 2026-02-28 10:24:39.899 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:39.999167384 +0000 UTC m=+0.052374112 container create e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.004 243456 DEBUG oslo_concurrency.processutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:40 np0005634017 systemd[1]: Started libpod-conmon-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope.
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:39.971763513 +0000 UTC m=+0.024970271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:24:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:40.088329098 +0000 UTC m=+0.141535896 container init e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:40.094596729 +0000 UTC m=+0.147803457 container start e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:40.099019417 +0000 UTC m=+0.152226135 container attach e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:24:40 np0005634017 systemd[1]: libpod-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope: Deactivated successfully.
Feb 28 05:24:40 np0005634017 conmon[341230]: conmon e34002af4abbcc209ad8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope/container/memory.events
Feb 28 05:24:40 np0005634017 elastic_hofstadter[341230]: 167 167
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:40.104782033 +0000 UTC m=+0.157988741 container died e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:24:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f4cd07d4b5426adccfeb537d21adccd22d47a2dfd9fd269aad24062bcf03164d-merged.mount: Deactivated successfully.
Feb 28 05:24:40 np0005634017 podman[341212]: 2026-02-28 10:24:40.153333104 +0000 UTC m=+0.206539792 container remove e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 28 05:24:40 np0005634017 systemd[1]: libpod-conmon-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope: Deactivated successfully.
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.304740504 +0000 UTC m=+0.036928506 container create 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:24:40 np0005634017 systemd[1]: Started libpod-conmon-3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e.scope.
Feb 28 05:24:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.289032751 +0000 UTC m=+0.021220763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.390788128 +0000 UTC m=+0.122976150 container init 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.398325996 +0000 UTC m=+0.130514038 container start 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.402029492 +0000 UTC m=+0.134217524 container attach 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 28 05:24:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 338 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 4.3 MiB/s wr, 155 op/s
Feb 28 05:24:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750158308' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.558 243456 DEBUG oslo_concurrency.processutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.566 243456 DEBUG nova.compute.provider_tree [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.589 243456 DEBUG nova.scheduler.client.report [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.620 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.642 243456 INFO nova.scheduler.client.report [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 08a4cd5e-f711-44d2-b17e-c1941be22e85#033[00m
Feb 28 05:24:40 np0005634017 nova_compute[243452]: 2026-02-28 10:24:40.708 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:40 np0005634017 upbeat_leavitt[341289]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:24:40 np0005634017 upbeat_leavitt[341289]: --> All data devices are unavailable
Feb 28 05:24:40 np0005634017 systemd[1]: libpod-3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e.scope: Deactivated successfully.
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.83603625 +0000 UTC m=+0.568224322 container died 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:24:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23-merged.mount: Deactivated successfully.
Feb 28 05:24:40 np0005634017 podman[341273]: 2026-02-28 10:24:40.878984789 +0000 UTC m=+0.611172821 container remove 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:24:40 np0005634017 systemd[1]: libpod-conmon-3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e.scope: Deactivated successfully.
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018071058773471251 of space, bias 1.0, pg target 0.5421317632041376 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00249289725762509 of space, bias 1.0, pg target 0.747869177287527 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.377670962743603e-07 of space, bias 4.0, pg target 0.0008853205155292325 quantized to 16 (current 16)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:24:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.372694479 +0000 UTC m=+0.061246068 container create 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:24:41 np0005634017 systemd[1]: Started libpod-conmon-4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb.scope.
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.348966455 +0000 UTC m=+0.037518064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:24:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.486833984 +0000 UTC m=+0.175385623 container init 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.49396114 +0000 UTC m=+0.182512749 container start 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:24:41 np0005634017 nice_elgamal[341404]: 167 167
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.499670424 +0000 UTC m=+0.188222023 container attach 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 28 05:24:41 np0005634017 systemd[1]: libpod-4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb.scope: Deactivated successfully.
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.500652583 +0000 UTC m=+0.189204182 container died 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:24:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bd66e5a32188e322eff9e4ead8a1cc3c850418ba36536f11f190e197909075a4-merged.mount: Deactivated successfully.
Feb 28 05:24:41 np0005634017 podman[341387]: 2026-02-28 10:24:41.546430114 +0000 UTC m=+0.234981683 container remove 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:24:41 np0005634017 systemd[1]: libpod-conmon-4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb.scope: Deactivated successfully.
Feb 28 05:24:41 np0005634017 podman[341428]: 2026-02-28 10:24:41.733115841 +0000 UTC m=+0.051495338 container create a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 05:24:41 np0005634017 systemd[1]: Started libpod-conmon-a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1.scope.
Feb 28 05:24:41 np0005634017 podman[341428]: 2026-02-28 10:24:41.711288451 +0000 UTC m=+0.029667897 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:24:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:41 np0005634017 podman[341428]: 2026-02-28 10:24:41.845201847 +0000 UTC m=+0.163581343 container init a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:24:41 np0005634017 podman[341428]: 2026-02-28 10:24:41.859392656 +0000 UTC m=+0.177772072 container start a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:24:41 np0005634017 podman[341428]: 2026-02-28 10:24:41.862533477 +0000 UTC m=+0.180912933 container attach a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:24:42 np0005634017 zen_knuth[341444]: {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:    "0": [
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:        {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "devices": [
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "/dev/loop3"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            ],
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_name": "ceph_lv0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_size": "21470642176",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "name": "ceph_lv0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "tags": {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cluster_name": "ceph",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.crush_device_class": "",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.encrypted": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.objectstore": "bluestore",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osd_id": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.type": "block",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.vdo": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.with_tpm": "0"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            },
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "type": "block",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "vg_name": "ceph_vg0"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:        }
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:    ],
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:    "1": [
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:        {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "devices": [
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "/dev/loop4"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            ],
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_name": "ceph_lv1",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_size": "21470642176",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "name": "ceph_lv1",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "tags": {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cluster_name": "ceph",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.crush_device_class": "",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.encrypted": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.objectstore": "bluestore",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osd_id": "1",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.type": "block",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.vdo": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.with_tpm": "0"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            },
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "type": "block",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "vg_name": "ceph_vg1"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:        }
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:    ],
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:    "2": [
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:        {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "devices": [
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "/dev/loop5"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            ],
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_name": "ceph_lv2",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_size": "21470642176",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "name": "ceph_lv2",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "tags": {
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.cluster_name": "ceph",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.crush_device_class": "",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.encrypted": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.objectstore": "bluestore",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osd_id": "2",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.type": "block",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.vdo": "0",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:                "ceph.with_tpm": "0"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            },
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "type": "block",
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:            "vg_name": "ceph_vg2"
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:        }
Feb 28 05:24:42 np0005634017 zen_knuth[341444]:    ]
Feb 28 05:24:42 np0005634017 zen_knuth[341444]: }
Feb 28 05:24:42 np0005634017 systemd[1]: libpod-a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1.scope: Deactivated successfully.
Feb 28 05:24:42 np0005634017 podman[341428]: 2026-02-28 10:24:42.228536001 +0000 UTC m=+0.546915447 container died a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 05:24:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f-merged.mount: Deactivated successfully.
Feb 28 05:24:42 np0005634017 podman[341428]: 2026-02-28 10:24:42.275803735 +0000 UTC m=+0.594183151 container remove a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:24:42 np0005634017 systemd[1]: libpod-conmon-a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1.scope: Deactivated successfully.
Feb 28 05:24:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 312 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 157 op/s
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.702570354 +0000 UTC m=+0.034692233 container create c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:24:42 np0005634017 systemd[1]: Started libpod-conmon-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope.
Feb 28 05:24:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.68754815 +0000 UTC m=+0.019670029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.790691877 +0000 UTC m=+0.122813796 container init c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.797756331 +0000 UTC m=+0.129878200 container start c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:24:42 np0005634017 systemd[1]: libpod-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope: Deactivated successfully.
Feb 28 05:24:42 np0005634017 interesting_sinoussi[341544]: 167 167
Feb 28 05:24:42 np0005634017 conmon[341544]: conmon c3aaaa94c8858e2a905a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope/container/memory.events
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.803890828 +0000 UTC m=+0.136012707 container attach c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.805423872 +0000 UTC m=+0.137545781 container died c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:24:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0d73f0d84920df7779fe02584012893928cf8da0cc715b73c38742931a0f4560-merged.mount: Deactivated successfully.
Feb 28 05:24:42 np0005634017 podman[341527]: 2026-02-28 10:24:42.838091225 +0000 UTC m=+0.170213084 container remove c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:24:42 np0005634017 systemd[1]: libpod-conmon-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope: Deactivated successfully.
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.938 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.941 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.941 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.941 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.942 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.944 243456 INFO nova.compute.manager [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Terminating instance#033[00m
Feb 28 05:24:42 np0005634017 nova_compute[243452]: 2026-02-28 10:24:42.946 243456 DEBUG nova.compute.manager [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:24:42 np0005634017 podman[341567]: 2026-02-28 10:24:42.986844169 +0000 UTC m=+0.042168958 container create b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:24:42 np0005634017 kernel: tap53819bfb-eb (unregistering): left promiscuous mode
Feb 28 05:24:42 np0005634017 NetworkManager[49805]: <info>  [1772274282.9946] device (tap53819bfb-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:24:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:43Z|01159|binding|INFO|Releasing lport 53819bfb-ebe3-4956-8f91-805dd04b5954 from this chassis (sb_readonly=0)
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:43Z|01160|binding|INFO|Setting lport 53819bfb-ebe3-4956-8f91-805dd04b5954 down in Southbound
Feb 28 05:24:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:43Z|01161|binding|INFO|Removing iface tap53819bfb-eb ovn-installed in OVS
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.014 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.019 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:f2:95 10.100.0.9'], port_security=['fa:16:3e:e5:f2:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0d4ce277-1bbb-4926-a7ee-30f5df57fff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=53819bfb-ebe3-4956-8f91-805dd04b5954) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.021 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 53819bfb-ebe3-4956-8f91-805dd04b5954 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.023 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ec4804c-4a13-485a-9300-db6edf74473b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0b81a0-4349-4514-a175-aaa1be9b0b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.025 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b namespace which is not needed anymore#033[00m
Feb 28 05:24:43 np0005634017 systemd[1]: Started libpod-conmon-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope.
Feb 28 05:24:43 np0005634017 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Feb 28 05:24:43 np0005634017 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006a.scope: Consumed 16.844s CPU time.
Feb 28 05:24:43 np0005634017 systemd-machined[209480]: Machine qemu-136-instance-0000006a terminated.
Feb 28 05:24:43 np0005634017 podman[341567]: 2026-02-28 10:24:42.966129001 +0000 UTC m=+0.021453840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:24:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:24:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:24:43 np0005634017 podman[341567]: 2026-02-28 10:24:43.08597405 +0000 UTC m=+0.141298859 container init b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:24:43 np0005634017 podman[341567]: 2026-02-28 10:24:43.092413026 +0000 UTC m=+0.147737825 container start b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:24:43 np0005634017 podman[341567]: 2026-02-28 10:24:43.09602293 +0000 UTC m=+0.151347739 container attach b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 28 05:24:43 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : haproxy version is 2.8.14-c23fe91
Feb 28 05:24:43 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : path to executable is /usr/sbin/haproxy
Feb 28 05:24:43 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [WARNING]  (335699) : Exiting Master process...
Feb 28 05:24:43 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [ALERT]    (335699) : Current worker (335701) exited with code 143 (Terminated)
Feb 28 05:24:43 np0005634017 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [WARNING]  (335699) : All workers exited. Exiting... (0)
Feb 28 05:24:43 np0005634017 systemd[1]: libpod-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334.scope: Deactivated successfully.
Feb 28 05:24:43 np0005634017 podman[341613]: 2026-02-28 10:24:43.155209758 +0000 UTC m=+0.039690466 container died bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334-userdata-shm.mount: Deactivated successfully.
Feb 28 05:24:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f86c5f2ef6c8614823a87cf49a92bf5ddc50b5f72de34d838a1adf35e0b322e0-merged.mount: Deactivated successfully.
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.188 243456 INFO nova.virt.libvirt.driver [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance destroyed successfully.#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.189 243456 DEBUG nova.objects.instance [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:43 np0005634017 podman[341613]: 2026-02-28 10:24:43.197834189 +0000 UTC m=+0.082314887 container cleanup bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:24:43 np0005634017 systemd[1]: libpod-conmon-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334.scope: Deactivated successfully.
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.203 243456 DEBUG nova.virt.libvirt.vif [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-168997413',display_name='tempest-₡-168997413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--168997413',id=106,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:22:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-9boikcvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707
',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:22:29Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=0d4ce277-1bbb-4926-a7ee-30f5df57fff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.204 243456 DEBUG nova.network.os_vif_util [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.204 243456 DEBUG nova.network.os_vif_util [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.205 243456 DEBUG os_vif [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.207 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53819bfb-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.212 243456 INFO os_vif [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb')#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.229 243456 DEBUG nova.compute.manager [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-unplugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.230 243456 DEBUG oslo_concurrency.lockutils [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.230 243456 DEBUG oslo_concurrency.lockutils [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.230 243456 DEBUG oslo_concurrency.lockutils [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.231 243456 DEBUG nova.compute.manager [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] No waiting events found dispatching network-vif-unplugged-53819bfb-ebe3-4956-8f91-805dd04b5954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.231 243456 DEBUG nova.compute.manager [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-unplugged-53819bfb-ebe3-4956-8f91-805dd04b5954 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:24:43 np0005634017 podman[341648]: 2026-02-28 10:24:43.271865405 +0000 UTC m=+0.043461755 container remove bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8250a1-e532-4d9d-bf3a-8c69dbcaf983]: (4, ('Sat Feb 28 10:24:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b (bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334)\nbf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334\nSat Feb 28 10:24:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b (bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334)\nbf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df207e36-f3ac-4345-9c76-2a8229889fec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.283 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 kernel: tap7ec4804c-40: left promiscuous mode
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb779f15-5f25-4550-8402-1ce5a4e4dbee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.316 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[abc4cfc9-8cd2-47c3-b5c5-702e9bfa9f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.319 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[57328262-c130-452a-ba74-73f1047584b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62fa9b5a-2a26-45c5-a93f-e3b100fc9b50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565502, 'reachable_time': 31298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341691, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.336 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:24:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.336 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[125f5c25-0012-4d9b-bc1e-a22df0311a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.474 243456 INFO nova.virt.libvirt.driver [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deleting instance files /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_del#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.475 243456 INFO nova.virt.libvirt.driver [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deletion of /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_del complete#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.523 243456 INFO nova.compute.manager [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.523 243456 DEBUG oslo.service.loopingcall [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.523 243456 DEBUG nova.compute.manager [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:24:43 np0005634017 nova_compute[243452]: 2026-02-28 10:24:43.524 243456 DEBUG nova.network.neutron [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:24:43 np0005634017 systemd[1]: run-netns-ovnmeta\x2d7ec4804c\x2d4a13\x2d485a\x2d9300\x2ddb6edf74473b.mount: Deactivated successfully.
Feb 28 05:24:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:43 np0005634017 lvm[341753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:24:43 np0005634017 lvm[341753]: VG ceph_vg0 finished
Feb 28 05:24:43 np0005634017 lvm[341755]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:24:43 np0005634017 lvm[341755]: VG ceph_vg1 finished
Feb 28 05:24:43 np0005634017 lvm[341756]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:24:43 np0005634017 lvm[341756]: VG ceph_vg2 finished
Feb 28 05:24:43 np0005634017 crazy_bose[341586]: {}
Feb 28 05:24:43 np0005634017 systemd[1]: libpod-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope: Deactivated successfully.
Feb 28 05:24:43 np0005634017 systemd[1]: libpod-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope: Consumed 1.117s CPU time.
Feb 28 05:24:43 np0005634017 conmon[341586]: conmon b0db68dc84f47bbee464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope/container/memory.events
Feb 28 05:24:43 np0005634017 podman[341567]: 2026-02-28 10:24:43.929273851 +0000 UTC m=+0.984598680 container died b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:24:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb-merged.mount: Deactivated successfully.
Feb 28 05:24:43 np0005634017 podman[341567]: 2026-02-28 10:24:43.971648144 +0000 UTC m=+1.026972933 container remove b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:24:43 np0005634017 systemd[1]: libpod-conmon-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope: Deactivated successfully.
Feb 28 05:24:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:24:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:24:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:24:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.383 243456 DEBUG nova.network.neutron [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.405 243456 INFO nova.compute.manager [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 0.88 seconds to deallocate network for instance.#033[00m
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.465 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.465 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 312 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 535 KiB/s rd, 3.2 MiB/s wr, 142 op/s
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.521 243456 DEBUG nova.compute.manager [req-4e55942d-7be3-4fe5-a4aa-4b5198f1b6bd req-3c897c44-ff5a-4e98-8d9b-ee9a834fe229 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-deleted-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:44 np0005634017 nova_compute[243452]: 2026-02-28 10:24:44.549 243456 DEBUG oslo_concurrency.processutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713305397' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.119 243456 DEBUG oslo_concurrency.processutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.127 243456 DEBUG nova.compute.provider_tree [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.146 243456 DEBUG nova.scheduler.client.report [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.173 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.212 243456 INFO nova.scheduler.client.report [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.340 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.366 243456 DEBUG nova.compute.manager [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.367 243456 DEBUG oslo_concurrency.lockutils [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.368 243456 DEBUG oslo_concurrency.lockutils [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.368 243456 DEBUG oslo_concurrency.lockutils [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.368 243456 DEBUG nova.compute.manager [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] No waiting events found dispatching network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:24:45 np0005634017 nova_compute[243452]: 2026-02-28 10:24:45.369 243456 WARNING nova.compute.manager [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received unexpected event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903058916' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:24:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903058916' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:24:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 271 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 3.2 MiB/s wr, 167 op/s
Feb 28 05:24:48 np0005634017 nova_compute[243452]: 2026-02-28 10:24:48.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 721 KiB/s wr, 65 op/s
Feb 28 05:24:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:24:48Z|01162|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 05:24:48 np0005634017 nova_compute[243452]: 2026-02-28 10:24:48.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:49 np0005634017 nova_compute[243452]: 2026-02-28 10:24:49.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:49 np0005634017 nova_compute[243452]: 2026-02-28 10:24:49.969 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:49 np0005634017 nova_compute[243452]: 2026-02-28 10:24:49.970 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:49 np0005634017 nova_compute[243452]: 2026-02-28 10:24:49.992 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.083 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.084 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.091 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.091 243456 INFO nova.compute.claims [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:24:50 np0005634017 podman[341819]: 2026-02-28 10:24:50.123521057 +0000 UTC m=+0.060547599 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:24:50 np0005634017 podman[341818]: 2026-02-28 10:24:50.16035708 +0000 UTC m=+0.094694394 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.169 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274275.167945, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.169 243456 INFO nova.compute.manager [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Stopped (Lifecycle Event)
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.192 243456 DEBUG nova.compute.manager [None req-249befa2-a20b-41fd-b58f-8e4cf21e93c3 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.250 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:24:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 28 05:24:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489913355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.805 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.815 243456 DEBUG nova.compute.provider_tree [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.836 243456 DEBUG nova.scheduler.client.report [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.872 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.873 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.938 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.939 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.969 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:24:50 np0005634017 nova_compute[243452]: 2026-02-28 10:24:50.991 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.100 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.102 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.104 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Creating image(s)
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.139 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.173 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.197 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.201 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.284 243456 DEBUG nova.policy [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.291 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.292 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.293 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.293 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.333 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.339 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.584 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.654 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.740 243456 DEBUG nova.objects.instance [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f873b2a-d04c-475f-941d-397e5a9bc81a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.758 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.759 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Ensure instance console log exists: /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.759 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.759 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:24:51 np0005634017 nova_compute[243452]: 2026-02-28 10:24:51.760 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 252 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1019 KiB/s wr, 32 op/s
Feb 28 05:24:52 np0005634017 nova_compute[243452]: 2026-02-28 10:24:52.935 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Successfully created port: 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:24:53 np0005634017 nova_compute[243452]: 2026-02-28 10:24:53.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:54 np0005634017 nova_compute[243452]: 2026-02-28 10:24:54.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:24:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 252 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1008 KiB/s wr, 29 op/s
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.615 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Successfully updated port: 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.632 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.632 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.632 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.738 243456 DEBUG nova.compute.manager [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.739 243456 DEBUG nova.compute.manager [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing instance network info cache due to event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.739 243456 DEBUG oslo_concurrency.lockutils [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.833 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.834 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.852 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:24:55 np0005634017 nova_compute[243452]: 2026-02-28 10:24:55.860 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.087 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.088 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.098 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.098 243456 INFO nova.compute.claims [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.408 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:24:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 05:24:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:24:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925535493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.951 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.958 243456 DEBUG nova.compute.provider_tree [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:24:56 np0005634017 nova_compute[243452]: 2026-02-28 10:24:56.979 243456 DEBUG nova.scheduler.client.report [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.003 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.005 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.064 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.079 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.099 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.200 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.201 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.202 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating image(s)
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.228 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.254 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.276 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.280 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.373 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.375 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.376 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.376 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.410 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.415 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.536 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.566 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.567 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance network_info: |[{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.568 243456 DEBUG oslo_concurrency.lockutils [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.569 243456 DEBUG nova.network.neutron [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.575 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start _get_guest_xml network_info=[{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.584 243456 WARNING nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.605 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.606 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.610 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.611 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.611 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.612 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.612 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.613 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.613 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.613 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.614 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.615 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.615 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.616 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.616 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.616 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.621 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.670 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.749 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] resizing rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.842 243456 DEBUG nova.objects.instance [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.855 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.855 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Ensure instance console log exists: /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.856 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.856 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.856 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.857 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.861 243456 WARNING nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:24:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:57.864 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:24:57.864 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.866 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.866 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.869 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.869 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.872 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.872 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:24:57 np0005634017 nova_compute[243452]: 2026-02-28 10:24:57.874 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086382063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.183 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274283.182444, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.184 243456 INFO nova.compute.manager [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.207 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.233 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.237 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.271 243456 DEBUG nova.compute.manager [None req-18a51129-9050-4a72-9df6-da7f63e0198b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702629668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.439 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.465 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.469 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547109125' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.800 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.802 243456 DEBUG nova.virt.libvirt.vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=115,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-8fgbwnao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:51Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9f873b2a-d04c-475f-941d-397e5a9bc81a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.802 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.803 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.804 243456 DEBUG nova.objects.instance [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f873b2a-d04c-475f-941d-397e5a9bc81a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.818 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <uuid>9f873b2a-d04c-475f-941d-397e5a9bc81a</uuid>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <name>instance-00000073</name>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442</nova:name>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:24:57</nova:creationTime>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <nova:port uuid="8b15ec49-ee59-41b3-b0e8-b6779ab7bde7">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <entry name="serial">9f873b2a-d04c-475f-941d-397e5a9bc81a</entry>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <entry name="uuid">9f873b2a-d04c-475f-941d-397e5a9bc81a</entry>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9f873b2a-d04c-475f-941d-397e5a9bc81a_disk">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:51:ab:a9"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <target dev="tap8b15ec49-ee"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/console.log" append="off"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:24:58 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:24:58 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:24:58 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:24:58 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.818 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Preparing to wait for external event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.818 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.819 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.820 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.820 243456 DEBUG nova.virt.libvirt.vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=115,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-8fgbwnao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:51Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9f873b2a-d04c-475f-941d-397e5a9bc81a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.820 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.821 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.821 243456 DEBUG os_vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.822 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.822 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.825 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b15ec49-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.825 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b15ec49-ee, col_values=(('external_ids', {'iface-id': '8b15ec49-ee59-41b3-b0e8-b6779ab7bde7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:ab:a9', 'vm-uuid': '9f873b2a-d04c-475f-941d-397e5a9bc81a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:58 np0005634017 NetworkManager[49805]: <info>  [1772274298.8282] manager: (tap8b15ec49-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.835 243456 INFO os_vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee')#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:51:ab:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Using config drive#033[00m
Feb 28 05:24:58 np0005634017 nova_compute[243452]: 2026-02-28 10:24:58.923 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:24:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230596899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.032 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.034 243456 DEBUG nova.objects.instance [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.049 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <uuid>0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</uuid>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <name>instance-00000074</name>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerShowV254Test-server-434009731</nova:name>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:24:57</nova:creationTime>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:user uuid="df63289bf60946e2a983ee2fa57352b1">tempest-ServerShowV254Test-1359990056-project-member</nova:user>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <nova:project uuid="ff0879b364b142e782530e413eb35f55">tempest-ServerShowV254Test-1359990056</nova:project>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <entry name="serial">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <entry name="uuid">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log" append="off"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:24:59 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:24:59 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:24:59 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:24:59 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.085 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.086 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.086 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Using config drive#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.112 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.412 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating config drive at /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.415 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpahb1kf1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.571 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpahb1kf1y" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.597 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.601 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.684 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Creating config drive at /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.689 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5are5zi1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.761 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.762 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting local config drive /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config because it was imported into RBD.#033[00m
Feb 28 05:24:59 np0005634017 systemd-machined[209480]: New machine qemu-145-instance-00000074.
Feb 28 05:24:59 np0005634017 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.832 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5are5zi1" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.867 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:24:59 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.871 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:24:59.999 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.001 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deleting local config drive /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config because it was imported into RBD.#033[00m
Feb 28 05:25:00 np0005634017 kernel: tap8b15ec49-ee: entered promiscuous mode
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.050 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:00Z|01163|binding|INFO|Claiming lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for this chassis.
Feb 28 05:25:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:00Z|01164|binding|INFO|8b15ec49-ee59-41b3-b0e8-b6779ab7bde7: Claiming fa:16:3e:51:ab:a9 10.100.0.5
Feb 28 05:25:00 np0005634017 NetworkManager[49805]: <info>  [1772274300.0540] manager: (tap8b15ec49-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/481)
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.058 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:ab:a9 10.100.0.5'], port_security=['fa:16:3e:51:ab:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9f873b2a-d04c-475f-941d-397e5a9bc81a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec278221-1438-4a93-ade3-9d533e8b727e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.061 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c bound to our chassis#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.063 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ee12748-b368-477a-aacb-62375ce0b51c#033[00m
Feb 28 05:25:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:00Z|01165|binding|INFO|Setting lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 ovn-installed in OVS
Feb 28 05:25:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:00Z|01166|binding|INFO|Setting lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 up in Southbound
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.067 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:00 np0005634017 systemd-machined[209480]: New machine qemu-146-instance-00000073.
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.079 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04b5462a-d218-46b2-ab27-d704534df6ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:00 np0005634017 systemd[1]: Started Virtual Machine qemu-146-instance-00000073.
Feb 28 05:25:00 np0005634017 systemd-udevd[342544]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:25:00 np0005634017 NetworkManager[49805]: <info>  [1772274300.1113] device (tap8b15ec49-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:25:00 np0005634017 NetworkManager[49805]: <info>  [1772274300.1117] device (tap8b15ec49-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.112 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[95e36bc7-9035-4bae-bb7f-c525f1426f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.118 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[904b94f5-bd07-4dc7-91f4-0314806a977f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.151 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef3e36c-8102-43da-a52a-d76cb59a4eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.170 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[922fbba9-528e-4e54-944b-43cd57dfa7a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342563, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.187 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e80f999b-d779-4be2-8ebd-a18b2df7f1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576826, 'tstamp': 576826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342564, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576829, 'tstamp': 576829}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342564, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.189 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.193 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ee12748-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.193 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ee12748-b0, col_values=(('external_ids', {'iface-id': '42d54653-4cb7-4be2-99ee-96f5681da7be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.240 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.2397754, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.242 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.247 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.248 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.253 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance spawned successfully.#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.253 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.277 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.280 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.299 243456 DEBUG nova.network.neutron [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updated VIF entry in instance network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.300 243456 DEBUG nova.network.neutron [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.304 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.304 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.304 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.305 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.305 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.306 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.311 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.312 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.2420087, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.312 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Started (Lifecycle Event)#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.315 243456 DEBUG oslo_concurrency.lockutils [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.346 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.350 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.372 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.383 243456 INFO nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 3.18 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.384 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.409 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.408746, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.409 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Started (Lifecycle Event)#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.436 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.442 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.4095762, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.442 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.455 243456 INFO nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 4.45 seconds to build instance.#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.466 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.470 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.473 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 302 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.8 MiB/s wr, 51 op/s
Feb 28 05:25:00 np0005634017 nova_compute[243452]: 2026-02-28 10:25:00.492 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.417 243456 INFO nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Rebuilding instance#033[00m
Feb 28 05:25:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.701 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.719 243456 DEBUG nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.764 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.776 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.787 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'resources' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.798 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.810 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:25:02 np0005634017 nova_compute[243452]: 2026-02-28 10:25:02.815 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:25:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:03 np0005634017 nova_compute[243452]: 2026-02-28 10:25:03.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:04 np0005634017 nova_compute[243452]: 2026-02-28 10:25:04.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Feb 28 05:25:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 136 op/s
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.537 243456 DEBUG nova.compute.manager [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.538 243456 DEBUG oslo_concurrency.lockutils [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.539 243456 DEBUG oslo_concurrency.lockutils [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.539 243456 DEBUG oslo_concurrency.lockutils [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.540 243456 DEBUG nova.compute.manager [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Processing event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.541 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.554 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274306.553912, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.555 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.558 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.563 243456 INFO nova.virt.libvirt.driver [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance spawned successfully.#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.564 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.583 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.594 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.601 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.602 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.603 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.604 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.605 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.606 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.650 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.693 243456 INFO nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 15.59 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.694 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.781 243456 INFO nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 16.73 seconds to build instance.#033[00m
Feb 28 05:25:06 np0005634017 nova_compute[243452]: 2026-02-28 10:25:06.804 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 28 05:25:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.791 243456 DEBUG nova.compute.manager [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.792 243456 DEBUG oslo_concurrency.lockutils [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.793 243456 DEBUG oslo_concurrency.lockutils [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.793 243456 DEBUG oslo_concurrency.lockutils [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.793 243456 DEBUG nova.compute.manager [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] No waiting events found dispatching network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.794 243456 WARNING nova.compute.manager [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received unexpected event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:25:08 np0005634017 nova_compute[243452]: 2026-02-28 10:25:08.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:09 np0005634017 nova_compute[243452]: 2026-02-28 10:25:09.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.318 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:35:99 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b6ad741-74ba-4342-9c4e-7da3bec25db6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d3cc0106-9bf3-4ca8-8469-0e9c9bf5b98a) old=Port_Binding(mac=['fa:16:3e:f2:35:99 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.320 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d3cc0106-9bf3-4ca8-8469-0e9c9bf5b98a in datapath d2c1432a-6bfa-4126-b876-01e6f5677734 updated#033[00m
Feb 28 05:25:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.322 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2c1432a-6bfa-4126-b876-01e6f5677734, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:25:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.324 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1987b6e-0d04-4a1b-b2cd-a5d6cd39f052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Feb 28 05:25:11 np0005634017 nova_compute[243452]: 2026-02-28 10:25:11.356 243456 DEBUG nova.compute.manager [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:11 np0005634017 nova_compute[243452]: 2026-02-28 10:25:11.357 243456 DEBUG nova.compute.manager [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing instance network info cache due to event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:25:11 np0005634017 nova_compute[243452]: 2026-02-28 10:25:11.358 243456 DEBUG oslo_concurrency.lockutils [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:11 np0005634017 nova_compute[243452]: 2026-02-28 10:25:11.359 243456 DEBUG oslo_concurrency.lockutils [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:11 np0005634017 nova_compute[243452]: 2026-02-28 10:25:11.360 243456 DEBUG nova.network.neutron [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:25:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Feb 28 05:25:12 np0005634017 nova_compute[243452]: 2026-02-28 10:25:12.654 243456 DEBUG nova.network.neutron [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updated VIF entry in instance network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:25:12 np0005634017 nova_compute[243452]: 2026-02-28 10:25:12.655 243456 DEBUG nova.network.neutron [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:12 np0005634017 nova_compute[243452]: 2026-02-28 10:25:12.674 243456 DEBUG oslo_concurrency.lockutils [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:12 np0005634017 nova_compute[243452]: 2026-02-28 10:25:12.863 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 28 05:25:13 np0005634017 nova_compute[243452]: 2026-02-28 10:25:13.487 243456 DEBUG nova.compute.manager [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:13 np0005634017 nova_compute[243452]: 2026-02-28 10:25:13.487 243456 DEBUG nova.compute.manager [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing instance network info cache due to event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:25:13 np0005634017 nova_compute[243452]: 2026-02-28 10:25:13.488 243456 DEBUG oslo_concurrency.lockutils [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:13 np0005634017 nova_compute[243452]: 2026-02-28 10:25:13.488 243456 DEBUG oslo_concurrency.lockutils [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:13 np0005634017 nova_compute[243452]: 2026-02-28 10:25:13.489 243456 DEBUG nova.network.neutron [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:25:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:13 np0005634017 nova_compute[243452]: 2026-02-28 10:25:13.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:14 np0005634017 nova_compute[243452]: 2026-02-28 10:25:14.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 993 KiB/s wr, 147 op/s
Feb 28 05:25:15 np0005634017 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 28 05:25:15 np0005634017 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 11.325s CPU time.
Feb 28 05:25:15 np0005634017 systemd-machined[209480]: Machine qemu-145-instance-00000074 terminated.
Feb 28 05:25:15 np0005634017 nova_compute[243452]: 2026-02-28 10:25:15.877 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance shutdown successfully after 13 seconds.#033[00m
Feb 28 05:25:15 np0005634017 nova_compute[243452]: 2026-02-28 10:25:15.887 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance destroyed successfully.#033[00m
Feb 28 05:25:15 np0005634017 nova_compute[243452]: 2026-02-28 10:25:15.893 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance destroyed successfully.#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.086 243456 DEBUG nova.network.neutron [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updated VIF entry in instance network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.087 243456 DEBUG nova.network.neutron [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.104 243456 DEBUG oslo_concurrency.lockutils [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.226 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting instance files /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.227 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deletion of /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del complete#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.404 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.405 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating image(s)#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.428 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.465 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.496 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.501 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.579 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.581 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.582 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.582 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.615 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:16 np0005634017 nova_compute[243452]: 2026-02-28 10:25:16.622 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.021 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.108 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] resizing rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.211 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.212 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Ensure instance console log exists: /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.213 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.213 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.213 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.215 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.220 243456 WARNING nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.226 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.227 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.231 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.232 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.232 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.232 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.233 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.233 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.235 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.235 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.235 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.236 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.236 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.257 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:25:17 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447145778' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.853 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.888 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:17 np0005634017 nova_compute[243452]: 2026-02-28 10:25:17.894 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:18Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:ab:a9 10.100.0.5
Feb 28 05:25:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:18Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:ab:a9 10.100.0.5
Feb 28 05:25:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:25:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168694531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.491 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.497 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <uuid>0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</uuid>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <name>instance-00000074</name>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:name>tempest-ServerShowV254Test-server-434009731</nova:name>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:25:17</nova:creationTime>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:user uuid="df63289bf60946e2a983ee2fa57352b1">tempest-ServerShowV254Test-1359990056-project-member</nova:user>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <nova:project uuid="ff0879b364b142e782530e413eb35f55">tempest-ServerShowV254Test-1359990056</nova:project>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <entry name="serial">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <entry name="uuid">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log" append="off"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:25:18 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:25:18 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:25:18 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:25:18 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:25:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 347 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 180 op/s
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.550 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.550 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.551 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Using config drive#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.579 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.606 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.772 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating config drive at /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.776 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqg_p77kq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.929 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqg_p77kq" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.969 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:18 np0005634017 nova_compute[243452]: 2026-02-28 10:25:18.975 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.189 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.190 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting local config drive /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config because it was imported into RBD.#033[00m
Feb 28 05:25:19 np0005634017 systemd-machined[209480]: New machine qemu-147-instance-00000074.
Feb 28 05:25:19 np0005634017 systemd[1]: Started Virtual Machine qemu-147-instance-00000074.
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.581 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.582 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274319.580393, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.582 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.588 243456 DEBUG nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.589 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.595 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance spawned successfully.#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.596 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.606 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.609 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.634 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.635 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.635 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.636 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.636 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.636 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.640 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.640 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274319.5868073, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.640 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Started (Lifecycle Event)#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.685 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.719 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.733 243456 DEBUG nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.790 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.790 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.790 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 28 05:25:19 np0005634017 nova_compute[243452]: 2026-02-28 10:25:19.859 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.474 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.476 243456 INFO nova.compute.manager [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Terminating instance#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.477 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "refresh_cache-0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.477 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquired lock "refresh_cache-0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.477 243456 DEBUG nova.network.neutron [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:25:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 350 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 297 op/s
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.664 243456 DEBUG nova.network.neutron [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:25:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:20.684 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:20.685 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:25:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:20.685 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:20 np0005634017 nova_compute[243452]: 2026-02-28 10:25:20.686 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.025 243456 DEBUG nova.network.neutron [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.040 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Releasing lock "refresh_cache-0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.041 243456 DEBUG nova.compute.manager [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:25:21 np0005634017 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 28 05:25:21 np0005634017 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Consumed 1.842s CPU time.
Feb 28 05:25:21 np0005634017 systemd-machined[209480]: Machine qemu-147-instance-00000074 terminated.
Feb 28 05:25:21 np0005634017 podman[342976]: 2026-02-28 10:25:21.135674253 +0000 UTC m=+0.062325780 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:25:21 np0005634017 podman[342975]: 2026-02-28 10:25:21.171500087 +0000 UTC m=+0.101199902 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.261 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance destroyed successfully.#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.262 243456 DEBUG nova.objects.instance [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'resources' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.633 243456 INFO nova.virt.libvirt.driver [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting instance files /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.634 243456 INFO nova.virt.libvirt.driver [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deletion of /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del complete#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.700 243456 INFO nova.compute.manager [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.701 243456 DEBUG oslo.service.loopingcall [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.702 243456 DEBUG nova.compute.manager [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.702 243456 DEBUG nova.network.neutron [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.855 243456 DEBUG nova.network.neutron [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.872 243456 DEBUG nova.network.neutron [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.894 243456 INFO nova.compute.manager [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 0.19 seconds to deallocate network for instance.#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.977 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:21 np0005634017 nova_compute[243452]: 2026-02-28 10:25:21.978 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.088 243456 DEBUG oslo_concurrency.processutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 337 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 335 op/s
Feb 28 05:25:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:25:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713539591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.709 243456 DEBUG oslo_concurrency.processutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.717 243456 DEBUG nova.compute.provider_tree [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.741 243456 DEBUG nova.scheduler.client.report [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.774 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.827 243456 INFO nova.scheduler.client.report [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Deleted allocations for instance 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64#033[00m
Feb 28 05:25:22 np0005634017 nova_compute[243452]: 2026-02-28 10:25:22.926 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:23 np0005634017 nova_compute[243452]: 2026-02-28 10:25:23.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:24 np0005634017 nova_compute[243452]: 2026-02-28 10:25:24.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 337 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.1 MiB/s wr, 272 op/s
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.917 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.918 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.919 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.919 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.920 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.922 243456 INFO nova.compute.manager [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Terminating instance#033[00m
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.924 243456 DEBUG nova.compute.manager [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:25:25 np0005634017 kernel: tap8b15ec49-ee (unregistering): left promiscuous mode
Feb 28 05:25:25 np0005634017 NetworkManager[49805]: <info>  [1772274325.9806] device (tap8b15ec49-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:25:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:25Z|01167|binding|INFO|Releasing lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 from this chassis (sb_readonly=0)
Feb 28 05:25:25 np0005634017 nova_compute[243452]: 2026-02-28 10:25:25.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:25Z|01168|binding|INFO|Setting lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 down in Southbound
Feb 28 05:25:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:25Z|01169|binding|INFO|Removing iface tap8b15ec49-ee ovn-installed in OVS
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:25.999 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:ab:a9 10.100.0.5', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9f873b2a-d04c-475f-941d-397e5a9bc81a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.001 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c unbound from our chassis#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.002 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ee12748-b368-477a-aacb-62375ce0b51c#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.006 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.021 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d77caf35-3945-4312-9379-4872990fdf5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:26 np0005634017 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Deactivated successfully.
Feb 28 05:25:26 np0005634017 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Consumed 11.901s CPU time.
Feb 28 05:25:26 np0005634017 systemd-machined[209480]: Machine qemu-146-instance-00000073 terminated.
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.052 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fc1c45-e8ca-4959-8266-f94a3e8daf2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.055 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[276791a5-9afb-4917-abc7-b06d63edc331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.075 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[86e88844-506a-4754-a901-d775ea8434ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.092 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0d55d9-9b20-4aec-95d3-852eeb51a55b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343075, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.107 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b2bf61-e8be-42b2-a943-fb4618c9dba7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576826, 'tstamp': 576826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343076, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576829, 'tstamp': 576829}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343076, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.109 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.114 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ee12748-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.114 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.114 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ee12748-b0, col_values=(('external_ids', {'iface-id': '42d54653-4cb7-4be2-99ee-96f5681da7be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.115 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.163 243456 INFO nova.virt.libvirt.driver [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance destroyed successfully.#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.163 243456 DEBUG nova.objects.instance [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 9f873b2a-d04c-475f-941d-397e5a9bc81a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.193 243456 DEBUG nova.virt.libvirt.vif [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=115,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:25:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-8fgbwnao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:25:06Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9f873b2a-d04c-475f-941d-397e5a9bc81a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.195 243456 DEBUG nova.network.os_vif_util [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.196 243456 DEBUG nova.network.os_vif_util [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.196 243456 DEBUG os_vif [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b15ec49-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.204 243456 INFO os_vif [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee')#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.488 243456 INFO nova.virt.libvirt.driver [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deleting instance files /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a_del#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.489 243456 INFO nova.virt.libvirt.driver [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deletion of /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a_del complete#033[00m
Feb 28 05:25:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 312 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 302 op/s
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.554 243456 DEBUG nova.compute.manager [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-unplugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.555 243456 DEBUG oslo_concurrency.lockutils [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.556 243456 DEBUG oslo_concurrency.lockutils [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.556 243456 DEBUG oslo_concurrency.lockutils [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.557 243456 DEBUG nova.compute.manager [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] No waiting events found dispatching network-vif-unplugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.557 243456 DEBUG nova.compute.manager [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-unplugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.561 243456 INFO nova.compute.manager [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.562 243456 DEBUG oslo.service.loopingcall [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.563 243456 DEBUG nova.compute.manager [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:25:26 np0005634017 nova_compute[243452]: 2026-02-28 10:25:26.563 243456 DEBUG nova.network.neutron [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.139 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.273 243456 DEBUG nova.network.neutron [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.300 243456 INFO nova.compute.manager [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 0.74 seconds to deallocate network for instance.#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.355 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.356 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.434 243456 DEBUG oslo_concurrency.processutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:25:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728565344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.985 243456 DEBUG oslo_concurrency.processutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:27 np0005634017 nova_compute[243452]: 2026-02-28 10:25:27.992 243456 DEBUG nova.compute.provider_tree [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.011 243456 DEBUG nova.scheduler.client.report [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.036 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.077 243456 INFO nova.scheduler.client.report [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 9f873b2a-d04c-475f-941d-397e5a9bc81a#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.163 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 279 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 281 op/s
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.716 243456 DEBUG nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.717 243456 DEBUG oslo_concurrency.lockutils [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.718 243456 DEBUG oslo_concurrency.lockutils [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.718 243456 DEBUG oslo_concurrency.lockutils [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.719 243456 DEBUG nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] No waiting events found dispatching network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.719 243456 WARNING nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received unexpected event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:25:28 np0005634017 nova_compute[243452]: 2026-02-28 10:25:28.720 243456 DEBUG nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-deleted-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:29 np0005634017 nova_compute[243452]: 2026-02-28 10:25:29.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:25:29
Feb 28 05:25:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:25:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:25:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr']
Feb 28 05:25:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:25:29 np0005634017 nova_compute[243452]: 2026-02-28 10:25:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:29 np0005634017 nova_compute[243452]: 2026-02-28 10:25:29.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 233 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.568 243456 DEBUG nova.compute.manager [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.569 243456 DEBUG nova.compute.manager [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing instance network info cache due to event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.570 243456 DEBUG oslo_concurrency.lockutils [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.570 243456 DEBUG oslo_concurrency.lockutils [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.571 243456 DEBUG nova.network.neutron [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.623 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.624 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.624 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.625 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.625 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.627 243456 INFO nova.compute.manager [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Terminating instance#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.630 243456 DEBUG nova.compute.manager [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:25:30 np0005634017 kernel: tapcc564e14-81 (unregistering): left promiscuous mode
Feb 28 05:25:30 np0005634017 NetworkManager[49805]: <info>  [1772274330.6839] device (tapcc564e14-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:30Z|01170|binding|INFO|Releasing lport cc564e14-816b-4b92-877a-db0b2ddd0285 from this chassis (sb_readonly=0)
Feb 28 05:25:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:30Z|01171|binding|INFO|Setting lport cc564e14-816b-4b92-877a-db0b2ddd0285 down in Southbound
Feb 28 05:25:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:25:30Z|01172|binding|INFO|Removing iface tapcc564e14-81 ovn-installed in OVS
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.703 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:45:f7 10.100.0.10'], port_security=['fa:16:3e:cc:45:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a1bf329d-ed65-4cbc-99cb-e49716d1b24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b899b7a-4953-43a9-ae1c-b9897419d094 ec278221-1438-4a93-ade3-9d533e8b727e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cc564e14-816b-4b92-877a-db0b2ddd0285) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cc564e14-816b-4b92-877a-db0b2ddd0285 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c unbound from our chassis#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.708 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ee12748-b368-477a-aacb-62375ce0b51c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.709 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf9af08-c615-4d17-bbd8-b26b58f75ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.711 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c namespace which is not needed anymore#033[00m
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:25:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:25:30 np0005634017 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000072.scope: Deactivated successfully.
Feb 28 05:25:30 np0005634017 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000072.scope: Consumed 14.712s CPU time.
Feb 28 05:25:30 np0005634017 systemd-machined[209480]: Machine qemu-144-instance-00000072 terminated.
Feb 28 05:25:30 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : haproxy version is 2.8.14-c23fe91
Feb 28 05:25:30 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : path to executable is /usr/sbin/haproxy
Feb 28 05:25:30 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [WARNING]  (340965) : Exiting Master process...
Feb 28 05:25:30 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [ALERT]    (340965) : Current worker (340967) exited with code 143 (Terminated)
Feb 28 05:25:30 np0005634017 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [WARNING]  (340965) : All workers exited. Exiting... (0)
Feb 28 05:25:30 np0005634017 systemd[1]: libpod-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662.scope: Deactivated successfully.
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 podman[343154]: 2026-02-28 10:25:30.855978146 +0000 UTC m=+0.057796029 container died a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.866 243456 INFO nova.virt.libvirt.driver [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance destroyed successfully.#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.867 243456 DEBUG nova.objects.instance [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid a1bf329d-ed65-4cbc-99cb-e49716d1b24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.885 243456 DEBUG nova.virt.libvirt.vif [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=114,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:24:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-t2on9vk5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:24:22Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=a1bf329d-ed65-4cbc-99cb-e49716d1b24d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.886 243456 DEBUG nova.network.os_vif_util [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:25:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662-userdata-shm.mount: Deactivated successfully.
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.888 243456 DEBUG nova.network.os_vif_util [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.889 243456 DEBUG os_vif [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-efddfc5a8b916af41ee280cb5601c71a26e54fba8c82938781e38e62b83cab90-merged.mount: Deactivated successfully.
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.892 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc564e14-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.896 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.898 243456 INFO os_vif [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81')#033[00m
Feb 28 05:25:30 np0005634017 podman[343154]: 2026-02-28 10:25:30.905442724 +0000 UTC m=+0.107260577 container cleanup a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:25:30 np0005634017 systemd[1]: libpod-conmon-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662.scope: Deactivated successfully.
Feb 28 05:25:30 np0005634017 podman[343207]: 2026-02-28 10:25:30.96284267 +0000 UTC m=+0.039440949 container remove a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[efaae4d9-51fa-446d-b8c3-2ed8dae3ab54]: (4, ('Sat Feb 28 10:25:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c (a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662)\na7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662\nSat Feb 28 10:25:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c (a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662)\na7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.968 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[446c562b-235c-4fd9-8131-e0e57f8d0c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.969 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 kernel: tap9ee12748-b0: left promiscuous mode
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.977 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[932aa416-5699-4395-97fb-87f5db6a8dc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.996 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9708f518-b15f-4a90-89a0-bbce23bba79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG nova.compute.manager [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-unplugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG oslo_concurrency.lockutils [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0223e49-a3f2-4d95-b133-db5d1b76ab81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG oslo_concurrency.lockutils [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG oslo_concurrency.lockutils [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.998 243456 DEBUG nova.compute.manager [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] No waiting events found dispatching network-vif-unplugged-cc564e14-816b-4b92-877a-db0b2ddd0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:25:30 np0005634017 nova_compute[243452]: 2026-02-28 10:25:30.998 243456 DEBUG nova.compute.manager [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-unplugged-cc564e14-816b-4b92-877a-db0b2ddd0285 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:25:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:31.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4aec7bc6-28fd-4919-b768-3ee6845cebad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576807, 'reachable_time': 29858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343229, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:31 np0005634017 systemd[1]: run-netns-ovnmeta\x2d9ee12748\x2db368\x2d477a\x2daacb\x2d62375ce0b51c.mount: Deactivated successfully.
Feb 28 05:25:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:31.012 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:25:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:31.013 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[990c6b30-b659-4cd0-bb24-29eccc376091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.151 243456 INFO nova.virt.libvirt.driver [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deleting instance files /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_del#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.151 243456 INFO nova.virt.libvirt.driver [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deletion of /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_del complete#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.202 243456 INFO nova.compute.manager [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.203 243456 DEBUG oslo.service.loopingcall [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.203 243456 DEBUG nova.compute.manager [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.203 243456 DEBUG nova.network.neutron [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:25:31 np0005634017 nova_compute[243452]: 2026-02-28 10:25:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 197 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 131 op/s
Feb 28 05:25:32 np0005634017 nova_compute[243452]: 2026-02-28 10:25:32.679 243456 DEBUG nova.network.neutron [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:32 np0005634017 nova_compute[243452]: 2026-02-28 10:25:32.701 243456 INFO nova.compute.manager [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 1.50 seconds to deallocate network for instance.#033[00m
Feb 28 05:25:32 np0005634017 nova_compute[243452]: 2026-02-28 10:25:32.763 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:32 np0005634017 nova_compute[243452]: 2026-02-28 10:25:32.763 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:32 np0005634017 nova_compute[243452]: 2026-02-28 10:25:32.818 243456 DEBUG oslo_concurrency.processutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.206 243456 DEBUG nova.network.neutron [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated VIF entry in instance network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.206 243456 DEBUG nova.network.neutron [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG oslo_concurrency.lockutils [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG oslo_concurrency.lockutils [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG oslo_concurrency.lockutils [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.214 243456 DEBUG nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] No waiting events found dispatching network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.214 243456 WARNING nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received unexpected event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.214 243456 DEBUG nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-deleted-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.231 243456 DEBUG oslo_concurrency.lockutils [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:25:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:25:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561652332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.368 243456 DEBUG oslo_concurrency.processutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.383 243456 DEBUG nova.compute.provider_tree [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.398 243456 DEBUG nova.scheduler.client.report [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.426 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.457 243456 INFO nova.scheduler.client.report [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance a1bf329d-ed65-4cbc-99cb-e49716d1b24d#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.530 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.667 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.668 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.668 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:25:33 np0005634017 nova_compute[243452]: 2026-02-28 10:25:33.689 243456 DEBUG nova.compute.utils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Feb 28 05:25:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.329 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.356 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.387 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.388 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.388 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.389 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.389 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 197 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 16 KiB/s wr, 62 op/s
Feb 28 05:25:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:25:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2085250137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:25:34 np0005634017 nova_compute[243452]: 2026-02-28 10:25:34.957 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.130 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.133 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3667MB free_disk=59.96235081087798GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.134 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.134 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.201 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.219 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:25:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660763675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.763 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.770 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.792 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.821 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.821 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:35 np0005634017 nova_compute[243452]: 2026-02-28 10:25:35.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:36 np0005634017 nova_compute[243452]: 2026-02-28 10:25:36.258 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274321.2568827, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:36 np0005634017 nova_compute[243452]: 2026-02-28 10:25:36.259 243456 INFO nova.compute.manager [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:25:36 np0005634017 nova_compute[243452]: 2026-02-28 10:25:36.298 243456 DEBUG nova.compute.manager [None req-f2bf3ae3-61a7-45d8-be85-658d149bb255 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.430 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06dd8bdb-67e2-4a21-9f83-b5510704cb5b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e273f9f1-84ad-4a28-b582-99a6b6931ccf) old=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.432 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e273f9f1-84ad-4a28-b582-99a6b6931ccf in datapath f9247cb3-1f5f-4b25-8137-520bf2985945 updated#033[00m
Feb 28 05:25:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.433 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9247cb3-1f5f-4b25-8137-520bf2985945, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:25:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[efa6a36c-4f17-4e74-b333-0782cbff95ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 657 KiB/s rd, 17 KiB/s wr, 85 op/s
Feb 28 05:25:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.7 KiB/s wr, 55 op/s
Feb 28 05:25:38 np0005634017 nova_compute[243452]: 2026-02-28 10:25:38.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:38 np0005634017 nova_compute[243452]: 2026-02-28 10:25:38.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:38 np0005634017 nova_compute[243452]: 2026-02-28 10:25:38.817 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:25:39 np0005634017 nova_compute[243452]: 2026-02-28 10:25:39.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Feb 28 05:25:40 np0005634017 nova_compute[243452]: 2026-02-28 10:25:40.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.2958919954632163e-05 of space, bias 1.0, pg target 0.003887675986389649 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024928262304062723 of space, bias 1.0, pg target 0.7478478691218817 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.352365090246661e-07 of space, bias 4.0, pg target 0.0008822838108295993 quantized to 16 (current 16)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:25:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:25:41 np0005634017 nova_compute[243452]: 2026-02-28 10:25:41.160 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274326.1591444, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:41 np0005634017 nova_compute[243452]: 2026-02-28 10:25:41.161 243456 INFO nova.compute.manager [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:25:41 np0005634017 nova_compute[243452]: 2026-02-28 10:25:41.183 243456 DEBUG nova.compute.manager [None req-bdd83941-60e2-449f-84a4-8017efda6279 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 05:25:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:44 np0005634017 nova_compute[243452]: 2026-02-28 10:25:44.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:25:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.184123505 +0000 UTC m=+0.044984790 container create f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:25:45 np0005634017 systemd[1]: Started libpod-conmon-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope.
Feb 28 05:25:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.159301768 +0000 UTC m=+0.020163073 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.277092978 +0000 UTC m=+0.137954283 container init f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.282748722 +0000 UTC m=+0.143609987 container start f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.286206731 +0000 UTC m=+0.147067996 container attach f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 28 05:25:45 np0005634017 naughty_wright[343458]: 167 167
Feb 28 05:25:45 np0005634017 systemd[1]: libpod-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope: Deactivated successfully.
Feb 28 05:25:45 np0005634017 conmon[343458]: conmon f563734a623bdbc45951 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope/container/memory.events
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.290370452 +0000 UTC m=+0.151231717 container died f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:25:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c0be7e6999c90064df6bc219978b544b8af3e739688a02d07420e68fec7fe1a9-merged.mount: Deactivated successfully.
Feb 28 05:25:45 np0005634017 podman[343442]: 2026-02-28 10:25:45.326105933 +0000 UTC m=+0.186967228 container remove f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:25:45 np0005634017 systemd[1]: libpod-conmon-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope: Deactivated successfully.
Feb 28 05:25:45 np0005634017 podman[343481]: 2026-02-28 10:25:45.452685217 +0000 UTC m=+0.040914962 container create 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2887033397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2887033397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:25:45 np0005634017 systemd[1]: Started libpod-conmon-971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75.scope.
Feb 28 05:25:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:25:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:45 np0005634017 podman[343481]: 2026-02-28 10:25:45.432919266 +0000 UTC m=+0.021148991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:25:45 np0005634017 podman[343481]: 2026-02-28 10:25:45.531551373 +0000 UTC m=+0.119781118 container init 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:25:45 np0005634017 podman[343481]: 2026-02-28 10:25:45.540657906 +0000 UTC m=+0.128887661 container start 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:25:45 np0005634017 podman[343481]: 2026-02-28 10:25:45.546119323 +0000 UTC m=+0.134349068 container attach 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:25:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:25:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.828 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06dd8bdb-67e2-4a21-9f83-b5510704cb5b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e273f9f1-84ad-4a28-b582-99a6b6931ccf) old=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:25:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.834 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e273f9f1-84ad-4a28-b582-99a6b6931ccf in datapath f9247cb3-1f5f-4b25-8137-520bf2985945 updated#033[00m
Feb 28 05:25:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.836 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9247cb3-1f5f-4b25-8137-520bf2985945, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:25:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc9f65a-7b3b-4724-874f-a38324568284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:25:45 np0005634017 nova_compute[243452]: 2026-02-28 10:25:45.863 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274330.8632147, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:25:45 np0005634017 nova_compute[243452]: 2026-02-28 10:25:45.864 243456 INFO nova.compute.manager [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:25:45 np0005634017 nova_compute[243452]: 2026-02-28 10:25:45.889 243456 DEBUG nova.compute.manager [None req-aa84a9db-4c26-4221-afb4-5c3d47e9b84f - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:25:45 np0005634017 nova_compute[243452]: 2026-02-28 10:25:45.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:46 np0005634017 nostalgic_black[343498]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:25:46 np0005634017 nostalgic_black[343498]: --> All data devices are unavailable
Feb 28 05:25:46 np0005634017 systemd[1]: libpod-971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75.scope: Deactivated successfully.
Feb 28 05:25:46 np0005634017 podman[343481]: 2026-02-28 10:25:46.073573737 +0000 UTC m=+0.661803472 container died 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:25:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367-merged.mount: Deactivated successfully.
Feb 28 05:25:46 np0005634017 podman[343481]: 2026-02-28 10:25:46.119458631 +0000 UTC m=+0.707688346 container remove 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:25:46 np0005634017 systemd[1]: libpod-conmon-971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75.scope: Deactivated successfully.
Feb 28 05:25:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.588715125 +0000 UTC m=+0.047526373 container create a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:25:46 np0005634017 systemd[1]: Started libpod-conmon-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope.
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.562416926 +0000 UTC m=+0.021228233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:25:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.685226651 +0000 UTC m=+0.144037978 container init a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.690452382 +0000 UTC m=+0.149263639 container start a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:25:46 np0005634017 festive_austin[343609]: 167 167
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.694250712 +0000 UTC m=+0.153061959 container attach a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 05:25:46 np0005634017 systemd[1]: libpod-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope: Deactivated successfully.
Feb 28 05:25:46 np0005634017 conmon[343609]: conmon a3912f6f39049e9ebb35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope/container/memory.events
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.697238068 +0000 UTC m=+0.156049315 container died a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:25:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7ddf9c631c0441c4aaa47e6a3debce4d5bc21dc67fec32f02939099004e6cc39-merged.mount: Deactivated successfully.
Feb 28 05:25:46 np0005634017 podman[343593]: 2026-02-28 10:25:46.749578029 +0000 UTC m=+0.208389276 container remove a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:25:46 np0005634017 systemd[1]: libpod-conmon-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope: Deactivated successfully.
Feb 28 05:25:46 np0005634017 podman[343633]: 2026-02-28 10:25:46.926878476 +0000 UTC m=+0.060291771 container create b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:25:46 np0005634017 systemd[1]: Started libpod-conmon-b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40.scope.
Feb 28 05:25:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:25:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:47 np0005634017 podman[343633]: 2026-02-28 10:25:46.900175835 +0000 UTC m=+0.033589150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:25:47 np0005634017 podman[343633]: 2026-02-28 10:25:47.005404783 +0000 UTC m=+0.138818118 container init b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:25:47 np0005634017 podman[343633]: 2026-02-28 10:25:47.014519926 +0000 UTC m=+0.147933171 container start b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:25:47 np0005634017 podman[343633]: 2026-02-28 10:25:47.017811231 +0000 UTC m=+0.151224486 container attach b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]: {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:    "0": [
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:        {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "devices": [
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "/dev/loop3"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            ],
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_name": "ceph_lv0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_size": "21470642176",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "name": "ceph_lv0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "tags": {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cluster_name": "ceph",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.crush_device_class": "",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.encrypted": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.objectstore": "bluestore",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osd_id": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.type": "block",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.vdo": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.with_tpm": "0"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            },
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "type": "block",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "vg_name": "ceph_vg0"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:        }
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:    ],
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:    "1": [
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:        {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "devices": [
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "/dev/loop4"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            ],
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_name": "ceph_lv1",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_size": "21470642176",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "name": "ceph_lv1",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "tags": {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cluster_name": "ceph",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.crush_device_class": "",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.encrypted": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.objectstore": "bluestore",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osd_id": "1",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.type": "block",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.vdo": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.with_tpm": "0"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            },
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "type": "block",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "vg_name": "ceph_vg1"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:        }
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:    ],
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:    "2": [
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:        {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "devices": [
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "/dev/loop5"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            ],
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_name": "ceph_lv2",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_size": "21470642176",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "name": "ceph_lv2",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "tags": {
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.cluster_name": "ceph",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.crush_device_class": "",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.encrypted": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.objectstore": "bluestore",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osd_id": "2",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.type": "block",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.vdo": "0",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:                "ceph.with_tpm": "0"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            },
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "type": "block",
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:            "vg_name": "ceph_vg2"
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:        }
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]:    ]
Feb 28 05:25:47 np0005634017 intelligent_kowalevski[343649]: }
Feb 28 05:25:47 np0005634017 systemd[1]: libpod-b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40.scope: Deactivated successfully.
Feb 28 05:25:47 np0005634017 podman[343658]: 2026-02-28 10:25:47.387236724 +0000 UTC m=+0.040439698 container died b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:25:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2-merged.mount: Deactivated successfully.
Feb 28 05:25:47 np0005634017 podman[343658]: 2026-02-28 10:25:47.439649487 +0000 UTC m=+0.092852421 container remove b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:25:47 np0005634017 systemd[1]: libpod-conmon-b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40.scope: Deactivated successfully.
Feb 28 05:25:47 np0005634017 podman[343735]: 2026-02-28 10:25:47.985938375 +0000 UTC m=+0.055916005 container create a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:25:48 np0005634017 systemd[1]: Started libpod-conmon-a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da.scope.
Feb 28 05:25:48 np0005634017 podman[343735]: 2026-02-28 10:25:47.960099299 +0000 UTC m=+0.030077019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:25:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:25:48 np0005634017 podman[343735]: 2026-02-28 10:25:48.08382228 +0000 UTC m=+0.153799910 container init a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:25:48 np0005634017 podman[343735]: 2026-02-28 10:25:48.092794499 +0000 UTC m=+0.162772129 container start a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 05:25:48 np0005634017 podman[343735]: 2026-02-28 10:25:48.097754262 +0000 UTC m=+0.167731932 container attach a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:25:48 np0005634017 quirky_cerf[343752]: 167 167
Feb 28 05:25:48 np0005634017 systemd[1]: libpod-a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da.scope: Deactivated successfully.
Feb 28 05:25:48 np0005634017 podman[343735]: 2026-02-28 10:25:48.102626933 +0000 UTC m=+0.172604533 container died a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:25:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a6f4d89979cb50873ca25cb8f8ac8d47a5233fb242109715046bcc8a1e8ecf70-merged.mount: Deactivated successfully.
Feb 28 05:25:48 np0005634017 podman[343735]: 2026-02-28 10:25:48.152967386 +0000 UTC m=+0.222945016 container remove a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:25:48 np0005634017 systemd[1]: libpod-conmon-a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da.scope: Deactivated successfully.
Feb 28 05:25:48 np0005634017 podman[343775]: 2026-02-28 10:25:48.34469952 +0000 UTC m=+0.056460501 container create c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:25:48 np0005634017 systemd[1]: Started libpod-conmon-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope.
Feb 28 05:25:48 np0005634017 podman[343775]: 2026-02-28 10:25:48.320641486 +0000 UTC m=+0.032402467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:25:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:25:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:25:48 np0005634017 podman[343775]: 2026-02-28 10:25:48.463369495 +0000 UTC m=+0.175130516 container init c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:25:48 np0005634017 podman[343775]: 2026-02-28 10:25:48.473443986 +0000 UTC m=+0.185204957 container start c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 05:25:48 np0005634017 podman[343775]: 2026-02-28 10:25:48.478624646 +0000 UTC m=+0.190385657 container attach c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:25:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:25:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:49 np0005634017 nova_compute[243452]: 2026-02-28 10:25:49.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:49 np0005634017 lvm[343871]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:25:49 np0005634017 lvm[343871]: VG ceph_vg0 finished
Feb 28 05:25:49 np0005634017 lvm[343872]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:25:49 np0005634017 lvm[343872]: VG ceph_vg1 finished
Feb 28 05:25:49 np0005634017 lvm[343874]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:25:49 np0005634017 lvm[343874]: VG ceph_vg2 finished
Feb 28 05:25:49 np0005634017 clever_goldberg[343792]: {}
Feb 28 05:25:49 np0005634017 systemd[1]: libpod-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope: Deactivated successfully.
Feb 28 05:25:49 np0005634017 systemd[1]: libpod-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope: Consumed 1.217s CPU time.
Feb 28 05:25:49 np0005634017 podman[343775]: 2026-02-28 10:25:49.303021511 +0000 UTC m=+1.014782482 container died c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:25:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489-merged.mount: Deactivated successfully.
Feb 28 05:25:49 np0005634017 podman[343775]: 2026-02-28 10:25:49.348012059 +0000 UTC m=+1.059773040 container remove c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:25:49 np0005634017 systemd[1]: libpod-conmon-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope: Deactivated successfully.
Feb 28 05:25:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:25:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:25:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:25:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:25:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:25:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:25:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:25:50 np0005634017 nova_compute[243452]: 2026-02-28 10:25:50.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:52 np0005634017 podman[343915]: 2026-02-28 10:25:52.156873843 +0000 UTC m=+0.086455516 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:25:52 np0005634017 podman[343914]: 2026-02-28 10:25:52.220652364 +0000 UTC m=+0.151448942 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 05:25:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.547 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.548 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.572 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.660 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.661 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.672 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.673 243456 INFO nova.compute.claims [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:25:52 np0005634017 nova_compute[243452]: 2026-02-28 10:25:52.770 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:25:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975519888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.323 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.331 243456 DEBUG nova.compute.provider_tree [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.355 243456 DEBUG nova.scheduler.client.report [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.386 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.387 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.440 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.440 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.458 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.478 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.580 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.581 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.582 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Creating image(s)#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.606 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.636 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.665 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.670 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.749 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.751 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.751 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.752 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.776 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.781 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:53 np0005634017 nova_compute[243452]: 2026-02-28 10:25:53.850 243456 DEBUG nova.policy [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.053 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.142 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.237 243456 DEBUG nova.objects.instance [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.252 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.253 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Ensure instance console log exists: /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.253 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.254 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:54 np0005634017 nova_compute[243452]: 2026-02-28 10:25:54.254 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:25:55 np0005634017 nova_compute[243452]: 2026-02-28 10:25:55.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:56 np0005634017 nova_compute[243452]: 2026-02-28 10:25:56.150 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Successfully created port: 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:25:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 182 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 906 KiB/s wr, 25 op/s
Feb 28 05:25:56 np0005634017 nova_compute[243452]: 2026-02-28 10:25:56.934 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Successfully updated port: 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:25:56 np0005634017 nova_compute[243452]: 2026-02-28 10:25:56.952 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:56 np0005634017 nova_compute[243452]: 2026-02-28 10:25:56.953 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:56 np0005634017 nova_compute[243452]: 2026-02-28 10:25:56.953 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:25:57 np0005634017 nova_compute[243452]: 2026-02-28 10:25:57.163 243456 DEBUG nova.compute.manager [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:25:57 np0005634017 nova_compute[243452]: 2026-02-28 10:25:57.163 243456 DEBUG nova.compute.manager [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing instance network info cache due to event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:25:57 np0005634017 nova_compute[243452]: 2026-02-28 10:25:57.164 243456 DEBUG oslo_concurrency.lockutils [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:25:57 np0005634017 nova_compute[243452]: 2026-02-28 10:25:57.191 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:25:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:57.865 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:25:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.221 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.251 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.252 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance network_info: |[{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.253 243456 DEBUG oslo_concurrency.lockutils [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.253 243456 DEBUG nova.network.neutron [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.256 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start _get_guest_xml network_info=[{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.262 243456 WARNING nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.267 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.268 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.272 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.273 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.274 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.274 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.275 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.275 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.275 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.276 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.276 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.276 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.277 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.277 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.277 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.278 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.282 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:25:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:25:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:25:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021679401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.877 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.912 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:25:58 np0005634017 nova_compute[243452]: 2026-02-28 10:25:58.919 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:25:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908879600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.507 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.509 243456 DEBUG nova.virt.libvirt.vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:25:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=117,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-899856xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:25:53Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=8f174807-b15f-4588-83e1-c6e2ef2c2b4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.510 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.511 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.513 243456 DEBUG nova.objects.instance [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.554 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <uuid>8f174807-b15f-4588-83e1-c6e2ef2c2b4a</uuid>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <name>instance-00000075</name>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176</nova:name>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:25:58</nova:creationTime>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <nova:port uuid="3e3d207f-3991-41f0-a55c-44cd0479e7f8">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <entry name="serial">8f174807-b15f-4588-83e1-c6e2ef2c2b4a</entry>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <entry name="uuid">8f174807-b15f-4588-83e1-c6e2ef2c2b4a</entry>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:27:84:10"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <target dev="tap3e3d207f-39"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/console.log" append="off"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:25:59 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:25:59 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:25:59 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:25:59 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.555 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Preparing to wait for external event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.556 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.556 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.557 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.558 243456 DEBUG nova.virt.libvirt.vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:25:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=117,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-899856xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:25:53Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=8f174807-b15f-4588-83e1-c6e2ef2c2b4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.559 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.560 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.561 243456 DEBUG os_vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.563 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.563 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.569 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3d207f-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.570 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3d207f-39, col_values=(('external_ids', {'iface-id': '3e3d207f-3991-41f0-a55c-44cd0479e7f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:84:10', 'vm-uuid': '8f174807-b15f-4588-83e1-c6e2ef2c2b4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:25:59 np0005634017 NetworkManager[49805]: <info>  [1772274359.5732] manager: (tap3e3d207f-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.582 243456 INFO os_vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39')#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.652 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.652 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.653 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:27:84:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.654 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Using config drive#033[00m
Feb 28 05:25:59 np0005634017 nova_compute[243452]: 2026-02-28 10:25:59.687 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.351 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Creating config drive at /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.357 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvbh_8g7y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.491 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvbh_8g7y" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.523 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.527 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.709 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.711 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deleting local config drive /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config because it was imported into RBD.#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.716 243456 DEBUG nova.network.neutron [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated VIF entry in instance network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.717 243456 DEBUG nova.network.neutron [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.737 243456 DEBUG oslo_concurrency.lockutils [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:26:00 np0005634017 kernel: tap3e3d207f-39: entered promiscuous mode
Feb 28 05:26:00 np0005634017 NetworkManager[49805]: <info>  [1772274360.7720] manager: (tap3e3d207f-39): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Feb 28 05:26:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:00Z|01173|binding|INFO|Claiming lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 for this chassis.
Feb 28 05:26:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:00Z|01174|binding|INFO|3e3d207f-3991-41f0-a55c-44cd0479e7f8: Claiming fa:16:3e:27:84:10 10.100.0.3
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.784 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:84:10 10.100.0.3'], port_security=['fa:16:3e:27:84:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8f174807-b15f-4588-83e1-c6e2ef2c2b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1940e3c-fc4a-432c-9cc8-be1893711fb9 f3ded7bc-f509-4fba-90dd-a4f12df4a200', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3e3d207f-3991-41f0-a55c-44cd0479e7f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.785 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 bound to our chassis#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.786 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3c17451-fecb-4c3a-bc65-efba96c6e655#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.800 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[034e6def-9ef9-4ed8-acb3-040c1d5c7f63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.801 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3c17451-f1 in ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.804 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3c17451-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.804 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[75fbca29-8a27-4210-9fec-5c85d08acda2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.805 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e18101-e93c-4d40-9170-319e3e3c4dcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 systemd-machined[209480]: New machine qemu-148-instance-00000075.
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.819 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cec89c59-5d9e-4e31-980c-fa0969951972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 systemd[1]: Started Virtual Machine qemu-148-instance-00000075.
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:00Z|01175|binding|INFO|Setting lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 ovn-installed in OVS
Feb 28 05:26:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:00Z|01176|binding|INFO|Setting lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 up in Southbound
Feb 28 05:26:00 np0005634017 nova_compute[243452]: 2026-02-28 10:26:00.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:00 np0005634017 systemd-udevd[344289]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.833 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:00 np0005634017 NetworkManager[49805]: <info>  [1772274360.8437] device (tap3e3d207f-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:26:00 np0005634017 NetworkManager[49805]: <info>  [1772274360.8445] device (tap3e3d207f-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.846 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49632bbe-a826-49e8-b37b-f5d4d0d2653d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.874 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d78e6a5a-2a4a-445c-ac4b-8265c47c229b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 NetworkManager[49805]: <info>  [1772274360.8809] manager: (tapa3c17451-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/484)
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0ed30f-0fb5-47fb-ba48-37d68bad7e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.911 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[467c8dd3-ad2d-4a64-83c7-1019c3efcc76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.914 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[23fcf48a-d59e-408c-98a0-e3affa47486e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 NetworkManager[49805]: <info>  [1772274360.9365] device (tapa3c17451-f0): carrier: link connected
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.939 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a5137e7a-90eb-404c-a6a9-f9c543dca2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.954 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4f1f9d-e5de-457c-a1a7-5e47767d9036]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 21966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344319, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.968 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa58034-faeb-43e9-9fbd-c82a266fac07]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:c66e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586822, 'tstamp': 586822}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344320, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.984 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00456d8f-ef9b-4954-8017-bc6d09fd7251]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 21966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344321, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4aff1e-ffa4-44b0-a9e5-b646ed0bbab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.075 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[107b395c-3589-405d-a8c2-e8317ea9e9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.077 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.078 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.078 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c17451-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:01 np0005634017 kernel: tapa3c17451-f0: entered promiscuous mode
Feb 28 05:26:01 np0005634017 NetworkManager[49805]: <info>  [1772274361.0820] manager: (tapa3c17451-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.083 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3c17451-f0, col_values=(('external_ids', {'iface-id': '593df982-6822-42f7-9086-436943b49f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:01Z|01177|binding|INFO|Releasing lport 593df982-6822-42f7-9086-436943b49f8d from this chassis (sb_readonly=0)
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.085 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3c17451-fecb-4c3a-bc65-efba96c6e655.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3c17451-fecb-4c3a-bc65-efba96c6e655.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[992ae3d8-dc70-463e-a9ab-5a4d58e5bd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.087 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/a3c17451-fecb-4c3a-bc65-efba96c6e655.pid.haproxy
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.088 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'env', 'PROCESS_TAG=haproxy-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3c17451-fecb-4c3a-bc65-efba96c6e655.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG nova.compute.manager [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG oslo_concurrency.lockutils [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG oslo_concurrency.lockutils [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG oslo_concurrency.lockutils [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.123 243456 DEBUG nova.compute.manager [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Processing event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.278 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274361.2782345, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.279 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Started (Lifecycle Event)#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.281 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.285 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.289 243456 INFO nova.virt.libvirt.driver [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance spawned successfully.#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.289 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.297 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.307 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.316 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.317 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.318 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.319 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.320 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.320 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.331 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.331 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274361.2784963, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.352 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.358 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274361.284623, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.358 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.386 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.392 243456 INFO nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 7.81 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.393 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.405 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.452 243456 INFO nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 8.83 seconds to build instance.#033[00m
Feb 28 05:26:01 np0005634017 podman[344393]: 2026-02-28 10:26:01.469039116 +0000 UTC m=+0.051204539 container create d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 05:26:01 np0005634017 nova_compute[243452]: 2026-02-28 10:26:01.471 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:01 np0005634017 systemd[1]: Started libpod-conmon-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d.scope.
Feb 28 05:26:01 np0005634017 podman[344393]: 2026-02-28 10:26:01.442985324 +0000 UTC m=+0.025150807 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:26:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc280004ec33d4980617412d8deb4460aaa18af7068156edd80551a2bd739484/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:01 np0005634017 podman[344393]: 2026-02-28 10:26:01.557951752 +0000 UTC m=+0.140117205 container init d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 05:26:01 np0005634017 podman[344393]: 2026-02-28 10:26:01.56273051 +0000 UTC m=+0.144895943 container start d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 05:26:01 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : New worker (344414) forked
Feb 28 05:26:01 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : Loading success.
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.625 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a unbound from our chassis#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.628 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.629 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72c0d16c-c2c5-4b4a-953a-39d6c53b02aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 05:26:03 np0005634017 nova_compute[243452]: 2026-02-28 10:26:03.520 243456 DEBUG nova.compute.manager [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:03 np0005634017 nova_compute[243452]: 2026-02-28 10:26:03.521 243456 DEBUG oslo_concurrency.lockutils [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:03 np0005634017 nova_compute[243452]: 2026-02-28 10:26:03.522 243456 DEBUG oslo_concurrency.lockutils [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:03 np0005634017 nova_compute[243452]: 2026-02-28 10:26:03.523 243456 DEBUG oslo_concurrency.lockutils [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:03 np0005634017 nova_compute[243452]: 2026-02-28 10:26:03.523 243456 DEBUG nova.compute.manager [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] No waiting events found dispatching network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:26:03 np0005634017 nova_compute[243452]: 2026-02-28 10:26:03.524 243456 WARNING nova.compute.manager [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received unexpected event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:26:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:04 np0005634017 nova_compute[243452]: 2026-02-28 10:26:04.119 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 05:26:04 np0005634017 nova_compute[243452]: 2026-02-28 10:26:04.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.399 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.401 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.402 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.403 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d98a89d1-7544-4890-9ea7-4c2cb2c844a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:05Z|01178|binding|INFO|Releasing lport 593df982-6822-42f7-9086-436943b49f8d from this chassis (sb_readonly=0)
Feb 28 05:26:05 np0005634017 nova_compute[243452]: 2026-02-28 10:26:05.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:05 np0005634017 NetworkManager[49805]: <info>  [1772274365.4311] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Feb 28 05:26:05 np0005634017 NetworkManager[49805]: <info>  [1772274365.4337] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Feb 28 05:26:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:05Z|01179|binding|INFO|Releasing lport 593df982-6822-42f7-9086-436943b49f8d from this chassis (sb_readonly=0)
Feb 28 05:26:05 np0005634017 nova_compute[243452]: 2026-02-28 10:26:05.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:05 np0005634017 nova_compute[243452]: 2026-02-28 10:26:05.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:06 np0005634017 nova_compute[243452]: 2026-02-28 10:26:06.370 243456 DEBUG nova.compute.manager [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:06 np0005634017 nova_compute[243452]: 2026-02-28 10:26:06.371 243456 DEBUG nova.compute.manager [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing instance network info cache due to event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:26:06 np0005634017 nova_compute[243452]: 2026-02-28 10:26:06.372 243456 DEBUG oslo_concurrency.lockutils [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:26:06 np0005634017 nova_compute[243452]: 2026-02-28 10:26:06.372 243456 DEBUG oslo_concurrency.lockutils [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:26:06 np0005634017 nova_compute[243452]: 2026-02-28 10:26:06.373 243456 DEBUG nova.network.neutron [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:26:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:26:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.001 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.003 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.005 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.006 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e868867-dbca-4a2a-b5d8-f5687b325699]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:08 np0005634017 nova_compute[243452]: 2026-02-28 10:26:08.415 243456 DEBUG nova.network.neutron [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated VIF entry in instance network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:26:08 np0005634017 nova_compute[243452]: 2026-02-28 10:26:08.416 243456 DEBUG nova.network.neutron [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:26:08 np0005634017 nova_compute[243452]: 2026-02-28 10:26:08.445 243456 DEBUG oslo_concurrency.lockutils [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:26:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 921 KiB/s wr, 74 op/s
Feb 28 05:26:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:09 np0005634017 nova_compute[243452]: 2026-02-28 10:26:09.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:09 np0005634017 nova_compute[243452]: 2026-02-28 10:26:09.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:26:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.770 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.771 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.772 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.773 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20748441-a48e-4e2d-9105-3667b060e160]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:11 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 05:26:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.362 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.365 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.366 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:12 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.370 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[082de97c-4d9d-42bf-baa1-ddf1c8f557b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 215 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Feb 28 05:26:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:12Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:84:10 10.100.0.3
Feb 28 05:26:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:12Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:84:10 10.100.0.3
Feb 28 05:26:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:14 np0005634017 nova_compute[243452]: 2026-02-28 10:26:14.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 215 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 60 op/s
Feb 28 05:26:14 np0005634017 nova_compute[243452]: 2026-02-28 10:26:14.577 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Feb 28 05:26:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:26:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.956 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.957 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.958 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.959 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c8eecb95-6f7e-49f9-94ca-834137721e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:19 np0005634017 nova_compute[243452]: 2026-02-28 10:26:19.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:19 np0005634017 nova_compute[243452]: 2026-02-28 10:26:19.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.234 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.236 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.237 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.239 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf054547-9e5c-4972-b6d8-56f15585af5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:26:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.989 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:20 np0005634017 nova_compute[243452]: 2026-02-28 10:26:20.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.990 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:26:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:26:23 np0005634017 podman[344429]: 2026-02-28 10:26:23.100805312 +0000 UTC m=+0.044514536 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 28 05:26:23 np0005634017 podman[344428]: 2026-02-28 10:26:23.12497222 +0000 UTC m=+0.069473167 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 28 05:26:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:23.992 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:24 np0005634017 nova_compute[243452]: 2026-02-28 10:26:24.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb 28 05:26:24 np0005634017 nova_compute[243452]: 2026-02-28 10:26:24.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:25 np0005634017 nova_compute[243452]: 2026-02-28 10:26:25.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:26:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb 28 05:26:28 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.436 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:28 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.439 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:28 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.441 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:28 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42790b6f-69cd-4b32-85a8-63bbec21a805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.452 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.453 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.469 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:26:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.719 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.720 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.731 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:26:28 np0005634017 nova_compute[243452]: 2026-02-28 10:26:28.732 243456 INFO nova.compute.claims [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:26:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.073 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:26:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:26:29
Feb 28 05:26:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:26:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:26:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms']
Feb 28 05:26:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:26:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:26:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741980337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.589 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.596 243456 DEBUG nova.compute.provider_tree [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.620 243456 DEBUG nova.scheduler.client.report [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.646 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.647 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.697 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.698 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.757 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.799 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.907 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.909 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.910 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Creating image(s)
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.944 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:26:29 np0005634017 nova_compute[243452]: 2026-02-28 10:26:29.984 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.017 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.021 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.114 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.116 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.117 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.117 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.150 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.155 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9819900a-9819-4896-a490-54f445126d24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.214 243456 DEBUG nova.policy [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.366 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9819900a-9819-4896-a490-54f445126d24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.441 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 9819900a-9819-4896-a490-54f445126d24_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.529 243456 DEBUG nova.objects.instance [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 9819900a-9819-4896-a490-54f445126d24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 240 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 448 KiB/s wr, 8 op/s
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.549 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.550 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Ensure instance console log exists: /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.550 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.551 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:26:30 np0005634017 nova_compute[243452]: 2026-02-28 10:26:30.551 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:26:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:26:31 np0005634017 nova_compute[243452]: 2026-02-28 10:26:31.179 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Successfully created port: f56f6906-42d3-443a-bf7f-e2375d5bf698 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:26:31 np0005634017 nova_compute[243452]: 2026-02-28 10:26:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.267 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Successfully updated port: f56f6906-42d3-443a-bf7f-e2375d5bf698 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.289 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.290 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.290 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.446 243456 DEBUG nova.compute.manager [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.446 243456 DEBUG nova.compute.manager [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing instance network info cache due to event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.447 243456 DEBUG oslo_concurrency.lockutils [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:26:32 np0005634017 nova_compute[243452]: 2026-02-28 10:26:32.506 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:26:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 268 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Feb 28 05:26:33 np0005634017 nova_compute[243452]: 2026-02-28 10:26:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:26:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 05:26:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 268 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.573 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.574 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.574 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.574 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.957 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.978 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.979 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance network_info: |[{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.980 243456 DEBUG oslo_concurrency.lockutils [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.980 243456 DEBUG nova.network.neutron [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.985 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start _get_guest_xml network_info=[{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.992 243456 WARNING nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.997 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:26:34 np0005634017 nova_compute[243452]: 2026-02-28 10:26:34.998 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.008 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.009 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.010 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.010 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.011 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.012 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.012 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.012 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.013 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.013 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.014 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.014 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.015 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.015 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.020 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:26:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633375911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.682 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.706 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:26:35 np0005634017 nova_compute[243452]: 2026-02-28 10:26:35.711 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:26:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851073826' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.261 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.264 243456 DEBUG nova.virt.libvirt.vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=118,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7zjy0n29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:26:29Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9819900a-9819-4896-a490-54f445126d24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.265 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.267 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.270 243456 DEBUG nova.objects.instance [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9819900a-9819-4896-a490-54f445126d24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.290 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <uuid>9819900a-9819-4896-a490-54f445126d24</uuid>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <name>instance-00000076</name>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830</nova:name>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:26:34</nova:creationTime>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <nova:port uuid="f56f6906-42d3-443a-bf7f-e2375d5bf698">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <entry name="serial">9819900a-9819-4896-a490-54f445126d24</entry>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <entry name="uuid">9819900a-9819-4896-a490-54f445126d24</entry>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9819900a-9819-4896-a490-54f445126d24_disk">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9819900a-9819-4896-a490-54f445126d24_disk.config">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:c7:f8:51"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <target dev="tapf56f6906-42"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/console.log" append="off"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:26:36 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:26:36 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:26:36 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:26:36 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.292 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Preparing to wait for external event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.293 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.293 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.294 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.294 243456 DEBUG nova.virt.libvirt.vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=118,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7zjy0n29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:26:29Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9819900a-9819-4896-a490-54f445126d24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.295 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.295 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.296 243456 DEBUG os_vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.297 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.298 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.303 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf56f6906-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.304 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf56f6906-42, col_values=(('external_ids', {'iface-id': 'f56f6906-42d3-443a-bf7f-e2375d5bf698', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:f8:51', 'vm-uuid': '9819900a-9819-4896-a490-54f445126d24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:36 np0005634017 NetworkManager[49805]: <info>  [1772274396.3076] manager: (tapf56f6906-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.316 243456 INFO os_vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42')#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.379 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.379 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.379 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:c7:f8:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.380 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Using config drive#033[00m
Feb 28 05:26:36 np0005634017 nova_compute[243452]: 2026-02-28 10:26:36.405 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:26:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.210 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Creating config drive at /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.216 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxmq0l6j7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.374 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxmq0l6j7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.416 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.422 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config 9819900a-9819-4896-a490-54f445126d24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.602 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config 9819900a-9819-4896-a490-54f445126d24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.603 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Deleting local config drive /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config because it was imported into RBD.#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.609 243456 DEBUG nova.network.neutron [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updated VIF entry in instance network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.610 243456 DEBUG nova.network.neutron [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.616 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.637 243456 DEBUG oslo_concurrency.lockutils [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.639 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.639 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.640 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.641 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.641 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:26:37 np0005634017 NetworkManager[49805]: <info>  [1772274397.6570] manager: (tapf56f6906-42): new Tun device (/org/freedesktop/NetworkManager/Devices/489)
Feb 28 05:26:37 np0005634017 kernel: tapf56f6906-42: entered promiscuous mode
Feb 28 05:26:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:37Z|01180|binding|INFO|Claiming lport f56f6906-42d3-443a-bf7f-e2375d5bf698 for this chassis.
Feb 28 05:26:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:37Z|01181|binding|INFO|f56f6906-42d3-443a-bf7f-e2375d5bf698: Claiming fa:16:3e:c7:f8:51 10.100.0.10
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:37Z|01182|binding|INFO|Setting lport f56f6906-42d3-443a-bf7f-e2375d5bf698 ovn-installed in OVS
Feb 28 05:26:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:37Z|01183|binding|INFO|Setting lport f56f6906-42d3-443a-bf7f-e2375d5bf698 up in Southbound
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.668 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:f8:51 10.100.0.10'], port_security=['fa:16:3e:c7:f8:51 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9819900a-9819-4896-a490-54f445126d24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1940e3c-fc4a-432c-9cc8-be1893711fb9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f56f6906-42d3-443a-bf7f-e2375d5bf698) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.669 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f56f6906-42d3-443a-bf7f-e2375d5bf698 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 bound to our chassis#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.671 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3c17451-fecb-4c3a-bc65-efba96c6e655#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.666 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.667 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.668 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.668 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.669 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:37 np0005634017 systemd-machined[209480]: New machine qemu-149-instance-00000076.
Feb 28 05:26:37 np0005634017 systemd-udevd[344799]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.689 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f424eab4-a0d1-4e98-a7a1-e777bf95adee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:37 np0005634017 NetworkManager[49805]: <info>  [1772274397.7034] device (tapf56f6906-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:26:37 np0005634017 NetworkManager[49805]: <info>  [1772274397.7038] device (tapf56f6906-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:26:37 np0005634017 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.725 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0588ab52-f7c6-4313-94e8-0cdc8e303444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.730 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1eb44b-2606-40e8-90ac-0aea114987a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.757 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[184a25a6-ec4e-47ba-bea8-9141e5d083ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.774 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e15e01d1-a921-4a1d-bc7b-a547a3d0e90e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 22720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344812, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f44e0070-862b-4731-b7aa-326490295cda]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586832, 'tstamp': 586832}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344813, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586835, 'tstamp': 586835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344813, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.791 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.792 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.793 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c17451-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.793 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.794 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3c17451-f0, col_values=(('external_ids', {'iface-id': '593df982-6822-42f7-9086-436943b49f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.794 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.920 243456 DEBUG nova.compute.manager [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.926 243456 DEBUG oslo_concurrency.lockutils [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.927 243456 DEBUG oslo_concurrency.lockutils [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.927 243456 DEBUG oslo_concurrency.lockutils [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:37 np0005634017 nova_compute[243452]: 2026-02-28 10:26:37.928 243456 DEBUG nova.compute.manager [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Processing event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:26:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:26:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706289120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.243 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.334 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.335 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.338 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.338 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.458 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274398.4583406, 9819900a-9819-4896-a490-54f445126d24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.459 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Started (Lifecycle Event)#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.462 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.466 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.470 243456 INFO nova.virt.libvirt.driver [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance spawned successfully.#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.471 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.534 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.536 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.92119009792805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.536 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.537 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.540 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.546 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.550 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.551 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.551 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.552 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.553 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.553 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.589 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.590 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274398.4584951, 9819900a-9819-4896-a490-54f445126d24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.590 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.643 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.647 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274398.4645734, 9819900a-9819-4896-a490-54f445126d24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.666 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.676 243456 INFO nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 8.77 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.677 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.683 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.690 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 8f174807-b15f-4588-83e1-c6e2ef2c2b4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.691 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9819900a-9819-4896-a490-54f445126d24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.691 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.691 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.718 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:26:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.752 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.793 243456 INFO nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 10.27 seconds to build instance.#033[00m
Feb 28 05:26:38 np0005634017 nova_compute[243452]: 2026-02-28 10:26:38.816 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:26:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368355386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.308 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.316 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.414 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.443 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.444 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:39 np0005634017 nova_compute[243452]: 2026-02-28 10:26:39.445 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:26:40 np0005634017 nova_compute[243452]: 2026-02-28 10:26:40.077 243456 DEBUG nova.compute.manager [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:40 np0005634017 nova_compute[243452]: 2026-02-28 10:26:40.078 243456 DEBUG oslo_concurrency.lockutils [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:40 np0005634017 nova_compute[243452]: 2026-02-28 10:26:40.078 243456 DEBUG oslo_concurrency.lockutils [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:40 np0005634017 nova_compute[243452]: 2026-02-28 10:26:40.078 243456 DEBUG oslo_concurrency.lockutils [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:40 np0005634017 nova_compute[243452]: 2026-02-28 10:26:40.079 243456 DEBUG nova.compute.manager [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] No waiting events found dispatching network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:26:40 np0005634017 nova_compute[243452]: 2026-02-28 10:26:40.079 243456 WARNING nova.compute.manager [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received unexpected event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:26:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 882 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011207571623962963 of space, bias 1.0, pg target 0.33622714871888887 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024928662385249375 of space, bias 1.0, pg target 0.7478598715574812 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.357177863543625e-07 of space, bias 4.0, pg target 0.000882861343625235 quantized to 16 (current 16)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:26:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:26:41 np0005634017 nova_compute[243452]: 2026-02-28 10:26:41.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Feb 28 05:26:42 np0005634017 nova_compute[243452]: 2026-02-28 10:26:42.904 243456 DEBUG nova.compute.manager [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:42 np0005634017 nova_compute[243452]: 2026-02-28 10:26:42.904 243456 DEBUG nova.compute.manager [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing instance network info cache due to event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:26:42 np0005634017 nova_compute[243452]: 2026-02-28 10:26:42.905 243456 DEBUG oslo_concurrency.lockutils [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:26:42 np0005634017 nova_compute[243452]: 2026-02-28 10:26:42.905 243456 DEBUG oslo_concurrency.lockutils [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:26:42 np0005634017 nova_compute[243452]: 2026-02-28 10:26:42.905 243456 DEBUG nova.network.neutron [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:26:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:44 np0005634017 nova_compute[243452]: 2026-02-28 10:26:44.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 405 KiB/s wr, 80 op/s
Feb 28 05:26:44 np0005634017 nova_compute[243452]: 2026-02-28 10:26:44.932 243456 DEBUG nova.network.neutron [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updated VIF entry in instance network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:26:44 np0005634017 nova_compute[243452]: 2026-02-28 10:26:44.933 243456 DEBUG nova.network.neutron [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:26:44 np0005634017 nova_compute[243452]: 2026-02-28 10:26:44.955 243456 DEBUG oslo_concurrency.lockutils [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:26:45 np0005634017 nova_compute[243452]: 2026-02-28 10:26:45.327 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:26:45 np0005634017 nova_compute[243452]: 2026-02-28 10:26:45.328 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:26:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.592 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '25', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '24', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.594 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.595 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8f8968b3-f386-4803-8793-bde890c8208a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:26:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.596 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca5a08b-b913-4de6-bb3c-22b4f4711d7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:26:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1182219108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:26:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:26:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1182219108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:26:46 np0005634017 nova_compute[243452]: 2026-02-28 10:26:46.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:46 np0005634017 nova_compute[243452]: 2026-02-28 10:26:46.506 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:26:46 np0005634017 nova_compute[243452]: 2026-02-28 10:26:46.507 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:26:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 407 KiB/s wr, 81 op/s
Feb 28 05:26:46 np0005634017 nova_compute[243452]: 2026-02-28 10:26:46.544 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:26:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 28 05:26:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:49 np0005634017 nova_compute[243452]: 2026-02-28 10:26:49.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:49Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:f8:51 10.100.0.10
Feb 28 05:26:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:49Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:f8:51 10.100.0.10
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:26:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 300 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 98 op/s
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.691338997 +0000 UTC m=+0.048566712 container create 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:26:50 np0005634017 systemd[1]: Started libpod-conmon-107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d.scope.
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.667606627 +0000 UTC m=+0.024834342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:26:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.801921051 +0000 UTC m=+0.159148746 container init 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.809577117 +0000 UTC m=+0.166804812 container start 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:26:50 np0005634017 kind_franklin[345062]: 167 167
Feb 28 05:26:50 np0005634017 systemd[1]: libpod-107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d.scope: Deactivated successfully.
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.826015141 +0000 UTC m=+0.183242926 container attach 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.826612458 +0000 UTC m=+0.183840193 container died 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:26:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:26:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5432456ae7524f33bc46741e8b5f0cde847553a89aab70fb2cefb98a22590adc-merged.mount: Deactivated successfully.
Feb 28 05:26:50 np0005634017 podman[345045]: 2026-02-28 10:26:50.890152693 +0000 UTC m=+0.247380428 container remove 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:26:50 np0005634017 systemd[1]: libpod-conmon-107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d.scope: Deactivated successfully.
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.07167766 +0000 UTC m=+0.065517191 container create 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:26:51 np0005634017 systemd[1]: Started libpod-conmon-1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238.scope.
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.025683991 +0000 UTC m=+0.019523502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:26:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.195269241 +0000 UTC m=+0.189108742 container init 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.211303024 +0000 UTC m=+0.205142515 container start 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.221579424 +0000 UTC m=+0.215418915 container attach 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:26:51 np0005634017 nova_compute[243452]: 2026-02-28 10:26:51.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:51 np0005634017 boring_cartwright[345106]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:26:51 np0005634017 boring_cartwright[345106]: --> All data devices are unavailable
Feb 28 05:26:51 np0005634017 systemd[1]: libpod-1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238.scope: Deactivated successfully.
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.650053996 +0000 UTC m=+0.643893487 container died 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:26:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7-merged.mount: Deactivated successfully.
Feb 28 05:26:51 np0005634017 podman[345089]: 2026-02-28 10:26:51.706381247 +0000 UTC m=+0.700220738 container remove 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:26:51 np0005634017 systemd[1]: libpod-conmon-1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238.scope: Deactivated successfully.
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.206326198 +0000 UTC m=+0.059846831 container create b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 05:26:52 np0005634017 systemd[1]: Started libpod-conmon-b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164.scope.
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.170831965 +0000 UTC m=+0.024352648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:26:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.292313577 +0000 UTC m=+0.145834210 container init b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.298326687 +0000 UTC m=+0.151847310 container start b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:26:52 np0005634017 upbeat_mclaren[345218]: 167 167
Feb 28 05:26:52 np0005634017 systemd[1]: libpod-b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164.scope: Deactivated successfully.
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.3041236 +0000 UTC m=+0.157644243 container attach b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.304912963 +0000 UTC m=+0.158433616 container died b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:26:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-de3a9eaa23e0bd70bbac25e1b8673ddcf966525ee8f3f330b12b3122ad477566-merged.mount: Deactivated successfully.
Feb 28 05:26:52 np0005634017 podman[345201]: 2026-02-28 10:26:52.385341254 +0000 UTC m=+0.238861847 container remove b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:26:52 np0005634017 systemd[1]: libpod-conmon-b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164.scope: Deactivated successfully.
Feb 28 05:26:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 306 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.552338511 +0000 UTC m=+0.045903207 container create 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:26:52 np0005634017 systemd[1]: Started libpod-conmon-0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2.scope.
Feb 28 05:26:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.52823639 +0000 UTC m=+0.021800846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:26:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.643030663 +0000 UTC m=+0.136595109 container init 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.651607635 +0000 UTC m=+0.145172081 container start 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.655479704 +0000 UTC m=+0.149044190 container attach 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]: {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:    "0": [
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:        {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "devices": [
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "/dev/loop3"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            ],
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_name": "ceph_lv0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_size": "21470642176",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "name": "ceph_lv0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "tags": {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cluster_name": "ceph",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.crush_device_class": "",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.encrypted": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.objectstore": "bluestore",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osd_id": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.type": "block",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.vdo": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.with_tpm": "0"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            },
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "type": "block",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "vg_name": "ceph_vg0"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:        }
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:    ],
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:    "1": [
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:        {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "devices": [
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "/dev/loop4"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            ],
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_name": "ceph_lv1",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_size": "21470642176",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "name": "ceph_lv1",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "tags": {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cluster_name": "ceph",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.crush_device_class": "",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.encrypted": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.objectstore": "bluestore",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osd_id": "1",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.type": "block",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.vdo": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.with_tpm": "0"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            },
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "type": "block",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "vg_name": "ceph_vg1"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:        }
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:    ],
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:    "2": [
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:        {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "devices": [
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "/dev/loop5"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            ],
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_name": "ceph_lv2",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_size": "21470642176",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "name": "ceph_lv2",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "tags": {
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.cluster_name": "ceph",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.crush_device_class": "",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.encrypted": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.objectstore": "bluestore",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osd_id": "2",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.type": "block",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.vdo": "0",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:                "ceph.with_tpm": "0"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            },
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "type": "block",
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:            "vg_name": "ceph_vg2"
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:        }
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]:    ]
Feb 28 05:26:52 np0005634017 hardcore_grothendieck[345259]: }
Feb 28 05:26:52 np0005634017 systemd[1]: libpod-0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2.scope: Deactivated successfully.
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.943680794 +0000 UTC m=+0.437245220 container died 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:26:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281-merged.mount: Deactivated successfully.
Feb 28 05:26:52 np0005634017 podman[345242]: 2026-02-28 10:26:52.983770897 +0000 UTC m=+0.477335323 container remove 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:26:53 np0005634017 systemd[1]: libpod-conmon-0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2.scope: Deactivated successfully.
Feb 28 05:26:53 np0005634017 podman[345303]: 2026-02-28 10:26:53.222804158 +0000 UTC m=+0.068670360 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:26:53 np0005634017 podman[345304]: 2026-02-28 10:26:53.262133229 +0000 UTC m=+0.098393200 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.476051831 +0000 UTC m=+0.043289894 container create 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:26:53 np0005634017 systemd[1]: Started libpod-conmon-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope.
Feb 28 05:26:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.460044359 +0000 UTC m=+0.027282692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.564636253 +0000 UTC m=+0.131874396 container init 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.574337347 +0000 UTC m=+0.141575400 container start 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.577807225 +0000 UTC m=+0.145045278 container attach 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:26:53 np0005634017 determined_bardeen[345402]: 167 167
Feb 28 05:26:53 np0005634017 systemd[1]: libpod-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope: Deactivated successfully.
Feb 28 05:26:53 np0005634017 conmon[345402]: conmon 14aac7d29c81a119e21f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope/container/memory.events
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.582951791 +0000 UTC m=+0.150189844 container died 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:26:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fec0c17ce5c385e31c99c70ac82e0d664d4cac40eb68ba66df552b5680ed1d7d-merged.mount: Deactivated successfully.
Feb 28 05:26:53 np0005634017 podman[345385]: 2026-02-28 10:26:53.616284072 +0000 UTC m=+0.183522125 container remove 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:26:53 np0005634017 systemd[1]: libpod-conmon-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope: Deactivated successfully.
Feb 28 05:26:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:53 np0005634017 podman[345426]: 2026-02-28 10:26:53.793008113 +0000 UTC m=+0.054135350 container create 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:26:53 np0005634017 systemd[1]: Started libpod-conmon-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope.
Feb 28 05:26:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:26:53 np0005634017 podman[345426]: 2026-02-28 10:26:53.774961354 +0000 UTC m=+0.036088511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:26:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:26:53 np0005634017 podman[345426]: 2026-02-28 10:26:53.892190465 +0000 UTC m=+0.153317632 container init 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:26:53 np0005634017 podman[345426]: 2026-02-28 10:26:53.904464772 +0000 UTC m=+0.165591939 container start 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:26:53 np0005634017 podman[345426]: 2026-02-28 10:26:53.908358292 +0000 UTC m=+0.169485459 container attach 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:26:54 np0005634017 nova_compute[243452]: 2026-02-28 10:26:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:54 np0005634017 lvm[345520]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:26:54 np0005634017 lvm[345521]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:26:54 np0005634017 lvm[345521]: VG ceph_vg1 finished
Feb 28 05:26:54 np0005634017 lvm[345520]: VG ceph_vg0 finished
Feb 28 05:26:54 np0005634017 lvm[345523]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:26:54 np0005634017 lvm[345523]: VG ceph_vg2 finished
Feb 28 05:26:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 306 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Feb 28 05:26:54 np0005634017 sweet_joliot[345442]: {}
Feb 28 05:26:54 np0005634017 systemd[1]: libpod-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope: Deactivated successfully.
Feb 28 05:26:54 np0005634017 podman[345426]: 2026-02-28 10:26:54.655964487 +0000 UTC m=+0.917091634 container died 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:26:54 np0005634017 systemd[1]: libpod-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope: Consumed 1.106s CPU time.
Feb 28 05:26:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7-merged.mount: Deactivated successfully.
Feb 28 05:26:54 np0005634017 podman[345426]: 2026-02-28 10:26:54.695914465 +0000 UTC m=+0.957041642 container remove 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:26:54 np0005634017 systemd[1]: libpod-conmon-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope: Deactivated successfully.
Feb 28 05:26:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:26:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:26:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:26:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:26:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:26:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.905 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.907 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.908 243456 INFO nova.compute.manager [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Terminating instance#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.909 243456 DEBUG nova.compute.manager [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:26:55 np0005634017 kernel: tapf56f6906-42 (unregistering): left promiscuous mode
Feb 28 05:26:55 np0005634017 NetworkManager[49805]: <info>  [1772274415.9618] device (tapf56f6906-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:26:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:55Z|01184|binding|INFO|Releasing lport f56f6906-42d3-443a-bf7f-e2375d5bf698 from this chassis (sb_readonly=0)
Feb 28 05:26:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:55Z|01185|binding|INFO|Setting lport f56f6906-42d3-443a-bf7f-e2375d5bf698 down in Southbound
Feb 28 05:26:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:26:55Z|01186|binding|INFO|Removing iface tapf56f6906-42 ovn-installed in OVS
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.969 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.974 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:f8:51 10.100.0.10'], port_security=['fa:16:3e:c7:f8:51 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9819900a-9819-4896-a490-54f445126d24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e747ccdf-f0bc-44cb-9277-bfb9c420d082', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f56f6906-42d3-443a-bf7f-e2375d5bf698) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:55 np0005634017 nova_compute[243452]: 2026-02-28 10:26:55.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.975 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f56f6906-42d3-443a-bf7f-e2375d5bf698 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 unbound from our chassis#033[00m
Feb 28 05:26:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.976 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3c17451-fecb-4c3a-bc65-efba96c6e655#033[00m
Feb 28 05:26:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.992 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acc9a3af-1734-4466-aaad-260029dd879a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.020 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1bef5c-bf06-4a7b-8319-67edc1873caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:56 np0005634017 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Feb 28 05:26:56 np0005634017 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 12.537s CPU time.
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.023 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ceee53-c7ac-4466-aa0b-f5839bc84810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:56 np0005634017 systemd-machined[209480]: Machine qemu-149-instance-00000076 terminated.
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.049 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb62eac-9f10-409f-8bef-991b59faf639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.064 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4081c457-bf84-42d3-8cb9-8f0307db7ce2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 22720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345572, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.080 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2178e797-abf6-4ae3-b713-029016e0ad31]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586832, 'tstamp': 586832}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345573, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586835, 'tstamp': 586835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345573, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.081 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c17451-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.088 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3c17451-f0, col_values=(('external_ids', {'iface-id': '593df982-6822-42f7-9086-436943b49f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.088 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.155 243456 INFO nova.virt.libvirt.driver [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance destroyed successfully.#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.155 243456 DEBUG nova.objects.instance [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 9819900a-9819-4896-a490-54f445126d24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.174 243456 DEBUG nova.virt.libvirt.vif [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=118,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:26:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7zjy0n29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:26:38Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9819900a-9819-4896-a490-54f445126d24,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.174 243456 DEBUG nova.network.os_vif_util [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.175 243456 DEBUG nova.network.os_vif_util [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.175 243456 DEBUG os_vif [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.177 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf56f6906-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.180 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.181 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.183 243456 INFO os_vif [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42')#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.307 243456 DEBUG nova.compute.manager [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-unplugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.307 243456 DEBUG oslo_concurrency.lockutils [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG oslo_concurrency.lockutils [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG oslo_concurrency.lockutils [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG nova.compute.manager [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] No waiting events found dispatching network-vif-unplugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG nova.compute.manager [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-unplugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.483 243456 INFO nova.virt.libvirt.driver [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Deleting instance files /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24_del#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.484 243456 INFO nova.virt.libvirt.driver [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Deletion of /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24_del complete#033[00m
Feb 28 05:26:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 312 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.571 243456 INFO nova.compute.manager [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.572 243456 DEBUG oslo.service.loopingcall [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.572 243456 DEBUG nova.compute.manager [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:26:56 np0005634017 nova_compute[243452]: 2026-02-28 10:26:56.572 243456 DEBUG nova.network.neutron [-] [instance: 9819900a-9819-4896-a490-54f445126d24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:26:57 np0005634017 nova_compute[243452]: 2026-02-28 10:26:57.338 243456 DEBUG nova.network.neutron [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:26:57 np0005634017 nova_compute[243452]: 2026-02-28 10:26:57.355 243456 INFO nova.compute.manager [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 0.78 seconds to deallocate network for instance.#033[00m
Feb 28 05:26:57 np0005634017 nova_compute[243452]: 2026-02-28 10:26:57.396 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:57 np0005634017 nova_compute[243452]: 2026-02-28 10:26:57.396 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:57 np0005634017 nova_compute[243452]: 2026-02-28 10:26:57.469 243456 DEBUG oslo_concurrency.processutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:26:57 np0005634017 nova_compute[243452]: 2026-02-28 10:26:57.514 243456 DEBUG nova.compute.manager [req-c3b1765b-05a4-476e-af74-1c5c3a7c9456 req-f1390692-a1da-44f6-afed-e778b025f625 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-deleted-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:57.865 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:57.867 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:26:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2423769872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:26:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.048 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8:0:1:f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:26:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated#033[00m
Feb 28 05:26:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.051 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:26:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.052 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb84407a-a47e-4bca-b2f2-2ea52d816fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.063 243456 DEBUG oslo_concurrency.processutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.069 243456 DEBUG nova.compute.provider_tree [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.087 243456 DEBUG nova.scheduler.client.report [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.169 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.201 243456 INFO nova.scheduler.client.report [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 9819900a-9819-4896-a490-54f445126d24#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.273 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.427 243456 DEBUG nova.compute.manager [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.427 243456 DEBUG oslo_concurrency.lockutils [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.429 243456 DEBUG oslo_concurrency.lockutils [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.430 243456 DEBUG oslo_concurrency.lockutils [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.431 243456 DEBUG nova.compute.manager [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] No waiting events found dispatching network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:26:58 np0005634017 nova_compute[243452]: 2026-02-28 10:26:58.432 243456 WARNING nova.compute.manager [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received unexpected event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:26:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 278 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 05:26:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:26:59 np0005634017 nova_compute[243452]: 2026-02-28 10:26:59.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:27:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 233 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.874 243456 DEBUG nova.compute.manager [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.874 243456 DEBUG nova.compute.manager [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing instance network info cache due to event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.875 243456 DEBUG oslo_concurrency.lockutils [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.875 243456 DEBUG oslo_concurrency.lockutils [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.875 243456 DEBUG nova.network.neutron [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.925 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.925 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.925 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.926 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.927 243456 INFO nova.compute.manager [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Terminating instance
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.928 243456 DEBUG nova.compute.manager [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 05:27:01 np0005634017 kernel: tap3e3d207f-39 (unregistering): left promiscuous mode
Feb 28 05:27:01 np0005634017 NetworkManager[49805]: <info>  [1772274421.9718] device (tap3e3d207f-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:01Z|01187|binding|INFO|Releasing lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 from this chassis (sb_readonly=0)
Feb 28 05:27:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:01Z|01188|binding|INFO|Setting lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 down in Southbound
Feb 28 05:27:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:01Z|01189|binding|INFO|Removing iface tap3e3d207f-39 ovn-installed in OVS
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.987 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:84:10 10.100.0.3'], port_security=['fa:16:3e:27:84:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8f174807-b15f-4588-83e1-c6e2ef2c2b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1940e3c-fc4a-432c-9cc8-be1893711fb9 f3ded7bc-f509-4fba-90dd-a4f12df4a200', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3e3d207f-3991-41f0-a55c-44cd0479e7f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:27:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.988 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 unbound from our chassis
Feb 28 05:27:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.989 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3c17451-fecb-4c3a-bc65-efba96c6e655, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 05:27:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.991 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edd93e0b-2991-45dc-80ad-8bd8b83e9c8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.992 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 namespace which is not needed anymore
Feb 28 05:27:01 np0005634017 nova_compute[243452]: 2026-02-28 10:27:01.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:02 np0005634017 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Deactivated successfully.
Feb 28 05:27:02 np0005634017 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Consumed 13.359s CPU time.
Feb 28 05:27:02 np0005634017 systemd-machined[209480]: Machine qemu-148-instance-00000075 terminated.
Feb 28 05:27:02 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : haproxy version is 2.8.14-c23fe91
Feb 28 05:27:02 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : path to executable is /usr/sbin/haproxy
Feb 28 05:27:02 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [WARNING]  (344412) : Exiting Master process...
Feb 28 05:27:02 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [WARNING]  (344412) : Exiting Master process...
Feb 28 05:27:02 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [ALERT]    (344412) : Current worker (344414) exited with code 143 (Terminated)
Feb 28 05:27:02 np0005634017 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [WARNING]  (344412) : All workers exited. Exiting... (0)
Feb 28 05:27:02 np0005634017 systemd[1]: libpod-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d.scope: Deactivated successfully.
Feb 28 05:27:02 np0005634017 podman[345650]: 2026-02-28 10:27:02.112513673 +0000 UTC m=+0.044489767 container died d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:27:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d-userdata-shm.mount: Deactivated successfully.
Feb 28 05:27:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fc280004ec33d4980617412d8deb4460aaa18af7068156edd80551a2bd739484-merged.mount: Deactivated successfully.
Feb 28 05:27:02 np0005634017 podman[345650]: 2026-02-28 10:27:02.157574156 +0000 UTC m=+0.089550230 container cleanup d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.165 243456 INFO nova.virt.libvirt.driver [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance destroyed successfully.
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.165 243456 DEBUG nova.objects.instance [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:27:02 np0005634017 systemd[1]: libpod-conmon-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d.scope: Deactivated successfully.
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.179 243456 DEBUG nova.virt.libvirt.vif [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:25:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=117,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:26:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-899856xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:26:01Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=8f174807-b15f-4588-83e1-c6e2ef2c2b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.180 243456 DEBUG nova.network.os_vif_util [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.181 243456 DEBUG nova.network.os_vif_util [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.181 243456 DEBUG os_vif [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.183 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e3d207f-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.189 243456 INFO os_vif [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39')
Feb 28 05:27:02 np0005634017 podman[345689]: 2026-02-28 10:27:02.239452429 +0000 UTC m=+0.058956557 container remove d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.245 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df378473-6479-43de-8f2f-b4f52abf32f1]: (4, ('Sat Feb 28 10:27:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 (d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d)\nd3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d\nSat Feb 28 10:27:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 (d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d)\nd3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.247 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eabc7944-d6f9-4db3-bb94-db5a7973fd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.248 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:02 np0005634017 kernel: tapa3c17451-f0: left promiscuous mode
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62fbfc00-e1fc-4da1-8621-ef0018538fcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfa83e7-5db2-4aa1-b1df-3b73850f17a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.280 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32876ac4-a185-4a87-960c-abdd3cb376c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e478ef70-d3eb-4f48-9680-45b1f6f494b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586815, 'reachable_time': 29546, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345722, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.299 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 05:27:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.299 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e9952a61-5054-464c-a1ff-a1b2efb819dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:27:02 np0005634017 systemd[1]: run-netns-ovnmeta\x2da3c17451\x2dfecb\x2d4c3a\x2dbc65\x2defba96c6e655.mount: Deactivated successfully.
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.463 243456 INFO nova.virt.libvirt.driver [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deleting instance files /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_del
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.464 243456 INFO nova.virt.libvirt.driver [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deletion of /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_del complete
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.510 243456 INFO nova.compute.manager [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 0.58 seconds to destroy the instance on the hypervisor.
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.510 243456 DEBUG oslo.service.loopingcall [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.511 243456 DEBUG nova.compute.manager [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:27:02 np0005634017 nova_compute[243452]: 2026-02-28 10:27:02.511 243456 DEBUG nova.network.neutron [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:27:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 201 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 771 KiB/s wr, 71 op/s
Feb 28 05:27:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.995 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-unplugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.995 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.995 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] No waiting events found dispatching network-vif-unplugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-unplugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] No waiting events found dispatching network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:03 np0005634017 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 WARNING nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received unexpected event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.018 243456 DEBUG nova.network.neutron [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.045 243456 INFO nova.compute.manager [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 1.53 seconds to deallocate network for instance.#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.136 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.137 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.208 243456 DEBUG oslo_concurrency.processutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.297 243456 DEBUG nova.network.neutron [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated VIF entry in instance network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.298 243456 DEBUG nova.network.neutron [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.325 243456 DEBUG oslo_concurrency.lockutils [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 201 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 35 KiB/s wr, 40 op/s
Feb 28 05:27:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:27:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24670061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.750 243456 DEBUG oslo_concurrency.processutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.755 243456 DEBUG nova.compute.provider_tree [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.780 243456 DEBUG nova.scheduler.client.report [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.810 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.843 243456 INFO nova.scheduler.client.report [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 8f174807-b15f-4588-83e1-c6e2ef2c2b4a#033[00m
Feb 28 05:27:04 np0005634017 nova_compute[243452]: 2026-02-28 10:27:04.954 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:06 np0005634017 nova_compute[243452]: 2026-02-28 10:27:06.135 243456 DEBUG nova.compute.manager [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-deleted-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:06 np0005634017 nova_compute[243452]: 2026-02-28 10:27:06.136 243456 INFO nova.compute.manager [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Neutron deleted interface 3e3d207f-3991-41f0-a55c-44cd0479e7f8; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:27:06 np0005634017 nova_compute[243452]: 2026-02-28 10:27:06.136 243456 DEBUG nova.network.neutron [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 28 05:27:06 np0005634017 nova_compute[243452]: 2026-02-28 10:27:06.138 243456 DEBUG nova.compute.manager [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Detach interface failed, port_id=3e3d207f-3991-41f0-a55c-44cd0479e7f8, reason: Instance 8f174807-b15f-4588-83e1-c6e2ef2c2b4a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:27:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 153 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 36 KiB/s wr, 65 op/s
Feb 28 05:27:07 np0005634017 nova_compute[243452]: 2026-02-28 10:27:07.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:07 np0005634017 nova_compute[243452]: 2026-02-28 10:27:07.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 28 05:27:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:09 np0005634017 nova_compute[243452]: 2026-02-28 10:27:09.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:10 np0005634017 nova_compute[243452]: 2026-02-28 10:27:10.240 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:10 np0005634017 nova_compute[243452]: 2026-02-28 10:27:10.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 42 op/s
Feb 28 05:27:11 np0005634017 nova_compute[243452]: 2026-02-28 10:27:11.155 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274416.1532054, 9819900a-9819-4896-a490-54f445126d24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:11 np0005634017 nova_compute[243452]: 2026-02-28 10:27:11.155 243456 INFO nova.compute.manager [-] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:27:11 np0005634017 nova_compute[243452]: 2026-02-28 10:27:11.176 243456 DEBUG nova.compute.manager [None req-b2e8f44d-8fd7-4dec-9d40-1b8aecee82fe - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:12 np0005634017 nova_compute[243452]: 2026-02-28 10:27:12.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 05:27:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:14 np0005634017 nova_compute[243452]: 2026-02-28 10:27:14.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 28 05:27:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 28 05:27:17 np0005634017 nova_compute[243452]: 2026-02-28 10:27:17.163 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274422.1619728, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:17 np0005634017 nova_compute[243452]: 2026-02-28 10:27:17.164 243456 INFO nova.compute.manager [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:27:17 np0005634017 nova_compute[243452]: 2026-02-28 10:27:17.187 243456 DEBUG nova.compute.manager [None req-d63962e3-d820-420f-b247-5fa2615b1b7e - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:17 np0005634017 nova_compute[243452]: 2026-02-28 10:27:17.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:27:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:19 np0005634017 nova_compute[243452]: 2026-02-28 10:27:19.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb 28 05:27:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb 28 05:27:19 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb 28 05:27:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 921 B/s wr, 10 op/s
Feb 28 05:27:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb 28 05:27:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb 28 05:27:20 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb 28 05:27:21 np0005634017 nova_compute[243452]: 2026-02-28 10:27:21.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:21.155 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:21.155 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.925870) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274441925938, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1949, "num_deletes": 251, "total_data_size": 3061579, "memory_usage": 3115664, "flush_reason": "Manual Compaction"}
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274441956260, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3007149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38133, "largest_seqno": 40081, "table_properties": {"data_size": 2998414, "index_size": 5357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18283, "raw_average_key_size": 20, "raw_value_size": 2980808, "raw_average_value_size": 3286, "num_data_blocks": 238, "num_entries": 907, "num_filter_entries": 907, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274244, "oldest_key_time": 1772274244, "file_creation_time": 1772274441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 30421 microseconds, and 5761 cpu microseconds.
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.956304) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3007149 bytes OK
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.956323) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.957707) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.957723) EVENT_LOG_v1 {"time_micros": 1772274441957718, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.957740) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3053349, prev total WAL file size 3053349, number of live WAL files 2.
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.958328) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2936KB)], [86(7824KB)]
Feb 28 05:27:21 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274441958353, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11019642, "oldest_snapshot_seqno": -1}
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6506 keys, 9336304 bytes, temperature: kUnknown
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274442010911, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9336304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9292780, "index_size": 26134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165205, "raw_average_key_size": 25, "raw_value_size": 9176483, "raw_average_value_size": 1410, "num_data_blocks": 1048, "num_entries": 6506, "num_filter_entries": 6506, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.011609) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9336304 bytes
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.012954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.7 rd, 176.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 7.6 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7024, records dropped: 518 output_compression: NoCompression
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.012987) EVENT_LOG_v1 {"time_micros": 1772274442012971, "job": 50, "event": "compaction_finished", "compaction_time_micros": 53053, "compaction_time_cpu_micros": 15457, "output_level": 6, "num_output_files": 1, "total_output_size": 9336304, "num_input_records": 7024, "num_output_records": 6506, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274442014379, "job": 50, "event": "table_file_deletion", "file_number": 88}
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274442016010, "job": 50, "event": "table_file_deletion", "file_number": 86}
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.958236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:27:22 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:27:22 np0005634017 nova_compute[243452]: 2026-02-28 10:27:22.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Feb 28 05:27:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:24 np0005634017 nova_compute[243452]: 2026-02-28 10:27:24.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:24 np0005634017 podman[345749]: 2026-02-28 10:27:24.152500812 +0000 UTC m=+0.082050019 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:27:24 np0005634017 podman[345748]: 2026-02-28 10:27:24.17793207 +0000 UTC m=+0.107447136 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 28 05:27:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 153 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 KiB/s wr, 27 op/s
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.353 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.668 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.669 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.696 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.777 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.778 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.781 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.782 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.792 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.793 243456 INFO nova.compute.claims [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.798 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.900 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb 28 05:27:25 np0005634017 nova_compute[243452]: 2026-02-28 10:27:25.953 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb 28 05:27:25 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb 28 05:27:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:26.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:27:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050811230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.558 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 153 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 4.7 KiB/s wr, 60 op/s
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.566 243456 DEBUG nova.compute.provider_tree [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.590 243456 DEBUG nova.scheduler.client.report [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.626 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.627 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.636 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.645 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.646 243456 INFO nova.compute.claims [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.702 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.703 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.721 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.737 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.783 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.820 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.822 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.822 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Creating image(s)#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.844 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.868 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.892 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.896 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.958 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.959 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.960 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:26 np0005634017 nova_compute[243452]: 2026-02-28 10:27:26.960 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb 28 05:27:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb 28 05:27:26 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.013 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.017 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 45cac133-9af0-462b-928c-05216ae1a68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.179 243456 DEBUG nova.policy [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ed826a3011e43d68aac3f001281440a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '859784d5f59f4db99fb375f781853be3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:27:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497929346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.329 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 45cac133-9af0-462b-928c-05216ae1a68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.377 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.417 243456 DEBUG nova.compute.provider_tree [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.425 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] resizing rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.466 243456 DEBUG nova.scheduler.client.report [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.493 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.494 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.567 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.568 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.576 243456 DEBUG nova.objects.instance [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'migration_context' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.595 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.599 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.600 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Ensure instance console log exists: /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.601 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.619 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.719 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.721 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.721 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Creating image(s)#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.744 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.770 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.794 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.798 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.863 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.863 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.864 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.864 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.885 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:27 np0005634017 nova_compute[243452]: 2026-02-28 10:27:27.889 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 502b3848-9702-4288-860e-d9b13ab3b047_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb 28 05:27:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb 28 05:27:27 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.234 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 502b3848-9702-4288-860e-d9b13ab3b047_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.314 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] resizing rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.376 243456 DEBUG nova.policy [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9dd03f07d754030bedc45ef75a2ceb8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b907eb5634054c23999a514f3cbfbc23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.421 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Successfully created port: 63cc9218-a429-4d50-9dad-e3849863cae1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.432 243456 DEBUG nova.objects.instance [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lazy-loading 'migration_context' on Instance uuid 502b3848-9702-4288-860e-d9b13ab3b047 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.448 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.448 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Ensure instance console log exists: /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.449 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.449 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:28 np0005634017 nova_compute[243452]: 2026-02-28 10:27:28.449 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 171 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.9 MiB/s wr, 114 op/s
Feb 28 05:27:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb 28 05:27:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb 28 05:27:28 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb 28 05:27:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:27:29
Feb 28 05:27:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:27:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:27:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'volumes', 'default.rgw.log']
Feb 28 05:27:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.150 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.167 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Successfully updated port: 63cc9218-a429-4d50-9dad-e3849863cae1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.189 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.190 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.190 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.294 243456 DEBUG nova.compute.manager [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-changed-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.294 243456 DEBUG nova.compute.manager [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Refreshing instance network info cache due to event network-changed-63cc9218-a429-4d50-9dad-e3849863cae1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.294 243456 DEBUG oslo_concurrency.lockutils [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.344 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:27:29 np0005634017 nova_compute[243452]: 2026-02-28 10:27:29.524 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Successfully created port: a4f4f33b-d010-42c3-9963-b0602fd11558 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:27:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb 28 05:27:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb 28 05:27:30 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 223 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 7.3 MiB/s wr, 310 op/s
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.659 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.684 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.685 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance network_info: |[{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.685 243456 DEBUG oslo_concurrency.lockutils [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.686 243456 DEBUG nova.network.neutron [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Refreshing network info cache for port 63cc9218-a429-4d50-9dad-e3849863cae1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.690 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start _get_guest_xml network_info=[{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.698 243456 WARNING nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.711 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.712 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.717 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.718 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.719 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.719 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.720 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.720 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.721 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.721 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.721 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.722 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.722 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.722 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.723 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.723 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:27:30 np0005634017 nova_compute[243452]: 2026-02-28 10:27:30.728 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:27:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.203 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Successfully updated port: a4f4f33b-d010-42c3-9963-b0602fd11558 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.229 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.229 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquired lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.230 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279637329' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.350 243456 DEBUG nova.compute.manager [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.351 243456 DEBUG nova.compute.manager [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing instance network info cache due to event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.351 243456 DEBUG oslo_concurrency.lockutils [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.361 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.395 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.401 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.460 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:27:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882330843' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.948 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.950 243456 DEBUG nova.virt.libvirt.vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244
453076-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:26Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.950 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.951 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.953 243456 DEBUG nova.objects.instance [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.970 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <uuid>45cac133-9af0-462b-928c-05216ae1a68e</uuid>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <name>instance-00000077</name>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestServerAdvancedOps-server-22377150</nova:name>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:27:30</nova:creationTime>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:user uuid="3ed826a3011e43d68aac3f001281440a">tempest-TestServerAdvancedOps-244453076-project-member</nova:user>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:project uuid="859784d5f59f4db99fb375f781853be3">tempest-TestServerAdvancedOps-244453076</nova:project>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <nova:port uuid="63cc9218-a429-4d50-9dad-e3849863cae1">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <entry name="serial">45cac133-9af0-462b-928c-05216ae1a68e</entry>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <entry name="uuid">45cac133-9af0-462b-928c-05216ae1a68e</entry>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/45cac133-9af0-462b-928c-05216ae1a68e_disk">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/45cac133-9af0-462b-928c-05216ae1a68e_disk.config">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:26:2a:f0"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <target dev="tap63cc9218-a4"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/console.log" append="off"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:27:31 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:27:31 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:27:31 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:27:31 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.971 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Preparing to wait for external event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.972 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.973 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.973 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.974 243456 DEBUG nova.virt.libvirt.vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:26Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.974 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.975 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.976 243456 DEBUG os_vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.978 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.984 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63cc9218-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.985 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63cc9218-a4, col_values=(('external_ids', {'iface-id': '63cc9218-a429-4d50-9dad-e3849863cae1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:2a:f0', 'vm-uuid': '45cac133-9af0-462b-928c-05216ae1a68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:31 np0005634017 NetworkManager[49805]: <info>  [1772274451.9882] manager: (tap63cc9218-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:31 np0005634017 nova_compute[243452]: 2026-02-28 10:27:31.994 243456 INFO os_vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.051 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.052 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.052 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] No VIF found with MAC fa:16:3e:26:2a:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.053 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Using config drive#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.079 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 6.8 MiB/s wr, 216 op/s
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.699 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Creating config drive at /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.703 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8kzndjyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.837 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8kzndjyj" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.861 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:32 np0005634017 nova_compute[243452]: 2026-02-28 10:27:32.865 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config 45cac133-9af0-462b-928c-05216ae1a68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.011 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config 45cac133-9af0-462b-928c-05216ae1a68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.012 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deleting local config drive /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config because it was imported into RBD.#033[00m
Feb 28 05:27:33 np0005634017 kernel: tap63cc9218-a4: entered promiscuous mode
Feb 28 05:27:33 np0005634017 NetworkManager[49805]: <info>  [1772274453.0709] manager: (tap63cc9218-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Feb 28 05:27:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:33Z|01190|binding|INFO|Claiming lport 63cc9218-a429-4d50-9dad-e3849863cae1 for this chassis.
Feb 28 05:27:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:33Z|01191|binding|INFO|63cc9218-a429-4d50-9dad-e3849863cae1: Claiming fa:16:3e:26:2a:f0 10.100.0.11
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.081 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.082 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 bound to our chassis#033[00m
Feb 28 05:27:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.083 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:27:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.084 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[525f4e94-6b68-4c6f-8af6-e8d1170a8e3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:33 np0005634017 systemd-udevd[346306]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:27:33 np0005634017 NetworkManager[49805]: <info>  [1772274453.1086] device (tap63cc9218-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:27:33 np0005634017 NetworkManager[49805]: <info>  [1772274453.1092] device (tap63cc9218-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:27:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:33Z|01192|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 ovn-installed in OVS
Feb 28 05:27:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:33Z|01193|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 up in Southbound
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:33 np0005634017 systemd-machined[209480]: New machine qemu-150-instance-00000077.
Feb 28 05:27:33 np0005634017 systemd[1]: Started Virtual Machine qemu-150-instance-00000077.
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.163 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Releasing lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance network_info: |[{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG oslo_concurrency.lockutils [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG nova.network.neutron [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.186 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start _get_guest_xml network_info=[{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.192 243456 WARNING nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.196 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.197 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.248 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.249 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.249 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.249 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.250 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.250 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.250 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.252 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.252 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.255 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.389 243456 DEBUG nova.compute.manager [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.390 243456 DEBUG oslo_concurrency.lockutils [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.390 243456 DEBUG oslo_concurrency.lockutils [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.390 243456 DEBUG oslo_concurrency.lockutils [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.391 243456 DEBUG nova.compute.manager [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Processing event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.486 243456 DEBUG nova.network.neutron [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updated VIF entry in instance network info cache for port 63cc9218-a429-4d50-9dad-e3849863cae1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.488 243456 DEBUG nova.network.neutron [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.508 243456 DEBUG oslo_concurrency.lockutils [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.753 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.756 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274453.7523522, 45cac133-9af0-462b-928c-05216ae1a68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.756 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:27:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.763 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.770 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance spawned successfully.#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.771 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.781 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.786 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.802 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.803 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.804 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.805 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.806 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.807 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.814 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.815 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274453.7526946, 45cac133-9af0-462b-928c-05216ae1a68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:33 np0005634017 nova_compute[243452]: 2026-02-28 10:27:33.816 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:27:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:27:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144905592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:27:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb 28 05:27:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.063 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:34 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.067 243456 INFO nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 7.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.068 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.069 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.814s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.103 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.107 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.147 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274453.7628763, 45cac133-9af0-462b-928c-05216ae1a68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.147 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.187 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.190 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.245 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.270 243456 INFO nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 8.52 seconds to build instance.#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.292 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 823 KiB/s rd, 5.3 MiB/s wr, 222 op/s
Feb 28 05:27:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:27:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48174800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.683 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.685 243456 DEBUG nova.virt.libvirt.vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-952479761',display_name='tempest-TestServerBasicOps-server-952479761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-952479761',id=120,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxvwaoaZS6lxY+qm3SVwVdpmr4odFer5lT4S2h//UYF7wFrY/sNYSd6hzwRrmh2VB5KT4fELzlhq046tWMx92gixHEtTbsSYNmG1SF9Z5rksMEf2+FpLLjssyHNY9JdaA==',key_name='tempest-TestServerBasicOps-1169969291',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b907eb5634054c23999a514f3cbfbc23',ramdisk_id='',reservation_id='r-1m2lpc4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-267277269',owner_user_name='tempest-TestServerBasicOps-267277269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9dd03f07d754030bedc45ef75a2ceb8',uuid=502b3848-9702-4288-860e-d9b13ab3b047,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.686 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converting VIF {"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.687 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.689 243456 DEBUG nova.objects.instance [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 502b3848-9702-4288-860e-d9b13ab3b047 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.712 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <uuid>502b3848-9702-4288-860e-d9b13ab3b047</uuid>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <name>instance-00000078</name>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestServerBasicOps-server-952479761</nova:name>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:27:33</nova:creationTime>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:user uuid="f9dd03f07d754030bedc45ef75a2ceb8">tempest-TestServerBasicOps-267277269-project-member</nova:user>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:project uuid="b907eb5634054c23999a514f3cbfbc23">tempest-TestServerBasicOps-267277269</nova:project>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <nova:port uuid="a4f4f33b-d010-42c3-9963-b0602fd11558">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <entry name="serial">502b3848-9702-4288-860e-d9b13ab3b047</entry>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <entry name="uuid">502b3848-9702-4288-860e-d9b13ab3b047</entry>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/502b3848-9702-4288-860e-d9b13ab3b047_disk">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/502b3848-9702-4288-860e-d9b13ab3b047_disk.config">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:be:1f:18"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <target dev="tapa4f4f33b-d0"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/console.log" append="off"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:27:34 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:27:34 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:27:34 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:27:34 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.713 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Preparing to wait for external event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.714 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.714 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.715 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.716 243456 DEBUG nova.virt.libvirt.vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-952479761',display_name='tempest-TestServerBasicOps-server-952479761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-952479761',id=120,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxvwaoaZS6lxY+qm3SVwVdpmr4odFer5lT4S2h//UYF7wFrY/sNYSd6hzwRrmh2VB5KT4fELzlhq046tWMx92gixHEtTbsSYNmG1SF9Z5rksMEf2+FpLLjssyHNY9JdaA==',key_name='tempest-TestServerBasicOps-1169969291',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b907eb5634054c23999a514f3cbfbc23',ramdisk_id='',reservation_id='r-1m2lpc4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-267277269',owner_user_name='tempest-TestServerBasicOps-267277269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9dd03f07d754030bedc45ef75a2ceb8',uuid=502b3848-9702-4288-860e-d9b13ab3b047,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.716 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converting VIF {"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.718 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.718 243456 DEBUG os_vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.720 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.720 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.725 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.726 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4f4f33b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.727 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4f4f33b-d0, col_values=(('external_ids', {'iface-id': 'a4f4f33b-d010-42c3-9963-b0602fd11558', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:1f:18', 'vm-uuid': '502b3848-9702-4288-860e-d9b13ab3b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:34 np0005634017 NetworkManager[49805]: <info>  [1772274454.7304] manager: (tapa4f4f33b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.734 243456 INFO os_vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0')#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.791 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.792 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.792 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] No VIF found with MAC fa:16:3e:be:1f:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.793 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Using config drive#033[00m
Feb 28 05:27:34 np0005634017 nova_compute[243452]: 2026-02-28 10:27:34.822 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb 28 05:27:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb 28 05:27:35 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.414 243456 DEBUG nova.network.neutron [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updated VIF entry in instance network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.415 243456 DEBUG nova.network.neutron [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.444 243456 DEBUG oslo_concurrency.lockutils [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.525 243456 DEBUG nova.compute.manager [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.526 243456 DEBUG oslo_concurrency.lockutils [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.527 243456 DEBUG oslo_concurrency.lockutils [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.527 243456 DEBUG oslo_concurrency.lockutils [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.528 243456 DEBUG nova.compute.manager [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.528 243456 WARNING nova.compute.manager [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.560 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.561 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.561 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.562 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.582 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Creating config drive at /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.591 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmo063itf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.736 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmo063itf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.772 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.776 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config 502b3848-9702-4288-860e-d9b13ab3b047_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.920 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config 502b3848-9702-4288-860e-d9b13ab3b047_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.921 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deleting local config drive /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config because it was imported into RBD.#033[00m
Feb 28 05:27:35 np0005634017 kernel: tapa4f4f33b-d0: entered promiscuous mode
Feb 28 05:27:35 np0005634017 NetworkManager[49805]: <info>  [1772274455.9722] manager: (tapa4f4f33b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:35Z|01194|binding|INFO|Claiming lport a4f4f33b-d010-42c3-9963-b0602fd11558 for this chassis.
Feb 28 05:27:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:35Z|01195|binding|INFO|a4f4f33b-d010-42c3-9963-b0602fd11558: Claiming fa:16:3e:be:1f:18 10.100.0.3
Feb 28 05:27:35 np0005634017 nova_compute[243452]: 2026-02-28 10:27:35.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:35 np0005634017 NetworkManager[49805]: <info>  [1772274455.9912] device (tapa4f4f33b-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:27:35 np0005634017 NetworkManager[49805]: <info>  [1772274455.9944] device (tapa4f4f33b-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:27:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:35.992 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:1f:18 10.100.0.3'], port_security=['fa:16:3e:be:1f:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '502b3848-9702-4288-860e-d9b13ab3b047', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d755deef-15d2-410a-9b1a-81df70c45c93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b907eb5634054c23999a514f3cbfbc23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fc58a0a-8070-4472-9d4a-0833b80c1776 d0518d2f-a440-4fc3-9c12-d503c74451c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f692542-8d39-4073-b698-c331b927e5a0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a4f4f33b-d010-42c3-9963-b0602fd11558) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:35.995 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a4f4f33b-d010-42c3-9963-b0602fd11558 in datapath d755deef-15d2-410a-9b1a-81df70c45c93 bound to our chassis#033[00m
Feb 28 05:27:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:35.997 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d755deef-15d2-410a-9b1a-81df70c45c93#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b950c6be-3992-4652-bfa3-f212e4feceeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.010 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd755deef-11 in ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:27:36 np0005634017 systemd-machined[209480]: New machine qemu-151-instance-00000078.
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.021 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd755deef-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.021 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90100bb7-5fc6-4480-a92c-e42271b88542]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9be2c0-60cc-4a35-bfc8-db3a30f850d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:36 np0005634017 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Feb 28 05:27:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:36Z|01196|binding|INFO|Setting lport a4f4f33b-d010-42c3-9963-b0602fd11558 ovn-installed in OVS
Feb 28 05:27:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:36Z|01197|binding|INFO|Setting lport a4f4f33b-d010-42c3-9963-b0602fd11558 up in Southbound
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.040 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a53d241b-7223-4f2d-951f-f7310ab66b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.064 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9328ec45-c6cd-4e33-91c2-c8df99ac62c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb 28 05:27:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb 28 05:27:36 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.107 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a43da3ca-bce8-4939-8c41-1fa39ffc5294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.114 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15aeeadc-8d5c-460d-85ac-db39245ba014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 NetworkManager[49805]: <info>  [1772274456.1165] manager: (tapd755deef-10): new Veth device (/org/freedesktop/NetworkManager/Devices/494)
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.139 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9fdcfb-1125-4112-8e2c-13f7c0d0c9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.144 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd3253a-5f0a-412c-88f3-7911565910b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 systemd-udevd[346508]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:27:36 np0005634017 NetworkManager[49805]: <info>  [1772274456.1788] device (tapd755deef-10): carrier: link connected
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.185 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[29439129-8c2e-4976-9b52-0814154d9755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5bb0d1-6538-4360-85d3-f03706d07fe1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd755deef-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:97:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596346, 'reachable_time': 21250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346527, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e694fdcf-e45b-4ca1-9438-8579acc9680d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:97f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596346, 'tstamp': 596346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346543, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8eab2040-eec1-4593-bd9d-c753f01d9393]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd755deef-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:97:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596346, 'reachable_time': 21250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346554, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.293 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1161635a-33c0-4475-8f89-1fbeb697a375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.354 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274456.353527, 502b3848-9702-4288-860e-d9b13ab3b047 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.354 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Started (Lifecycle Event)#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62614d6e-8820-4a4a-b049-d3ac9323d2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.375 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd755deef-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.375 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.376 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd755deef-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:36 np0005634017 kernel: tapd755deef-10: entered promiscuous mode
Feb 28 05:27:36 np0005634017 NetworkManager[49805]: <info>  [1772274456.3790] manager: (tapd755deef-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.381 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd755deef-10, col_values=(('external_ids', {'iface-id': 'cbbfe533-6ee1-4103-ad47-26b6e9271250'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:36Z|01198|binding|INFO|Releasing lport cbbfe533-6ee1-4103-ad47-26b6e9271250 from this chassis (sb_readonly=0)
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.384 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d755deef-15d2-410a-9b1a-81df70c45c93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d755deef-15d2-410a-9b1a-81df70c45c93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.385 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[33e43b02-4384-4094-a5c7-4aeb333d4c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.386 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-d755deef-15d2-410a-9b1a-81df70c45c93
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/d755deef-15d2-410a-9b1a-81df70c45c93.pid.haproxy
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID d755deef-15d2-410a-9b1a-81df70c45c93
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:27:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.387 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'env', 'PROCESS_TAG=haproxy-d755deef-15d2-410a-9b1a-81df70c45c93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d755deef-15d2-410a-9b1a-81df70c45c93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.396 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.400 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274456.3537502, 502b3848-9702-4288-860e-d9b13ab3b047 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.401 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.440 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.444 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.475 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:27:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 215 op/s
Feb 28 05:27:36 np0005634017 podman[346603]: 2026-02-28 10:27:36.76159766 +0000 UTC m=+0.055546669 container create e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.776 243456 DEBUG nova.compute.manager [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.777 243456 DEBUG oslo_concurrency.lockutils [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.778 243456 DEBUG oslo_concurrency.lockutils [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.778 243456 DEBUG oslo_concurrency.lockutils [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.779 243456 DEBUG nova.compute.manager [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Processing event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.780 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.785 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274456.7845898, 502b3848-9702-4288-860e-d9b13ab3b047 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.785 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.787 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.794 243456 INFO nova.virt.libvirt.driver [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance spawned successfully.#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.794 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:27:36 np0005634017 systemd[1]: Started libpod-conmon-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope.
Feb 28 05:27:36 np0005634017 podman[346603]: 2026-02-28 10:27:36.731351126 +0000 UTC m=+0.025300155 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:27:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.836 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7740129404eead1c6a7f09e8f10ed3317b876a311baeb37869bca63dec79ce5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.841 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.842 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.843 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.843 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.844 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.844 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.850 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:36 np0005634017 podman[346603]: 2026-02-28 10:27:36.85186196 +0000 UTC m=+0.145810999 container init e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:27:36 np0005634017 podman[346603]: 2026-02-28 10:27:36.856219543 +0000 UTC m=+0.150168552 container start e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:27:36 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : New worker (346624) forked
Feb 28 05:27:36 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : Loading success.
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.881 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.908 243456 INFO nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 9.19 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.909 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.972 243456 INFO nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 11.10 seconds to build instance.#033[00m
Feb 28 05:27:36 np0005634017 nova_compute[243452]: 2026-02-28 10:27:36.988 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb 28 05:27:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb 28 05:27:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb 28 05:27:37 np0005634017 nova_compute[243452]: 2026-02-28 10:27:37.858 243456 DEBUG nova.objects.instance [None req-f6cb8fbb-c3f7-4b00-9dc8-471ed941566b 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:37 np0005634017 nova_compute[243452]: 2026-02-28 10:27:37.884 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274457.8840063, 45cac133-9af0-462b-928c-05216ae1a68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:37 np0005634017 nova_compute[243452]: 2026-02-28 10:27:37.884 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:27:37 np0005634017 nova_compute[243452]: 2026-02-28 10:27:37.913 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:37 np0005634017 nova_compute[243452]: 2026-02-28 10:27:37.926 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:37 np0005634017 nova_compute[243452]: 2026-02-28 10:27:37.958 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.111 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.136 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.137 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.138 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.138 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.138 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.139 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.161 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.162 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.162 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.162 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.163 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb 28 05:27:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb 28 05:27:38 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb 28 05:27:38 np0005634017 kernel: tap63cc9218-a4 (unregistering): left promiscuous mode
Feb 28 05:27:38 np0005634017 NetworkManager[49805]: <info>  [1772274458.3544] device (tap63cc9218-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:38Z|01199|binding|INFO|Releasing lport 63cc9218-a429-4d50-9dad-e3849863cae1 from this chassis (sb_readonly=0)
Feb 28 05:27:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:38Z|01200|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 down in Southbound
Feb 28 05:27:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:38Z|01201|binding|INFO|Removing iface tap63cc9218-a4 ovn-installed in OVS
Feb 28 05:27:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.364 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.368 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 unbound from our chassis#033[00m
Feb 28 05:27:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.370 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08055500-cb8b-495a-a7ca-aca50d09eb22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:38 np0005634017 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 28 05:27:38 np0005634017 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Consumed 4.803s CPU time.
Feb 28 05:27:38 np0005634017 systemd-machined[209480]: Machine qemu-150-instance-00000077 terminated.
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.534 243456 DEBUG nova.compute.manager [None req-f6cb8fbb-c3f7-4b00-9dc8-471ed941566b 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 90 KiB/s wr, 383 op/s
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.615 243456 DEBUG nova.compute.manager [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.618 243456 DEBUG oslo_concurrency.lockutils [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.619 243456 DEBUG oslo_concurrency.lockutils [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.619 243456 DEBUG oslo_concurrency.lockutils [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.620 243456 DEBUG nova.compute.manager [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.620 243456 WARNING nova.compute.manager [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state suspending.#033[00m
Feb 28 05:27:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:27:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845595973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.791 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.886 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.889 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.896 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.896 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.902 243456 DEBUG nova.compute.manager [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.902 243456 DEBUG oslo_concurrency.lockutils [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.903 243456 DEBUG oslo_concurrency.lockutils [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.903 243456 DEBUG oslo_concurrency.lockutils [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.904 243456 DEBUG nova.compute.manager [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] No waiting events found dispatching network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:38 np0005634017 nova_compute[243452]: 2026-02-28 10:27:38.904 243456 WARNING nova.compute.manager [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received unexpected event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.074 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.076 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3479MB free_disk=59.94584787450731GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.076 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.076 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.146 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 45cac133-9af0-462b-928c-05216ae1a68e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.147 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 502b3848-9702-4288-860e-d9b13ab3b047 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.148 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.148 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.152 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.200 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb 28 05:27:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb 28 05:27:39 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:27:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180287' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.761 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.768 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.790 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.825 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:27:39 np0005634017 nova_compute[243452]: 2026-02-28 10:27:39.826 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.562 243456 INFO nova.compute.manager [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Resuming#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.564 243456 DEBUG nova.objects.instance [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'flavor' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 52 KiB/s wr, 398 op/s
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.609 243456 DEBUG oslo_concurrency.lockutils [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.609 243456 DEBUG oslo_concurrency.lockutils [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.610 243456 DEBUG nova.network.neutron [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:27:40 np0005634017 NetworkManager[49805]: <info>  [1772274460.6156] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:40 np0005634017 NetworkManager[49805]: <info>  [1772274460.6170] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:40Z|01202|binding|INFO|Releasing lport cbbfe533-6ee1-4103-ad47-26b6e9271250 from this chassis (sb_readonly=0)
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.746 243456 DEBUG nova.compute.manager [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.747 243456 DEBUG oslo_concurrency.lockutils [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.748 243456 DEBUG oslo_concurrency.lockutils [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.748 243456 DEBUG oslo_concurrency.lockutils [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.749 243456 DEBUG nova.compute.manager [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.749 243456 WARNING nova.compute.manager [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state resuming.#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.821 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.974 243456 DEBUG nova.compute.manager [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.976 243456 DEBUG nova.compute.manager [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing instance network info cache due to event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.976 243456 DEBUG oslo_concurrency.lockutils [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.977 243456 DEBUG oslo_concurrency.lockutils [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:40 np0005634017 nova_compute[243452]: 2026-02-28 10:27:40.978 243456 DEBUG nova.network.neutron [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007102768922788274 of space, bias 1.0, pg target 0.2130830676836482 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493632680435017 of space, bias 1.0, pg target 0.7480898041305051 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36261163984665e-07 of space, bias 4.0, pg target 0.000883513396781598 quantized to 16 (current 16)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:27:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:27:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 41 KiB/s wr, 397 op/s
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.671 243456 DEBUG nova.network.neutron [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.706 243456 DEBUG oslo_concurrency.lockutils [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.713 243456 DEBUG nova.virt.libvirt.vif [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:38Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.714 243456 DEBUG nova.network.os_vif_util [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.715 243456 DEBUG nova.network.os_vif_util [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.716 243456 DEBUG os_vif [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.718 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.719 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.723 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63cc9218-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.724 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63cc9218-a4, col_values=(('external_ids', {'iface-id': '63cc9218-a429-4d50-9dad-e3849863cae1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:2a:f0', 'vm-uuid': '45cac133-9af0-462b-928c-05216ae1a68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.725 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.726 243456 INFO os_vif [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.744 243456 DEBUG nova.objects.instance [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.748 243456 DEBUG nova.network.neutron [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updated VIF entry in instance network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.749 243456 DEBUG nova.network.neutron [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.773 243456 DEBUG oslo_concurrency.lockutils [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:42 np0005634017 kernel: tap63cc9218-a4: entered promiscuous mode
Feb 28 05:27:42 np0005634017 NetworkManager[49805]: <info>  [1772274462.8281] manager: (tap63cc9218-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/498)
Feb 28 05:27:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:42Z|01203|binding|INFO|Claiming lport 63cc9218-a429-4d50-9dad-e3849863cae1 for this chassis.
Feb 28 05:27:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:42Z|01204|binding|INFO|63cc9218-a429-4d50-9dad-e3849863cae1: Claiming fa:16:3e:26:2a:f0 10.100.0.11
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.847 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.850 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 bound to our chassis#033[00m
Feb 28 05:27:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.851 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:27:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9540f8a3-1b44-4b29-9ac7-9c62292baff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:42 np0005634017 systemd-udevd[346711]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:27:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:42Z|01205|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 ovn-installed in OVS
Feb 28 05:27:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:42Z|01206|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 up in Southbound
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:42 np0005634017 nova_compute[243452]: 2026-02-28 10:27:42.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:42 np0005634017 NetworkManager[49805]: <info>  [1772274462.8809] device (tap63cc9218-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:27:42 np0005634017 systemd-machined[209480]: New machine qemu-152-instance-00000077.
Feb 28 05:27:42 np0005634017 NetworkManager[49805]: <info>  [1772274462.8846] device (tap63cc9218-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:27:42 np0005634017 systemd[1]: Started Virtual Machine qemu-152-instance-00000077.
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.470 243456 DEBUG nova.compute.manager [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.471 243456 DEBUG oslo_concurrency.lockutils [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.472 243456 DEBUG oslo_concurrency.lockutils [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.472 243456 DEBUG oslo_concurrency.lockutils [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.473 243456 DEBUG nova.compute.manager [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.474 243456 WARNING nova.compute.manager [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state resuming.#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.618 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 45cac133-9af0-462b-928c-05216ae1a68e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.619 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274463.6178133, 45cac133-9af0-462b-928c-05216ae1a68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.619 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.634 243456 DEBUG nova.compute.manager [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.636 243456 DEBUG nova.objects.instance [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.656 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.664 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.678 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance running successfully.#033[00m
Feb 28 05:27:43 np0005634017 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.683 243456 DEBUG nova.virt.libvirt.guest [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.684 243456 DEBUG nova.compute.manager [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.690 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.691 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274463.6228464, 45cac133-9af0-462b-928c-05216ae1a68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.691 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.719 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:43 np0005634017 nova_compute[243452]: 2026-02-28 10:27:43.724 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb 28 05:27:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb 28 05:27:43 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb 28 05:27:44 np0005634017 nova_compute[243452]: 2026-02-28 10:27:44.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.5 KiB/s wr, 258 op/s
Feb 28 05:27:44 np0005634017 nova_compute[243452]: 2026-02-28 10:27:44.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.288 243456 DEBUG nova.objects.instance [None req-1b68bd6c-cd04-4c4b-88db-efb32514dc91 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.321 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274465.321013, 45cac133-9af0-462b-928c-05216ae1a68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.322 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.342 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.346 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.363 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 28 05:27:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:27:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591691403' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:27:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:27:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591691403' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:27:45 np0005634017 kernel: tap63cc9218-a4 (unregistering): left promiscuous mode
Feb 28 05:27:45 np0005634017 NetworkManager[49805]: <info>  [1772274465.6772] device (tap63cc9218-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.682 243456 DEBUG nova.compute.manager [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG oslo_concurrency.lockutils [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG oslo_concurrency.lockutils [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG oslo_concurrency.lockutils [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG nova.compute.manager [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.684 243456 WARNING nova.compute.manager [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state suspending.#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:45Z|01207|binding|INFO|Releasing lport 63cc9218-a429-4d50-9dad-e3849863cae1 from this chassis (sb_readonly=0)
Feb 28 05:27:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:45Z|01208|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 down in Southbound
Feb 28 05:27:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:45Z|01209|binding|INFO|Removing iface tap63cc9218-a4 ovn-installed in OVS
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.700 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.703 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 unbound from our chassis#033[00m
Feb 28 05:27:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.705 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:27:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d94f5cdd-87bd-4ddb-940e-25bce2ef7555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:45 np0005634017 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 28 05:27:45 np0005634017 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000077.scope: Consumed 2.424s CPU time.
Feb 28 05:27:45 np0005634017 systemd-machined[209480]: Machine qemu-152-instance-00000077 terminated.
Feb 28 05:27:45 np0005634017 nova_compute[243452]: 2026-02-28 10:27:45.858 243456 DEBUG nova.compute.manager [None req-1b68bd6c-cd04-4c4b-88db-efb32514dc91 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.9 KiB/s wr, 202 op/s
Feb 28 05:27:47 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.794 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.794 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.795 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.795 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.796 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.796 243456 WARNING nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.797 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.797 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.798 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.798 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.799 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:47 np0005634017 nova_compute[243452]: 2026-02-28 10:27:47.799 243456 WARNING nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state None.#033[00m
Feb 28 05:27:48 np0005634017 nova_compute[243452]: 2026-02-28 10:27:48.273 243456 INFO nova.compute.manager [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Resuming#033[00m
Feb 28 05:27:48 np0005634017 nova_compute[243452]: 2026-02-28 10:27:48.275 243456 DEBUG nova.objects.instance [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'flavor' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:48 np0005634017 nova_compute[243452]: 2026-02-28 10:27:48.325 243456 DEBUG oslo_concurrency.lockutils [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:27:48 np0005634017 nova_compute[243452]: 2026-02-28 10:27:48.326 243456 DEBUG oslo_concurrency.lockutils [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:27:48 np0005634017 nova_compute[243452]: 2026-02-28 10:27:48.326 243456 DEBUG nova.network.neutron [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:27:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 KiB/s wr, 182 op/s
Feb 28 05:27:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb 28 05:27:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb 28 05:27:48 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb 28 05:27:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:48Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:1f:18 10.100.0.3
Feb 28 05:27:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:48Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:1f:18 10.100.0.3
Feb 28 05:27:49 np0005634017 nova_compute[243452]: 2026-02-28 10:27:49.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:49 np0005634017 nova_compute[243452]: 2026-02-28 10:27:49.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.153 243456 DEBUG nova.network.neutron [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.178 243456 DEBUG oslo_concurrency.lockutils [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.187 243456 DEBUG nova.virt.libvirt.vif [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:45Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.188 243456 DEBUG nova.network.os_vif_util [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.189 243456 DEBUG nova.network.os_vif_util [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.190 243456 DEBUG os_vif [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.191 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.192 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.196 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63cc9218-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.196 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63cc9218-a4, col_values=(('external_ids', {'iface-id': '63cc9218-a429-4d50-9dad-e3849863cae1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:2a:f0', 'vm-uuid': '45cac133-9af0-462b-928c-05216ae1a68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.197 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.197 243456 INFO os_vif [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.213 243456 DEBUG nova.objects.instance [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:50 np0005634017 kernel: tap63cc9218-a4: entered promiscuous mode
Feb 28 05:27:50 np0005634017 NetworkManager[49805]: <info>  [1772274470.2793] manager: (tap63cc9218-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Feb 28 05:27:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:50Z|01210|binding|INFO|Claiming lport 63cc9218-a429-4d50-9dad-e3849863cae1 for this chassis.
Feb 28 05:27:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:50Z|01211|binding|INFO|63cc9218-a429-4d50-9dad-e3849863cae1: Claiming fa:16:3e:26:2a:f0 10.100.0.11
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.281 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.291 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:50Z|01212|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 ovn-installed in OVS
Feb 28 05:27:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:50Z|01213|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 up in Southbound
Feb 28 05:27:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.294 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 bound to our chassis#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.295 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.295 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:27:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6f7726-ac66-449d-8fa0-4e91a135c50f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:50 np0005634017 systemd-udevd[346806]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:27:50 np0005634017 systemd-machined[209480]: New machine qemu-153-instance-00000077.
Feb 28 05:27:50 np0005634017 NetworkManager[49805]: <info>  [1772274470.3261] device (tap63cc9218-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:27:50 np0005634017 systemd[1]: Started Virtual Machine qemu-153-instance-00000077.
Feb 28 05:27:50 np0005634017 NetworkManager[49805]: <info>  [1772274470.3292] device (tap63cc9218-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:27:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 267 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.0 MiB/s wr, 67 op/s
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.623 243456 DEBUG nova.compute.manager [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.625 243456 DEBUG oslo_concurrency.lockutils [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.626 243456 DEBUG oslo_concurrency.lockutils [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.627 243456 DEBUG oslo_concurrency.lockutils [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.628 243456 DEBUG nova.compute.manager [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.628 243456 WARNING nova.compute.manager [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state resuming.#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.767 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 45cac133-9af0-462b-928c-05216ae1a68e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.768 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274470.7671397, 45cac133-9af0-462b-928c-05216ae1a68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.768 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.780 243456 DEBUG nova.compute.manager [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.780 243456 DEBUG nova.objects.instance [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.787 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.792 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.798 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance running successfully.#033[00m
Feb 28 05:27:50 np0005634017 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.801 243456 DEBUG nova.virt.libvirt.guest [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.802 243456 DEBUG nova.compute.manager [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.812 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.812 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274470.7731998, 45cac133-9af0-462b-928c-05216ae1a68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.813 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.836 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:27:50 np0005634017 nova_compute[243452]: 2026-02-28 10:27:50.841 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:27:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 279 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 2.9 MiB/s wr, 94 op/s
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.762 243456 DEBUG nova.compute.manager [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.763 243456 DEBUG oslo_concurrency.lockutils [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.764 243456 DEBUG oslo_concurrency.lockutils [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.764 243456 DEBUG oslo_concurrency.lockutils [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.764 243456 DEBUG nova.compute.manager [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.765 243456 WARNING nova.compute.manager [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.780 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.783 243456 INFO nova.compute.manager [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Terminating instance#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.784 243456 DEBUG nova.compute.manager [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:27:52 np0005634017 kernel: tap63cc9218-a4 (unregistering): left promiscuous mode
Feb 28 05:27:52 np0005634017 NetworkManager[49805]: <info>  [1772274472.8239] device (tap63cc9218-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:27:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:52Z|01214|binding|INFO|Releasing lport 63cc9218-a429-4d50-9dad-e3849863cae1 from this chassis (sb_readonly=0)
Feb 28 05:27:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:52Z|01215|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 down in Southbound
Feb 28 05:27:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:52Z|01216|binding|INFO|Removing iface tap63cc9218-a4 ovn-installed in OVS
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:52 np0005634017 nova_compute[243452]: 2026-02-28 10:27:52.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.839 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:27:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.841 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 unbound from our chassis#033[00m
Feb 28 05:27:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.841 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 28 05:27:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00473275-d88c-45fe-b2be-987127b2a9c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:27:52 np0005634017 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 28 05:27:52 np0005634017 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000077.scope: Consumed 2.426s CPU time.
Feb 28 05:27:52 np0005634017 systemd-machined[209480]: Machine qemu-153-instance-00000077 terminated.
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.020 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance destroyed successfully.#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.021 243456 DEBUG nova.objects.instance [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'resources' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.047 243456 DEBUG nova.virt.libvirt.vif [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:50Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.047 243456 DEBUG nova.network.os_vif_util [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.048 243456 DEBUG nova.network.os_vif_util [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.049 243456 DEBUG os_vif [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.052 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63cc9218-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.188 243456 INFO os_vif [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.493 243456 INFO nova.virt.libvirt.driver [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deleting instance files /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e_del#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.494 243456 INFO nova.virt.libvirt.driver [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deletion of /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e_del complete#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.569 243456 INFO nova.compute.manager [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.570 243456 DEBUG oslo.service.loopingcall [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.570 243456 DEBUG nova.compute.manager [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:27:53 np0005634017 nova_compute[243452]: 2026-02-28 10:27:53.571 243456 DEBUG nova.network.neutron [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:27:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 261 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.607 243456 DEBUG nova.network.neutron [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.629 243456 INFO nova.compute.manager [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 1.06 seconds to deallocate network for instance.#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.685 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.686 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.734 243456 DEBUG nova.compute.manager [req-bfac848b-220a-47ef-b2fe-91f5c4bcd5ae req-33e6f533-9aed-4919-82ad-536989a57462 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-deleted-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.775 243456 DEBUG oslo_concurrency.processutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.842 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.843 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.843 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.843 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 WARNING nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:27:54 np0005634017 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 WARNING nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:27:55 np0005634017 podman[346917]: 2026-02-28 10:27:55.009518243 +0000 UTC m=+0.077255223 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 28 05:27:55 np0005634017 podman[346916]: 2026-02-28 10:27:55.04198004 +0000 UTC m=+0.109920786 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41544991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:27:55 np0005634017 nova_compute[243452]: 2026-02-28 10:27:55.357 243456 DEBUG oslo_concurrency.processutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:27:55 np0005634017 nova_compute[243452]: 2026-02-28 10:27:55.362 243456 DEBUG nova.compute.provider_tree [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:27:55 np0005634017 nova_compute[243452]: 2026-02-28 10:27:55.386 243456 DEBUG nova.scheduler.client.report [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:27:55 np0005634017 nova_compute[243452]: 2026-02-28 10:27:55.419 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:27:55 np0005634017 nova_compute[243452]: 2026-02-28 10:27:55.451 243456 INFO nova.scheduler.client.report [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Deleted allocations for instance 45cac133-9af0-462b-928c-05216ae1a68e
Feb 28 05:27:55 np0005634017 nova_compute[243452]: 2026-02-28 10:27:55.540 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:27:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.565624145 +0000 UTC m=+0.047973246 container create 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:27:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 251 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Feb 28 05:27:56 np0005634017 systemd[1]: Started libpod-conmon-35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe.scope.
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.543438919 +0000 UTC m=+0.025788020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:27:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.666827874 +0000 UTC m=+0.149176985 container init 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.675864669 +0000 UTC m=+0.158213720 container start 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.679736078 +0000 UTC m=+0.162085179 container attach 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:27:56 np0005634017 sad_gagarin[347184]: 167 167
Feb 28 05:27:56 np0005634017 systemd[1]: libpod-35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe.scope: Deactivated successfully.
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.683998249 +0000 UTC m=+0.166347340 container died 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:27:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-180daf0a882041d23204523cb7e287395505e9aa79016ac5bfdf269882a57f17-merged.mount: Deactivated successfully.
Feb 28 05:27:56 np0005634017 podman[347168]: 2026-02-28 10:27:56.735627207 +0000 UTC m=+0.217976308 container remove 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:27:56 np0005634017 systemd[1]: libpod-conmon-35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe.scope: Deactivated successfully.
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:27:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:27:56 np0005634017 podman[347207]: 2026-02-28 10:27:56.942771038 +0000 UTC m=+0.051366072 container create f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:27:56 np0005634017 systemd[1]: Started libpod-conmon-f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b.scope.
Feb 28 05:27:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:57 np0005634017 podman[347207]: 2026-02-28 10:27:56.916966669 +0000 UTC m=+0.025561493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:27:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:57 np0005634017 podman[347207]: 2026-02-28 10:27:57.030424343 +0000 UTC m=+0.139019177 container init f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:27:57 np0005634017 podman[347207]: 2026-02-28 10:27:57.038590594 +0000 UTC m=+0.147185358 container start f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:27:57 np0005634017 podman[347207]: 2026-02-28 10:27:57.042518865 +0000 UTC m=+0.151113659 container attach f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:27:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:27:57Z|01217|binding|INFO|Releasing lport cbbfe533-6ee1-4103-ad47-26b6e9271250 from this chassis (sb_readonly=0)
Feb 28 05:27:57 np0005634017 nova_compute[243452]: 2026-02-28 10:27:57.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:57 np0005634017 magical_mcnulty[347223]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:27:57 np0005634017 magical_mcnulty[347223]: --> All data devices are unavailable
Feb 28 05:27:57 np0005634017 systemd[1]: libpod-f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b.scope: Deactivated successfully.
Feb 28 05:27:57 np0005634017 podman[347207]: 2026-02-28 10:27:57.515183045 +0000 UTC m=+0.623777859 container died f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:27:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e-merged.mount: Deactivated successfully.
Feb 28 05:27:57 np0005634017 podman[347207]: 2026-02-28 10:27:57.554910458 +0000 UTC m=+0.663505212 container remove f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:27:57 np0005634017 systemd[1]: libpod-conmon-f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b.scope: Deactivated successfully.
Feb 28 05:27:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:27:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:57.868 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:27:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:58.054 157134 DEBUG eventlet.wsgi.server [-] (157134) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:58.056 157134 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: Accept: */*
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: Connection: close
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: Content-Type: text/plain
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: Host: 169.254.169.254
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: User-Agent: curl/7.84.0
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: X-Forwarded-For: 10.100.0.3
Feb 28 05:27:58 np0005634017 ovn_metadata_agent[156634]: X-Ovn-Network-Id: d755deef-15d2-410a-9b1a-81df70c45c93 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.093353686 +0000 UTC m=+0.041475573 container create 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:27:58 np0005634017 systemd[1]: Started libpod-conmon-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope.
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.071315563 +0000 UTC m=+0.019437470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:27:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:58 np0005634017 nova_compute[243452]: 2026-02-28 10:27:58.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.187559557 +0000 UTC m=+0.135681534 container init 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.19404499 +0000 UTC m=+0.142166907 container start 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.197577639 +0000 UTC m=+0.145699566 container attach 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:27:58 np0005634017 reverent_boyd[347334]: 167 167
Feb 28 05:27:58 np0005634017 systemd[1]: libpod-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope: Deactivated successfully.
Feb 28 05:27:58 np0005634017 conmon[347334]: conmon 85ed8edb92b886a29f63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope/container/memory.events
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.20077744 +0000 UTC m=+0.148899367 container died 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:27:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-75b8eec30c7838712ca6e8ed1df0573ae3f1766fd39f0fd78594b6ed2741b453-merged.mount: Deactivated successfully.
Feb 28 05:27:58 np0005634017 podman[347318]: 2026-02-28 10:27:58.242996482 +0000 UTC m=+0.191118399 container remove 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:27:58 np0005634017 systemd[1]: libpod-conmon-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope: Deactivated successfully.
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.402102576 +0000 UTC m=+0.046299218 container create b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:27:58 np0005634017 systemd[1]: Started libpod-conmon-b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720.scope.
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.380618279 +0000 UTC m=+0.024814971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:27:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.508997744 +0000 UTC m=+0.153194386 container init b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.51909498 +0000 UTC m=+0.163291622 container start b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.522764803 +0000 UTC m=+0.166961485 container attach b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 05:27:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 233 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Feb 28 05:27:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]: {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:    "0": [
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:        {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "devices": [
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "/dev/loop3"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            ],
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_name": "ceph_lv0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_size": "21470642176",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "name": "ceph_lv0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "tags": {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cluster_name": "ceph",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.crush_device_class": "",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.encrypted": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.objectstore": "bluestore",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osd_id": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.type": "block",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.vdo": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.with_tpm": "0"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            },
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "type": "block",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "vg_name": "ceph_vg0"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:        }
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:    ],
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:    "1": [
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:        {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "devices": [
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "/dev/loop4"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            ],
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_name": "ceph_lv1",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_size": "21470642176",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "name": "ceph_lv1",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "tags": {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cluster_name": "ceph",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.crush_device_class": "",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.encrypted": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.objectstore": "bluestore",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osd_id": "1",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.type": "block",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.vdo": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.with_tpm": "0"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            },
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "type": "block",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "vg_name": "ceph_vg1"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:        }
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:    ],
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:    "2": [
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:        {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "devices": [
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "/dev/loop5"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            ],
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_name": "ceph_lv2",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_size": "21470642176",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "name": "ceph_lv2",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "tags": {
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.cluster_name": "ceph",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.crush_device_class": "",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.encrypted": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.objectstore": "bluestore",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osd_id": "2",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.type": "block",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.vdo": "0",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:                "ceph.with_tpm": "0"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            },
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "type": "block",
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:            "vg_name": "ceph_vg2"
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:        }
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]:    ]
Feb 28 05:27:58 np0005634017 gifted_sinoussi[347374]: }
Feb 28 05:27:58 np0005634017 systemd[1]: libpod-b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720.scope: Deactivated successfully.
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.88269895 +0000 UTC m=+0.526895602 container died b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:27:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4-merged.mount: Deactivated successfully.
Feb 28 05:27:58 np0005634017 podman[347358]: 2026-02-28 10:27:58.920575949 +0000 UTC m=+0.564772581 container remove b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:27:58 np0005634017 systemd[1]: libpod-conmon-b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720.scope: Deactivated successfully.
Feb 28 05:27:59 np0005634017 nova_compute[243452]: 2026-02-28 10:27:59.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.360799843 +0000 UTC m=+0.036312196 container create 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:27:59 np0005634017 systemd[1]: Started libpod-conmon-46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128.scope.
Feb 28 05:27:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.421459687 +0000 UTC m=+0.096972070 container init 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.426762366 +0000 UTC m=+0.102274719 container start 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.429490324 +0000 UTC m=+0.105002707 container attach 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:27:59 np0005634017 gallant_almeida[347474]: 167 167
Feb 28 05:27:59 np0005634017 systemd[1]: libpod-46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128.scope: Deactivated successfully.
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.431779068 +0000 UTC m=+0.107291421 container died 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.346652964 +0000 UTC m=+0.022165327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:27:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0449ea6c78b4c2b5682346f18d804c7686521facf7059a4254a658c8ea4ca67c-merged.mount: Deactivated successfully.
Feb 28 05:27:59 np0005634017 podman[347457]: 2026-02-28 10:27:59.474901036 +0000 UTC m=+0.150413389 container remove 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:27:59 np0005634017 systemd[1]: libpod-conmon-46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128.scope: Deactivated successfully.
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.509 157134 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.510 157134 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4544373#033[00m
Feb 28 05:27:59 np0005634017 haproxy-metadata-proxy-d755deef-15d2-410a-9b1a-81df70c45c93[346624]: 10.100.0.3:56324 [28/Feb/2026:10:27:58.052] listener listener/metadata 0/0/0/1458/1458 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Feb 28 05:27:59 np0005634017 podman[347498]: 2026-02-28 10:27:59.627860656 +0000 UTC m=+0.047454931 container create 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.635 157134 DEBUG eventlet.wsgi.server [-] (157134) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.636 157134 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: Accept: */*#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: Connection: close#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: Content-Length: 100#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: Content-Type: application/x-www-form-urlencoded#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: Host: 169.254.169.254#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: User-Agent: curl/7.84.0#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: X-Forwarded-For: 10.100.0.3#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: X-Ovn-Network-Id: d755deef-15d2-410a-9b1a-81df70c45c93#015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: #015
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Feb 28 05:27:59 np0005634017 systemd[1]: Started libpod-conmon-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope.
Feb 28 05:27:59 np0005634017 podman[347498]: 2026-02-28 10:27:59.60886585 +0000 UTC m=+0.028460155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:27:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:27:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:27:59 np0005634017 podman[347498]: 2026-02-28 10:27:59.731339889 +0000 UTC m=+0.150934174 container init 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:27:59 np0005634017 podman[347498]: 2026-02-28 10:27:59.73987564 +0000 UTC m=+0.159469905 container start 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:27:59 np0005634017 podman[347498]: 2026-02-28 10:27:59.743988626 +0000 UTC m=+0.163582951 container attach 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.898 157134 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Feb 28 05:27:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.898 157134 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2619910#033[00m
Feb 28 05:27:59 np0005634017 haproxy-metadata-proxy-d755deef-15d2-410a-9b1a-81df70c45c93[346624]: 10.100.0.3:56328 [28/Feb/2026:10:27:59.634] listener listener/metadata 0/0/0/264/264 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:28:00 np0005634017 lvm[347592]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:28:00 np0005634017 lvm[347592]: VG ceph_vg1 finished
Feb 28 05:28:00 np0005634017 lvm[347593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:28:00 np0005634017 lvm[347593]: VG ceph_vg0 finished
Feb 28 05:28:00 np0005634017 lvm[347595]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:28:00 np0005634017 lvm[347595]: VG ceph_vg2 finished
Feb 28 05:28:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 233 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Feb 28 05:28:00 np0005634017 relaxed_lederberg[347514]: {}
Feb 28 05:28:00 np0005634017 systemd[1]: libpod-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope: Deactivated successfully.
Feb 28 05:28:00 np0005634017 systemd[1]: libpod-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope: Consumed 1.301s CPU time.
Feb 28 05:28:00 np0005634017 podman[347498]: 2026-02-28 10:28:00.636494615 +0000 UTC m=+1.056088880 container died 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:28:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff-merged.mount: Deactivated successfully.
Feb 28 05:28:00 np0005634017 podman[347498]: 2026-02-28 10:28:00.67739245 +0000 UTC m=+1.096986745 container remove 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:28:00 np0005634017 systemd[1]: libpod-conmon-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope: Deactivated successfully.
Feb 28 05:28:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:28:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:28:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:28:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:28:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:28:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.922 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.923 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.925 243456 INFO nova.compute.manager [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Terminating instance#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.927 243456 DEBUG nova.compute.manager [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:28:01 np0005634017 kernel: tapa4f4f33b-d0 (unregistering): left promiscuous mode
Feb 28 05:28:01 np0005634017 NetworkManager[49805]: <info>  [1772274481.9847] device (tapa4f4f33b-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:01Z|01218|binding|INFO|Releasing lport a4f4f33b-d010-42c3-9963-b0602fd11558 from this chassis (sb_readonly=0)
Feb 28 05:28:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:01Z|01219|binding|INFO|Setting lport a4f4f33b-d010-42c3-9963-b0602fd11558 down in Southbound
Feb 28 05:28:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:01Z|01220|binding|INFO|Removing iface tapa4f4f33b-d0 ovn-installed in OVS
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:01 np0005634017 nova_compute[243452]: 2026-02-28 10:28:01.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.003 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.004 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:1f:18 10.100.0.3'], port_security=['fa:16:3e:be:1f:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '502b3848-9702-4288-860e-d9b13ab3b047', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d755deef-15d2-410a-9b1a-81df70c45c93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b907eb5634054c23999a514f3cbfbc23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fc58a0a-8070-4472-9d4a-0833b80c1776 d0518d2f-a440-4fc3-9c12-d503c74451c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f692542-8d39-4073-b698-c331b927e5a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a4f4f33b-d010-42c3-9963-b0602fd11558) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.006 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a4f4f33b-d010-42c3-9963-b0602fd11558 in datapath d755deef-15d2-410a-9b1a-81df70c45c93 unbound from our chassis#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.008 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d755deef-15d2-410a-9b1a-81df70c45c93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.010 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c690f62-edd7-4e64-8569-f6a4d19c9582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.010 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 namespace which is not needed anymore#033[00m
Feb 28 05:28:02 np0005634017 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Feb 28 05:28:02 np0005634017 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 13.794s CPU time.
Feb 28 05:28:02 np0005634017 systemd-machined[209480]: Machine qemu-151-instance-00000078 terminated.
Feb 28 05:28:02 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : haproxy version is 2.8.14-c23fe91
Feb 28 05:28:02 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : path to executable is /usr/sbin/haproxy
Feb 28 05:28:02 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [WARNING]  (346622) : Exiting Master process...
Feb 28 05:28:02 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [WARNING]  (346622) : Exiting Master process...
Feb 28 05:28:02 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [ALERT]    (346622) : Current worker (346624) exited with code 143 (Terminated)
Feb 28 05:28:02 np0005634017 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [WARNING]  (346622) : All workers exited. Exiting... (0)
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 systemd[1]: libpod-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope: Deactivated successfully.
Feb 28 05:28:02 np0005634017 conmon[346618]: conmon e91af430cd6112e840b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope/container/memory.events
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 podman[347657]: 2026-02-28 10:28:02.166028986 +0000 UTC m=+0.044994732 container died e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.178 243456 INFO nova.virt.libvirt.driver [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance destroyed successfully.#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.179 243456 DEBUG nova.objects.instance [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lazy-loading 'resources' on Instance uuid 502b3848-9702-4288-860e-d9b13ab3b047 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:28:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2-userdata-shm.mount: Deactivated successfully.
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.200 243456 DEBUG nova.virt.libvirt.vif [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-952479761',display_name='tempest-TestServerBasicOps-server-952479761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-952479761',id=120,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxvwaoaZS6lxY+qm3SVwVdpmr4odFer5lT4S2h//UYF7wFrY/sNYSd6hzwRrmh2VB5KT4fELzlhq046tWMx92gixHEtTbsSYNmG1SF9Z5rksMEf2+FpLLjssyHNY9JdaA==',key_name='tempest-TestServerBasicOps-1169969291',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b907eb5634054c23999a514f3cbfbc23',ramdisk_id='',reservation_id='r-1m2lpc4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-267277269',owner_user_name='tempest-TestServerBasicOps-267277269-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9dd03f07d754030bedc45ef75a2ceb8',uuid=502b3848-9702-4288-860e-d9b13ab3b047,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:28:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c7740129404eead1c6a7f09e8f10ed3317b876a311baeb37869bca63dec79ce5-merged.mount: Deactivated successfully.
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.201 243456 DEBUG nova.network.os_vif_util [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converting VIF {"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.202 243456 DEBUG nova.network.os_vif_util [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.202 243456 DEBUG os_vif [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.205 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4f4f33b-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:28:02 np0005634017 podman[347657]: 2026-02-28 10:28:02.211620693 +0000 UTC m=+0.090586439 container cleanup e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.212 243456 INFO os_vif [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0')#033[00m
Feb 28 05:28:02 np0005634017 systemd[1]: libpod-conmon-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope: Deactivated successfully.
Feb 28 05:28:02 np0005634017 podman[347704]: 2026-02-28 10:28:02.283184255 +0000 UTC m=+0.050273651 container remove e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.287 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65e99148-edc0-4212-958e-38cad2659787]: (4, ('Sat Feb 28 10:28:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 (e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2)\ne91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2\nSat Feb 28 10:28:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 (e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2)\ne91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9e22af-288c-44a8-9d6c-cbf9cdd111d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.291 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd755deef-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 kernel: tapd755deef-10: left promiscuous mode
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6f9fad-20f1-48cb-b713-198c3c830df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.315 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[40558cc2-2a64-49b8-820b-42a26da3839d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.317 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caa8b186-f7a7-4b4f-8b99-0e3ea0c7117b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd23dd1-46f5-49bd-b2d6-74c467969d17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596338, 'reachable_time': 21578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347730, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.331 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:28:02 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.331 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1c15d443-f9f9-4160-96ca-5c2f44274d64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:02 np0005634017 systemd[1]: run-netns-ovnmeta\x2dd755deef\x2d15d2\x2d410a\x2d9b1a\x2d81df70c45c93.mount: Deactivated successfully.
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.349 243456 DEBUG nova.compute.manager [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-unplugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.349 243456 DEBUG oslo_concurrency.lockutils [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG oslo_concurrency.lockutils [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG oslo_concurrency.lockutils [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG nova.compute.manager [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] No waiting events found dispatching network-vif-unplugged-a4f4f33b-d010-42c3-9963-b0602fd11558 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG nova.compute.manager [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-unplugged-a4f4f33b-d010-42c3-9963-b0602fd11558 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.510 243456 INFO nova.virt.libvirt.driver [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deleting instance files /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047_del#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.511 243456 INFO nova.virt.libvirt.driver [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deletion of /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047_del complete#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.577 243456 INFO nova.compute.manager [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.578 243456 DEBUG oslo.service.loopingcall [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.578 243456 DEBUG nova.compute.manager [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:28:02 np0005634017 nova_compute[243452]: 2026-02-28 10:28:02.578 243456 DEBUG nova.network.neutron [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:28:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 198 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 833 KiB/s wr, 66 op/s
Feb 28 05:28:03 np0005634017 nova_compute[243452]: 2026-02-28 10:28:03.493 243456 DEBUG nova.network.neutron [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:28:03 np0005634017 nova_compute[243452]: 2026-02-28 10:28:03.516 243456 INFO nova.compute.manager [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 0.94 seconds to deallocate network for instance.#033[00m
Feb 28 05:28:03 np0005634017 nova_compute[243452]: 2026-02-28 10:28:03.614 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:03 np0005634017 nova_compute[243452]: 2026-02-28 10:28:03.614 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:03 np0005634017 nova_compute[243452]: 2026-02-28 10:28:03.653 243456 DEBUG nova.compute.manager [req-cf723f97-3336-46c6-97bd-921d1e001c2b req-77c5f323-6f96-4fab-b385-66f1e264f26c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-deleted-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:03 np0005634017 nova_compute[243452]: 2026-02-28 10:28:03.691 243456 DEBUG oslo_concurrency.processutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:28:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335962218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.227 243456 DEBUG oslo_concurrency.processutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.233 243456 DEBUG nova.compute.provider_tree [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.251 243456 DEBUG nova.scheduler.client.report [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.270 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.311 243456 INFO nova.scheduler.client.report [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Deleted allocations for instance 502b3848-9702-4288-860e-d9b13ab3b047#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.365 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.465 243456 DEBUG nova.compute.manager [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.465 243456 DEBUG oslo_concurrency.lockutils [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 DEBUG oslo_concurrency.lockutils [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 DEBUG oslo_concurrency.lockutils [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 DEBUG nova.compute.manager [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] No waiting events found dispatching network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:28:04 np0005634017 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 WARNING nova.compute.manager [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received unexpected event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:28:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 168 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 50 op/s
Feb 28 05:28:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 153 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 18 KiB/s wr, 47 op/s
Feb 28 05:28:07 np0005634017 nova_compute[243452]: 2026-02-28 10:28:07.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:08 np0005634017 nova_compute[243452]: 2026-02-28 10:28:08.018 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274473.0173407, 45cac133-9af0-462b-928c-05216ae1a68e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:28:08 np0005634017 nova_compute[243452]: 2026-02-28 10:28:08.019 243456 INFO nova.compute.manager [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:28:08 np0005634017 nova_compute[243452]: 2026-02-28 10:28:08.041 243456 DEBUG nova.compute.manager [None req-db1ccbd7-b97a-4f48-9245-b42e19fc006d - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:28:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 45 op/s
Feb 28 05:28:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:09 np0005634017 nova_compute[243452]: 2026-02-28 10:28:09.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:09 np0005634017 nova_compute[243452]: 2026-02-28 10:28:09.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:09 np0005634017 nova_compute[243452]: 2026-02-28 10:28:09.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Feb 28 05:28:12 np0005634017 nova_compute[243452]: 2026-02-28 10:28:12.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Feb 28 05:28:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:14 np0005634017 nova_compute[243452]: 2026-02-28 10:28:14.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Feb 28 05:28:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 8 op/s
Feb 28 05:28:17 np0005634017 nova_compute[243452]: 2026-02-28 10:28:17.176 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274482.1736732, 502b3848-9702-4288-860e-d9b13ab3b047 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:28:17 np0005634017 nova_compute[243452]: 2026-02-28 10:28:17.176 243456 INFO nova.compute.manager [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:28:17 np0005634017 nova_compute[243452]: 2026-02-28 10:28:17.202 243456 DEBUG nova.compute.manager [None req-43650dd4-500f-465b-a1f4-e72f7cd21541 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:28:17 np0005634017 nova_compute[243452]: 2026-02-28 10:28:17.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:19 np0005634017 nova_compute[243452]: 2026-02-28 10:28:19.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:21.237 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:28:21 np0005634017 nova_compute[243452]: 2026-02-28 10:28:21.237 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:21.239 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:28:22 np0005634017 nova_compute[243452]: 2026-02-28 10:28:22.218 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:23.241 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:24 np0005634017 nova_compute[243452]: 2026-02-28 10:28:24.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:25 np0005634017 podman[347756]: 2026-02-28 10:28:25.124054165 +0000 UTC m=+0.061693924 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:28:25 np0005634017 podman[347757]: 2026-02-28 10:28:25.14372168 +0000 UTC m=+0.078947941 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 05:28:26 np0005634017 nova_compute[243452]: 2026-02-28 10:28:26.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:27 np0005634017 nova_compute[243452]: 2026-02-28 10:28:27.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:28:29
Feb 28 05:28:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:28:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:28:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta']
Feb 28 05:28:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:28:29 np0005634017 nova_compute[243452]: 2026-02-28 10:28:29.171 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:28:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:28:31 np0005634017 nova_compute[243452]: 2026-02-28 10:28:31.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:31 np0005634017 nova_compute[243452]: 2026-02-28 10:28:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:31 np0005634017 nova_compute[243452]: 2026-02-28 10:28:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:31 np0005634017 nova_compute[243452]: 2026-02-28 10:28:31.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:32 np0005634017 nova_compute[243452]: 2026-02-28 10:28:32.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:34 np0005634017 nova_compute[243452]: 2026-02-28 10:28:34.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:35 np0005634017 nova_compute[243452]: 2026-02-28 10:28:35.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:36 np0005634017 nova_compute[243452]: 2026-02-28 10:28:36.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:36 np0005634017 nova_compute[243452]: 2026-02-28 10:28:36.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:28:36 np0005634017 nova_compute[243452]: 2026-02-28 10:28:36.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:28:36 np0005634017 nova_compute[243452]: 2026-02-28 10:28:36.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:28:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:37 np0005634017 nova_compute[243452]: 2026-02-28 10:28:37.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:37 np0005634017 nova_compute[243452]: 2026-02-28 10:28:37.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:37 np0005634017 nova_compute[243452]: 2026-02-28 10:28:37.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.352 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:28:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509776450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:28:38 np0005634017 nova_compute[243452]: 2026-02-28 10:28:38.945 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.142 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.144 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3703MB free_disk=59.9874847875908GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.220 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.243 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:28:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1359593659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.830 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.839 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.861 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.890 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:28:39 np0005634017 nova_compute[243452]: 2026-02-28 10:28:39.891 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3276966644185762e-05 of space, bias 1.0, pg target 0.003983089993255729 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493568655025092 of space, bias 1.0, pg target 0.7480705965075276 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.368976920658764e-07 of space, bias 4.0, pg target 0.0008842772304790517 quantized to 16 (current 16)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:28:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.840 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.840 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.861 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.938 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.939 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.949 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:28:41 np0005634017 nova_compute[243452]: 2026-02-28 10:28:41.950 243456 INFO nova.compute.claims [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.056 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:28:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:28:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108711752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.703 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.710 243456 DEBUG nova.compute.provider_tree [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.734 243456 DEBUG nova.scheduler.client.report [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.773 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.774 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.861 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.862 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.883 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:28:42 np0005634017 nova_compute[243452]: 2026-02-28 10:28:42.903 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.007 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.009 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.009 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Creating image(s)#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.046 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.085 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.120 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.126 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.214 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.215 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.215 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.216 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.243 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.248 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 360effe7-8380-410d-a5b8-59c28fa4a75a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.513 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 360effe7-8380-410d-a5b8-59c28fa4a75a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.603 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] resizing rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.654 243456 DEBUG nova.policy [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f54beab12fce4ee8adf80742bf33b916', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '344dd946e14146ab93c01183964c71b3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.704 243456 DEBUG nova.objects.instance [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'migration_context' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.730 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.731 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Ensure instance console log exists: /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.732 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.733 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:43 np0005634017 nova_compute[243452]: 2026-02-28 10:28:43.733 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:44 np0005634017 nova_compute[243452]: 2026-02-28 10:28:44.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 165 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 396 KiB/s wr, 23 op/s
Feb 28 05:28:45 np0005634017 nova_compute[243452]: 2026-02-28 10:28:45.345 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Successfully created port: 89352e9c-3fec-48bc-a264-6de98ec910c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:28:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:28:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102776912' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:28:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:28:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102776912' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.470 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Successfully updated port: 89352e9c-3fec-48bc-a264-6de98ec910c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.486 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.486 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.487 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.567 243456 DEBUG nova.compute.manager [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.568 243456 DEBUG nova.compute.manager [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing instance network info cache due to event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.568 243456 DEBUG oslo_concurrency.lockutils [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:28:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 179 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 629 KiB/s wr, 25 op/s
Feb 28 05:28:46 np0005634017 nova_compute[243452]: 2026-02-28 10:28:46.668 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.873 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.895 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.896 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance network_info: |[{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.896 243456 DEBUG oslo_concurrency.lockutils [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.896 243456 DEBUG nova.network.neutron [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.898 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start _get_guest_xml network_info=[{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.902 243456 WARNING nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.906 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.907 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.911 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.911 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.911 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.913 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.913 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:28:47 np0005634017 nova_compute[243452]: 2026-02-28 10:28:47.915 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:28:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233154156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:28:48 np0005634017 nova_compute[243452]: 2026-02-28 10:28:48.538 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:48 np0005634017 nova_compute[243452]: 2026-02-28 10:28:48.563 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:48 np0005634017 nova_compute[243452]: 2026-02-28 10:28:48.568 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:28:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:28:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26869461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.152 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.154 243456 DEBUG nova.virt.libvirt.vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:28:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1297881492',display_name='tempest-TestSnapshotPattern-server-1297881492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1297881492',id=121,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-yvdubwju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:28:42Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=360effe7-8380-410d-a5b8-59c28fa4a75a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.154 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.155 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.156 243456 DEBUG nova.objects.instance [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.172 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <uuid>360effe7-8380-410d-a5b8-59c28fa4a75a</uuid>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <name>instance-00000079</name>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestSnapshotPattern-server-1297881492</nova:name>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:28:47</nova:creationTime>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:user uuid="f54beab12fce4ee8adf80742bf33b916">tempest-TestSnapshotPattern-2121060882-project-member</nova:user>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:project uuid="344dd946e14146ab93c01183964c71b3">tempest-TestSnapshotPattern-2121060882</nova:project>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <nova:port uuid="89352e9c-3fec-48bc-a264-6de98ec910c3">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <entry name="serial">360effe7-8380-410d-a5b8-59c28fa4a75a</entry>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <entry name="uuid">360effe7-8380-410d-a5b8-59c28fa4a75a</entry>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/360effe7-8380-410d-a5b8-59c28fa4a75a_disk">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:ae:b8:e7"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <target dev="tap89352e9c-3f"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/console.log" append="off"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:28:49 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:28:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:28:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:28:49 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.173 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Preparing to wait for external event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.173 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.174 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.174 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.174 243456 DEBUG nova.virt.libvirt.vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:28:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1297881492',display_name='tempest-TestSnapshotPattern-server-1297881492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1297881492',id=121,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-yvdubwju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:28:42Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=360effe7-8380-410d-a5b8-59c28fa4a75a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.175 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.175 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.175 243456 DEBUG os_vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.176 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.177 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.180 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.180 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89352e9c-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.181 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89352e9c-3f, col_values=(('external_ids', {'iface-id': '89352e9c-3fec-48bc-a264-6de98ec910c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:b8:e7', 'vm-uuid': '360effe7-8380-410d-a5b8-59c28fa4a75a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.181 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:49 np0005634017 NetworkManager[49805]: <info>  [1772274529.1837] manager: (tap89352e9c-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.190 243456 INFO os_vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f')#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.232 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.233 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.233 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No VIF found with MAC fa:16:3e:ae:b8:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.233 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Using config drive#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.258 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.265 243456 DEBUG nova.network.neutron [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated VIF entry in instance network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.265 243456 DEBUG nova.network.neutron [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.288 243456 DEBUG oslo_concurrency.lockutils [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.768 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Creating config drive at /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.774 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpejgn27iq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.919 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpejgn27iq" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.961 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:28:49 np0005634017 nova_compute[243452]: 2026-02-28 10:28:49.967 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.148 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.149 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deleting local config drive /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config because it was imported into RBD.#033[00m
Feb 28 05:28:50 np0005634017 kernel: tap89352e9c-3f: entered promiscuous mode
Feb 28 05:28:50 np0005634017 NetworkManager[49805]: <info>  [1772274530.2122] manager: (tap89352e9c-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Feb 28 05:28:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:50Z|01221|binding|INFO|Claiming lport 89352e9c-3fec-48bc-a264-6de98ec910c3 for this chassis.
Feb 28 05:28:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:50Z|01222|binding|INFO|89352e9c-3fec-48bc-a264-6de98ec910c3: Claiming fa:16:3e:ae:b8:e7 10.100.0.7
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.234 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b8:e7 10.100.0.7'], port_security=['fa:16:3e:ae:b8:e7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '360effe7-8380-410d-a5b8-59c28fa4a75a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=89352e9c-3fec-48bc-a264-6de98ec910c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.236 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 89352e9c-3fec-48bc-a264-6de98ec910c3 in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 bound to our chassis#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.237 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 143add3f-ffeb-40fb-88e5-0af28b700615#033[00m
Feb 28 05:28:50 np0005634017 systemd-udevd[348175]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:28:50 np0005634017 systemd-machined[209480]: New machine qemu-154-instance-00000079.
Feb 28 05:28:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:50Z|01223|binding|INFO|Setting lport 89352e9c-3fec-48bc-a264-6de98ec910c3 ovn-installed in OVS
Feb 28 05:28:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:50Z|01224|binding|INFO|Setting lport 89352e9c-3fec-48bc-a264-6de98ec910c3 up in Southbound
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 NetworkManager[49805]: <info>  [1772274530.2561] device (tap89352e9c-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:28:50 np0005634017 NetworkManager[49805]: <info>  [1772274530.2568] device (tap89352e9c-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.253 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31790f84-21c6-4065-861e-d09048d3d4c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.255 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap143add3f-f1 in ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:28:50 np0005634017 systemd[1]: Started Virtual Machine qemu-154-instance-00000079.
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.258 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap143add3f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eea448e1-a407-4662-a8e8-6cf1063ca348]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.259 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db518d-1f64-4520-ae58-5d373186bf83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.271 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a441bb66-64bc-4cb0-9859-70e7bbcc583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[142291d0-35d5-4176-a913-4012696aefa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.327 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[942904e9-419f-4b6f-8c9a-320213f9b9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e281e7ff-3c2c-48d2-ab6e-406aaa471106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 NetworkManager[49805]: <info>  [1772274530.3352] manager: (tap143add3f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.366 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1195cd9-95b6-4325-8942-ce710607433e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.371 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c71c8efe-e601-41d5-b9c8-e25b7648692a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 NetworkManager[49805]: <info>  [1772274530.3985] device (tap143add3f-f0): carrier: link connected
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.403 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72c6cdfb-13f7-40ed-9e0f-072f0e11c6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.419 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d50d662-5cf8-4f09-92b1-b2ed0094d245]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348208, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.435 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cbb174-06fc-4beb-bfaf-ccf8c56d2885]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:227'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603768, 'tstamp': 603768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348209, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7684ea9-cf81-474e-ae5e-4cd38c467e78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348210, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.496 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9d282b-f7d3-4572-be70-6a9f7450696f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[251d4645-9429-4b50-86db-bce0c418d74f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap143add3f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:50 np0005634017 NetworkManager[49805]: <info>  [1772274530.5687] manager: (tap143add3f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Feb 28 05:28:50 np0005634017 kernel: tap143add3f-f0: entered promiscuous mode
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.571 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap143add3f-f0, col_values=(('external_ids', {'iface-id': '57c5c2bd-8863-4c9b-bde1-322e4dae0a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:28:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:50Z|01225|binding|INFO|Releasing lport 57c5c2bd-8863-4c9b-bde1-322e4dae0a4d from this chassis (sb_readonly=0)
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.574 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.575 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/143add3f-ffeb-40fb-88e5-0af28b700615.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/143add3f-ffeb-40fb-88e5-0af28b700615.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[344ee94c-d8c3-4e4e-b96f-9880a95b7c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.577 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/143add3f-ffeb-40fb-88e5-0af28b700615.pid.haproxy
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:28:50 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.577 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'env', 'PROCESS_TAG=haproxy-143add3f-ffeb-40fb-88e5-0af28b700615', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/143add3f-ffeb-40fb-88e5-0af28b700615.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.629 243456 DEBUG nova.compute.manager [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.630 243456 DEBUG oslo_concurrency.lockutils [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.630 243456 DEBUG oslo_concurrency.lockutils [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.630 243456 DEBUG oslo_concurrency.lockutils [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:50 np0005634017 nova_compute[243452]: 2026-02-28 10:28:50.631 243456 DEBUG nova.compute.manager [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Processing event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:28:50 np0005634017 podman[348242]: 2026-02-28 10:28:50.913479057 +0000 UTC m=+0.054948303 container create 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:28:50 np0005634017 systemd[1]: Started libpod-conmon-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383.scope.
Feb 28 05:28:50 np0005634017 podman[348242]: 2026-02-28 10:28:50.880949039 +0000 UTC m=+0.022418325 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:28:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:28:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939fb4b0dac1cbd7c3f3f65d2ef77a6441361f4baacd11851b6ce732b973b040/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:28:51 np0005634017 podman[348242]: 2026-02-28 10:28:51.013951785 +0000 UTC m=+0.155421031 container init 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:28:51 np0005634017 podman[348242]: 2026-02-28 10:28:51.021355154 +0000 UTC m=+0.162824400 container start 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:28:51 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : New worker (348263) forked
Feb 28 05:28:51 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : Loading success.
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.862 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274531.8621616, 360effe7-8380-410d-a5b8-59c28fa4a75a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.863 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Started (Lifecycle Event)#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.866 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.872 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.876 243456 INFO nova.virt.libvirt.driver [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance spawned successfully.#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.877 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.890 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.894 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.914 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.915 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274531.8624976, 360effe7-8380-410d-a5b8-59c28fa4a75a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.916 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.921 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.922 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.922 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.923 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.924 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.925 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.937 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.942 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274531.8708794, 360effe7-8380-410d-a5b8-59c28fa4a75a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.984 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.993 243456 INFO nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 8.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:28:51 np0005634017 nova_compute[243452]: 2026-02-28 10:28:51.994 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.007 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.196 243456 INFO nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 10.28 seconds to build instance.#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.215 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.755 243456 DEBUG nova.compute.manager [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.755 243456 DEBUG oslo_concurrency.lockutils [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.755 243456 DEBUG oslo_concurrency.lockutils [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.756 243456 DEBUG oslo_concurrency.lockutils [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.756 243456 DEBUG nova.compute.manager [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] No waiting events found dispatching network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:28:52 np0005634017 nova_compute[243452]: 2026-02-28 10:28:52.756 243456 WARNING nova.compute.manager [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received unexpected event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:28:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:54 np0005634017 nova_compute[243452]: 2026-02-28 10:28:54.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Feb 28 05:28:56 np0005634017 podman[348314]: 2026-02-28 10:28:56.159971192 +0000 UTC m=+0.088705457 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 05:28:56 np0005634017 podman[348315]: 2026-02-28 10:28:56.17547726 +0000 UTC m=+0.096613040 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:28:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Feb 28 05:28:56 np0005634017 nova_compute[243452]: 2026-02-28 10:28:56.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:56 np0005634017 NetworkManager[49805]: <info>  [1772274536.7066] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Feb 28 05:28:56 np0005634017 NetworkManager[49805]: <info>  [1772274536.7081] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Feb 28 05:28:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:28:56Z|01226|binding|INFO|Releasing lport 57c5c2bd-8863-4c9b-bde1-322e4dae0a4d from this chassis (sb_readonly=0)
Feb 28 05:28:56 np0005634017 nova_compute[243452]: 2026-02-28 10:28:56.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:56 np0005634017 nova_compute[243452]: 2026-02-28 10:28:56.765 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:28:57 np0005634017 nova_compute[243452]: 2026-02-28 10:28:57.518 243456 DEBUG nova.compute.manager [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:28:57 np0005634017 nova_compute[243452]: 2026-02-28 10:28:57.518 243456 DEBUG nova.compute.manager [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing instance network info cache due to event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:28:57 np0005634017 nova_compute[243452]: 2026-02-28 10:28:57.519 243456 DEBUG oslo_concurrency.lockutils [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:28:57 np0005634017 nova_compute[243452]: 2026-02-28 10:28:57.519 243456 DEBUG oslo_concurrency.lockutils [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:28:57 np0005634017 nova_compute[243452]: 2026-02-28 10:28:57.519 243456 DEBUG nova.network.neutron [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:28:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:57.868 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:28:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:28:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:28:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:28:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 74 op/s
Feb 28 05:28:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:28:59 np0005634017 nova_compute[243452]: 2026-02-28 10:28:59.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:00 np0005634017 nova_compute[243452]: 2026-02-28 10:29:00.200 243456 DEBUG nova.network.neutron [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated VIF entry in instance network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:29:00 np0005634017 nova_compute[243452]: 2026-02-28 10:29:00.201 243456 DEBUG nova.network.neutron [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:29:00 np0005634017 nova_compute[243452]: 2026-02-28 10:29:00.236 243456 DEBUG oslo_concurrency.lockutils [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:29:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:29:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:29:01 np0005634017 podman[348503]: 2026-02-28 10:29:01.93233191 +0000 UTC m=+0.066712155 container create 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:29:01 np0005634017 systemd[1]: Started libpod-conmon-4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6.scope.
Feb 28 05:29:01 np0005634017 podman[348503]: 2026-02-28 10:29:01.892136425 +0000 UTC m=+0.026516730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:29:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:29:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 05:29:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:29:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:29:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:29:02 np0005634017 podman[348503]: 2026-02-28 10:29:02.035964657 +0000 UTC m=+0.170344922 container init 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:29:02 np0005634017 podman[348503]: 2026-02-28 10:29:02.047284427 +0000 UTC m=+0.181664672 container start 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:29:02 np0005634017 podman[348503]: 2026-02-28 10:29:02.052683269 +0000 UTC m=+0.187063524 container attach 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb 28 05:29:02 np0005634017 stupefied_grothendieck[348520]: 167 167
Feb 28 05:29:02 np0005634017 systemd[1]: libpod-4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6.scope: Deactivated successfully.
Feb 28 05:29:02 np0005634017 podman[348503]: 2026-02-28 10:29:02.070810351 +0000 UTC m=+0.205190606 container died 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:29:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5a8b232f25abe1920265b1188db3c529006e5db8f3a0363cbed0d273a20ed7ac-merged.mount: Deactivated successfully.
Feb 28 05:29:02 np0005634017 podman[348503]: 2026-02-28 10:29:02.122280585 +0000 UTC m=+0.256660810 container remove 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:29:02 np0005634017 systemd[1]: libpod-conmon-4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6.scope: Deactivated successfully.
Feb 28 05:29:02 np0005634017 podman[348542]: 2026-02-28 10:29:02.312300212 +0000 UTC m=+0.056007383 container create 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 28 05:29:02 np0005634017 systemd[1]: Started libpod-conmon-0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545.scope.
Feb 28 05:29:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:29:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:02 np0005634017 podman[348542]: 2026-02-28 10:29:02.287188243 +0000 UTC m=+0.030895484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:29:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:02 np0005634017 podman[348542]: 2026-02-28 10:29:02.402954063 +0000 UTC m=+0.146661234 container init 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:29:02 np0005634017 podman[348542]: 2026-02-28 10:29:02.409388084 +0000 UTC m=+0.153095275 container start 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 05:29:02 np0005634017 podman[348542]: 2026-02-28 10:29:02.414829968 +0000 UTC m=+0.158537119 container attach 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:29:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 212 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 893 KiB/s wr, 87 op/s
Feb 28 05:29:02 np0005634017 determined_cori[348559]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:29:02 np0005634017 determined_cori[348559]: --> All data devices are unavailable
Feb 28 05:29:02 np0005634017 systemd[1]: libpod-0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545.scope: Deactivated successfully.
Feb 28 05:29:02 np0005634017 podman[348542]: 2026-02-28 10:29:02.955597881 +0000 UTC m=+0.699305052 container died 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:29:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e-merged.mount: Deactivated successfully.
Feb 28 05:29:03 np0005634017 podman[348542]: 2026-02-28 10:29:03.012672583 +0000 UTC m=+0.756379764 container remove 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:29:03 np0005634017 systemd[1]: libpod-conmon-0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545.scope: Deactivated successfully.
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.427400527 +0000 UTC m=+0.032462988 container create 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:29:03 np0005634017 systemd[1]: Started libpod-conmon-3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445.scope.
Feb 28 05:29:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.502884609 +0000 UTC m=+0.107947110 container init 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.412133346 +0000 UTC m=+0.017195827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.511234525 +0000 UTC m=+0.116296986 container start 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:29:03 np0005634017 adoring_black[348670]: 167 167
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.515177716 +0000 UTC m=+0.120240217 container attach 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:29:03 np0005634017 systemd[1]: libpod-3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445.scope: Deactivated successfully.
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.516204095 +0000 UTC m=+0.121266556 container died 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:29:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-46f632a377a61946bf067db8effed746ad51a63442cbb63af10445237b9c2815-merged.mount: Deactivated successfully.
Feb 28 05:29:03 np0005634017 podman[348653]: 2026-02-28 10:29:03.559155008 +0000 UTC m=+0.164217469 container remove 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 28 05:29:03 np0005634017 systemd[1]: libpod-conmon-3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445.scope: Deactivated successfully.
Feb 28 05:29:03 np0005634017 podman[348694]: 2026-02-28 10:29:03.744685309 +0000 UTC m=+0.076719608 container create 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:29:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:03Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:b8:e7 10.100.0.7
Feb 28 05:29:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:03Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:b8:e7 10.100.0.7
Feb 28 05:29:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:03 np0005634017 systemd[1]: Started libpod-conmon-6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f.scope.
Feb 28 05:29:03 np0005634017 podman[348694]: 2026-02-28 10:29:03.713200419 +0000 UTC m=+0.045234788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:29:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:29:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:03 np0005634017 podman[348694]: 2026-02-28 10:29:03.847332658 +0000 UTC m=+0.179366957 container init 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:29:03 np0005634017 podman[348694]: 2026-02-28 10:29:03.853752869 +0000 UTC m=+0.185787148 container start 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:29:03 np0005634017 podman[348694]: 2026-02-28 10:29:03.858630597 +0000 UTC m=+0.190664896 container attach 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]: {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:    "0": [
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:        {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "devices": [
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "/dev/loop3"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            ],
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_name": "ceph_lv0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_size": "21470642176",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "name": "ceph_lv0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "tags": {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cluster_name": "ceph",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.crush_device_class": "",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.encrypted": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.objectstore": "bluestore",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osd_id": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.type": "block",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.vdo": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.with_tpm": "0"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            },
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "type": "block",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "vg_name": "ceph_vg0"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:        }
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:    ],
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:    "1": [
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:        {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "devices": [
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "/dev/loop4"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            ],
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_name": "ceph_lv1",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_size": "21470642176",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "name": "ceph_lv1",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "tags": {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cluster_name": "ceph",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.crush_device_class": "",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.encrypted": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.objectstore": "bluestore",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osd_id": "1",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.type": "block",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.vdo": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.with_tpm": "0"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            },
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "type": "block",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "vg_name": "ceph_vg1"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:        }
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:    ],
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:    "2": [
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:        {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "devices": [
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "/dev/loop5"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            ],
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_name": "ceph_lv2",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_size": "21470642176",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "name": "ceph_lv2",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "tags": {
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.cluster_name": "ceph",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.crush_device_class": "",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.encrypted": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.objectstore": "bluestore",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osd_id": "2",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.type": "block",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.vdo": "0",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:                "ceph.with_tpm": "0"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            },
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "type": "block",
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:            "vg_name": "ceph_vg2"
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:        }
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]:    ]
Feb 28 05:29:04 np0005634017 vigilant_turing[348710]: }
Feb 28 05:29:04 np0005634017 systemd[1]: libpod-6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f.scope: Deactivated successfully.
Feb 28 05:29:04 np0005634017 podman[348694]: 2026-02-28 10:29:04.183502593 +0000 UTC m=+0.515536872 container died 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:29:04 np0005634017 nova_compute[243452]: 2026-02-28 10:29:04.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:29:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7-merged.mount: Deactivated successfully.
Feb 28 05:29:04 np0005634017 podman[348694]: 2026-02-28 10:29:04.226890778 +0000 UTC m=+0.558925047 container remove 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:29:04 np0005634017 systemd[1]: libpod-conmon-6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f.scope: Deactivated successfully.
Feb 28 05:29:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 218 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.659789036 +0000 UTC m=+0.046295879 container create c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:29:04 np0005634017 systemd[1]: Started libpod-conmon-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope.
Feb 28 05:29:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.636139658 +0000 UTC m=+0.022646581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.744742895 +0000 UTC m=+0.131249808 container init c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.751918188 +0000 UTC m=+0.138425061 container start c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.756568489 +0000 UTC m=+0.143075352 container attach c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:29:04 np0005634017 systemd[1]: libpod-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope: Deactivated successfully.
Feb 28 05:29:04 np0005634017 wonderful_joliot[348810]: 167 167
Feb 28 05:29:04 np0005634017 conmon[348810]: conmon c810cd3916abb3007f33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope/container/memory.events
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.759814941 +0000 UTC m=+0.146321824 container died c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:29:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-377e7f4fc97f8bbec2df499eccf0e6e179a7ccbc8f8b14a05bdc289be000c61e-merged.mount: Deactivated successfully.
Feb 28 05:29:04 np0005634017 podman[348794]: 2026-02-28 10:29:04.802171877 +0000 UTC m=+0.188678760 container remove c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:29:04 np0005634017 systemd[1]: libpod-conmon-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope: Deactivated successfully.
Feb 28 05:29:04 np0005634017 podman[348833]: 2026-02-28 10:29:04.990363903 +0000 UTC m=+0.056977111 container create 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:29:05 np0005634017 systemd[1]: Started libpod-conmon-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope.
Feb 28 05:29:05 np0005634017 podman[348833]: 2026-02-28 10:29:04.967726733 +0000 UTC m=+0.034339991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:29:05 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:29:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:29:05 np0005634017 podman[348833]: 2026-02-28 10:29:05.089387429 +0000 UTC m=+0.156000627 container init 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:29:05 np0005634017 podman[348833]: 2026-02-28 10:29:05.104361512 +0000 UTC m=+0.170974730 container start 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:29:05 np0005634017 podman[348833]: 2026-02-28 10:29:05.109658012 +0000 UTC m=+0.176271270 container attach 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:29:05 np0005634017 lvm[348927]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:29:05 np0005634017 lvm[348927]: VG ceph_vg0 finished
Feb 28 05:29:05 np0005634017 lvm[348928]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:29:05 np0005634017 lvm[348928]: VG ceph_vg1 finished
Feb 28 05:29:05 np0005634017 lvm[348930]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:29:05 np0005634017 lvm[348930]: VG ceph_vg2 finished
Feb 28 05:29:05 np0005634017 xenodochial_mendel[348849]: {}
Feb 28 05:29:05 np0005634017 systemd[1]: libpod-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope: Deactivated successfully.
Feb 28 05:29:05 np0005634017 systemd[1]: libpod-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope: Consumed 1.115s CPU time.
Feb 28 05:29:05 np0005634017 podman[348833]: 2026-02-28 10:29:05.932146033 +0000 UTC m=+0.998759251 container died 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:29:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35-merged.mount: Deactivated successfully.
Feb 28 05:29:05 np0005634017 podman[348833]: 2026-02-28 10:29:05.98125968 +0000 UTC m=+1.047872858 container remove 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:29:05 np0005634017 systemd[1]: libpod-conmon-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope: Deactivated successfully.
Feb 28 05:29:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:29:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:29:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:29:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:29:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:29:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 227 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Feb 28 05:29:07 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:29:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Feb 28 05:29:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:09 np0005634017 nova_compute[243452]: 2026-02-28 10:29:09.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:09 np0005634017 nova_compute[243452]: 2026-02-28 10:29:09.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 05:29:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:29:13 np0005634017 nova_compute[243452]: 2026-02-28 10:29:13.675 243456 DEBUG nova.compute.manager [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:29:13 np0005634017 nova_compute[243452]: 2026-02-28 10:29:13.753 243456 INFO nova.compute.manager [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] instance snapshotting#033[00m
Feb 28 05:29:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:14 np0005634017 nova_compute[243452]: 2026-02-28 10:29:14.065 243456 INFO nova.virt.libvirt.driver [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Beginning live snapshot process#033[00m
Feb 28 05:29:14 np0005634017 nova_compute[243452]: 2026-02-28 10:29:14.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:14 np0005634017 nova_compute[243452]: 2026-02-28 10:29:14.231 243456 DEBUG nova.virt.libvirt.imagebackend [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:29:14 np0005634017 nova_compute[243452]: 2026-02-28 10:29:14.442 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(cb18998cfc044a609e0df4ccadf9c2bf) on rbd image(360effe7-8380-410d-a5b8-59c28fa4a75a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:29:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 28 05:29:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb 28 05:29:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb 28 05:29:15 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb 28 05:29:15 np0005634017 nova_compute[243452]: 2026-02-28 10:29:15.154 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] cloning vms/360effe7-8380-410d-a5b8-59c28fa4a75a_disk@cb18998cfc044a609e0df4ccadf9c2bf to images/889a5b88-09b7-4c48-88ed-168ca73ca921 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:29:15 np0005634017 nova_compute[243452]: 2026-02-28 10:29:15.265 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] flattening images/889a5b88-09b7-4c48-88ed-168ca73ca921 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:29:15 np0005634017 nova_compute[243452]: 2026-02-28 10:29:15.580 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] removing snapshot(cb18998cfc044a609e0df4ccadf9c2bf) on rbd image(360effe7-8380-410d-a5b8-59c28fa4a75a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:29:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb 28 05:29:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb 28 05:29:16 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb 28 05:29:16 np0005634017 nova_compute[243452]: 2026-02-28 10:29:16.161 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(snap) on rbd image(889a5b88-09b7-4c48-88ed-168ca73ca921) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:29:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 256 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb 28 05:29:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb 28 05:29:17 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb 28 05:29:17 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb 28 05:29:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 268 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.5 MiB/s wr, 94 op/s
Feb 28 05:29:18 np0005634017 nova_compute[243452]: 2026-02-28 10:29:18.716 243456 INFO nova.virt.libvirt.driver [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Snapshot image upload complete#033[00m
Feb 28 05:29:18 np0005634017 nova_compute[243452]: 2026-02-28 10:29:18.717 243456 INFO nova.compute.manager [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 4.96 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 28 05:29:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:19 np0005634017 nova_compute[243452]: 2026-02-28 10:29:19.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:19 np0005634017 nova_compute[243452]: 2026-02-28 10:29:19.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 154 op/s
Feb 28 05:29:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:21.910 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:29:21 np0005634017 nova_compute[243452]: 2026-02-28 10:29:21.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:21 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:21.911 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:29:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 137 op/s
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.677 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.677 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.714 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:29:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb 28 05:29:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb 28 05:29:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.811 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.812 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.822 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.822 243456 INFO nova.compute.claims [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:29:23 np0005634017 nova_compute[243452]: 2026-02-28 10:29:23.954 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:29:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745733054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.504 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.512 243456 DEBUG nova.compute.provider_tree [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.530 243456 DEBUG nova.scheduler.client.report [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.553 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.554 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.603 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.603 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.619 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:29:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 89 op/s
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.635 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.735 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.737 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.738 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Creating image(s)#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.775 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.801 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.827 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.830 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "4d2b9309393ff07c8cae21ceab624ba9ca90168e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.831 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "4d2b9309393ff07c8cae21ceab624ba9ca90168e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:24 np0005634017 nova_compute[243452]: 2026-02-28 10:29:24.900 243456 DEBUG nova.policy [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f54beab12fce4ee8adf80742bf33b916', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '344dd946e14146ab93c01183964c71b3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.091 243456 DEBUG nova.virt.libvirt.imagebackend [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/889a5b88-09b7-4c48-88ed-168ca73ca921/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/889a5b88-09b7-4c48-88ed-168ca73ca921/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.133 243456 DEBUG nova.virt.libvirt.imagebackend [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/889a5b88-09b7-4c48-88ed-168ca73ca921/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.134 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] cloning images/889a5b88-09b7-4c48-88ed-168ca73ca921@snap to None/0292c690-55db-421d-a004-21a0acd72961_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.297 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "4d2b9309393ff07c8cae21ceab624ba9ca90168e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.444 243456 DEBUG nova.objects.instance [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'migration_context' on Instance uuid 0292c690-55db-421d-a004-21a0acd72961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.463 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.463 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Ensure instance console log exists: /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.464 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.464 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:25 np0005634017 nova_compute[243452]: 2026-02-28 10:29:25.464 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 78 op/s
Feb 28 05:29:26 np0005634017 nova_compute[243452]: 2026-02-28 10:29:26.760 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Successfully created port: 0e852cef-a459-43ce-ad9b-4f379e129ace _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:29:27 np0005634017 podman[349312]: 2026-02-28 10:29:27.169942512 +0000 UTC m=+0.090266040 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:29:27 np0005634017 podman[349311]: 2026-02-28 10:29:27.209944682 +0000 UTC m=+0.130217339 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260223, 
io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.148 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Successfully updated port: 0e852cef-a459-43ce-ad9b-4f379e129ace _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.166 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.167 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.168 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.277 243456 DEBUG nova.compute.manager [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.278 243456 DEBUG nova.compute.manager [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing instance network info cache due to event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.279 243456 DEBUG oslo_concurrency.lockutils [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.365 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:29:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 73 op/s
Feb 28 05:29:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:28 np0005634017 nova_compute[243452]: 2026-02-28 10:29:28.893 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:29:29
Feb 28 05:29:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:29:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:29:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr']
Feb 28 05:29:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:29:29 np0005634017 nova_compute[243452]: 2026-02-28 10:29:29.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:29 np0005634017 nova_compute[243452]: 2026-02-28 10:29:29.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:29.913 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.538 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.577 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.578 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance network_info: |[{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.579 243456 DEBUG oslo_concurrency.lockutils [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.579 243456 DEBUG nova.network.neutron [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.585 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start _get_guest_xml network_info=[{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:29:13Z,direct_url=<?>,disk_format='raw',id=889a5b88-09b7-4c48-88ed-168ca73ca921,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-946860430',owner='344dd946e14146ab93c01183964c71b3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:29:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '889a5b88-09b7-4c48-88ed-168ca73ca921'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.591 243456 WARNING nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.598 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.599 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.610 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.611 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.611 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.612 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:29:13Z,direct_url=<?>,disk_format='raw',id=889a5b88-09b7-4c48-88ed-168ca73ca921,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-946860430',owner='344dd946e14146ab93c01183964c71b3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:29:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.612 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.613 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.613 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.613 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.614 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.614 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.615 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.615 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.615 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.616 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:29:30 np0005634017 nova_compute[243452]: 2026-02-28 10:29:30.620 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:29:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:29:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:29:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528040933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.234 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.265 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.272 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:29:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690528608' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.835 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.837 243456 DEBUG nova.virt.libvirt.vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-981413902',display_name='tempest-TestSnapshotPattern-server-981413902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-981413902',id=122,image_ref='889a5b88-09b7-4c48-88ed-168ca73ca921',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-r0dl808n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='360effe7-8380-410d-a5b8-59c28fa4a75a',image_min_disk='1',image_min_ram='0',image_owner_id='344dd946e14146ab93c01183964c71b3',image_owner_project_name='tempest-TestSnapshotPattern-2121060882',image_owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member',image_user_id='f54beab12fce4ee8adf80742bf33b916',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:29:24Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=0292c690
-55db-421d-a004-21a0acd72961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.838 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.839 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.841 243456 DEBUG nova.objects.instance [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0292c690-55db-421d-a004-21a0acd72961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.868 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <uuid>0292c690-55db-421d-a004-21a0acd72961</uuid>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <name>instance-0000007a</name>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestSnapshotPattern-server-981413902</nova:name>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:29:30</nova:creationTime>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:user uuid="f54beab12fce4ee8adf80742bf33b916">tempest-TestSnapshotPattern-2121060882-project-member</nova:user>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:project uuid="344dd946e14146ab93c01183964c71b3">tempest-TestSnapshotPattern-2121060882</nova:project>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="889a5b88-09b7-4c48-88ed-168ca73ca921"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <nova:port uuid="0e852cef-a459-43ce-ad9b-4f379e129ace">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <entry name="serial">0292c690-55db-421d-a004-21a0acd72961</entry>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <entry name="uuid">0292c690-55db-421d-a004-21a0acd72961</entry>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0292c690-55db-421d-a004-21a0acd72961_disk">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/0292c690-55db-421d-a004-21a0acd72961_disk.config">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:64:2e:7f"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <target dev="tap0e852cef-a4"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/console.log" append="off"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <input type="keyboard" bus="usb"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:29:31 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:29:31 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:29:31 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:29:31 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.869 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Preparing to wait for external event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.870 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.870 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.871 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.873 243456 DEBUG nova.virt.libvirt.vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-981413902',display_name='tempest-TestSnapshotPattern-server-981413902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-981413902',id=122,image_ref='889a5b88-09b7-4c48-88ed-168ca73ca921',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-r0dl808n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='360effe7-8380-410d-a5b8-59c28fa4a75a',image_min_disk='1',image_min_ram='0',image_owner_id='344dd946e14146ab93c01183964c71b3',image_owner_project_name='tempest-TestSnapshotPattern-2121060882',image_owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member',image_user_id='f54beab12fce4ee8adf80742bf33b916',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:29:24Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uui
d=0292c690-55db-421d-a004-21a0acd72961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.874 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.875 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.876 243456 DEBUG os_vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.878 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.879 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.885 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e852cef-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.886 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e852cef-a4, col_values=(('external_ids', {'iface-id': '0e852cef-a459-43ce-ad9b-4f379e129ace', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:2e:7f', 'vm-uuid': '0292c690-55db-421d-a004-21a0acd72961'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:31 np0005634017 NetworkManager[49805]: <info>  [1772274571.8897] manager: (tap0e852cef-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.896 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.897 243456 INFO os_vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4')#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.964 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.964 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.964 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No VIF found with MAC fa:16:3e:64:2e:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.965 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Using config drive#033[00m
Feb 28 05:29:31 np0005634017 nova_compute[243452]: 2026-02-28 10:29:31.992 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:29:32 np0005634017 nova_compute[243452]: 2026-02-28 10:29:32.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.016 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Creating config drive at /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.020 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjofqfkwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.171 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjofqfkwr" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.212 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.218 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config 0292c690-55db-421d-a004-21a0acd72961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.398 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config 0292c690-55db-421d-a004-21a0acd72961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.398 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deleting local config drive /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config because it was imported into RBD.#033[00m
Feb 28 05:29:33 np0005634017 kernel: tap0e852cef-a4: entered promiscuous mode
Feb 28 05:29:33 np0005634017 NetworkManager[49805]: <info>  [1772274573.4536] manager: (tap0e852cef-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Feb 28 05:29:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:33Z|01227|binding|INFO|Claiming lport 0e852cef-a459-43ce-ad9b-4f379e129ace for this chassis.
Feb 28 05:29:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:33Z|01228|binding|INFO|0e852cef-a459-43ce-ad9b-4f379e129ace: Claiming fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.464 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:2e:7f 10.100.0.11'], port_security=['fa:16:3e:64:2e:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0292c690-55db-421d-a004-21a0acd72961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0e852cef-a459-43ce-ad9b-4f379e129ace) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:29:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:33Z|01229|binding|INFO|Setting lport 0e852cef-a459-43ce-ad9b-4f379e129ace ovn-installed in OVS
Feb 28 05:29:33 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:33Z|01230|binding|INFO|Setting lport 0e852cef-a459-43ce-ad9b-4f379e129ace up in Southbound
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.466 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0e852cef-a459-43ce-ad9b-4f379e129ace in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 bound to our chassis#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.468 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 143add3f-ffeb-40fb-88e5-0af28b700615#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:33 np0005634017 systemd-udevd[349489]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.485 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92ac4296-6b36-4962-841c-aaa3d2b9c72d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:29:33 np0005634017 systemd-machined[209480]: New machine qemu-155-instance-0000007a.
Feb 28 05:29:33 np0005634017 NetworkManager[49805]: <info>  [1772274573.4978] device (tap0e852cef-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:29:33 np0005634017 NetworkManager[49805]: <info>  [1772274573.4985] device (tap0e852cef-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:29:33 np0005634017 systemd[1]: Started Virtual Machine qemu-155-instance-0000007a.
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.514 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[951efe52-e67d-46cb-a898-04aa25a49863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.517 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f73c696-3351-4520-bf02-c2608d3f3ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.548 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d371b0c6-e169-4bde-b5db-25ed3121c064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.566 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[80ea9a06-0199-48ee-907b-a96e0741294d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349502, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8530431f-cd9e-45b2-adb6-8fcd6685af68]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603781, 'tstamp': 603781}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349503, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603784, 'tstamp': 603784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349503, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.578 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.581 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap143add3f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.582 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.582 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap143add3f-f0, col_values=(('external_ids', {'iface-id': '57c5c2bd-8863-4c9b-bde1-322e4dae0a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:29:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.582 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:29:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.947 243456 DEBUG nova.network.neutron [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updated VIF entry in instance network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.948 243456 DEBUG nova.network.neutron [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:29:33 np0005634017 nova_compute[243452]: 2026-02-28 10:29:33.967 243456 DEBUG oslo_concurrency.lockutils [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.068 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274574.0675745, 0292c690-55db-421d-a004-21a0acd72961 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.069 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Started (Lifecycle Event)#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.089 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.095 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274574.0678458, 0292c690-55db-421d-a004-21a0acd72961 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.095 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.111 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.115 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.136 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 15 KiB/s wr, 34 op/s
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.861 243456 DEBUG nova.compute.manager [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.861 243456 DEBUG oslo_concurrency.lockutils [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.862 243456 DEBUG oslo_concurrency.lockutils [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.862 243456 DEBUG oslo_concurrency.lockutils [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.863 243456 DEBUG nova.compute.manager [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Processing event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.864 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.869 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274574.8691294, 0292c690-55db-421d-a004-21a0acd72961 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.869 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.873 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.877 243456 INFO nova.virt.libvirt.driver [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance spawned successfully.#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.877 243456 INFO nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 10.14 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.878 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.894 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.900 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.933 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.972 243456 INFO nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 11.19 seconds to build instance.#033[00m
Feb 28 05:29:34 np0005634017 nova_compute[243452]: 2026-02-28 10:29:34.989 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:36 np0005634017 nova_compute[243452]: 2026-02-28 10:29:36.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 13 KiB/s wr, 33 op/s
Feb 28 05:29:36 np0005634017 nova_compute[243452]: 2026-02-28 10:29:36.889 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 502 KiB/s rd, 13 KiB/s wr, 54 op/s
Feb 28 05:29:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.674 243456 DEBUG nova.compute.manager [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.675 243456 DEBUG oslo_concurrency.lockutils [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.676 243456 DEBUG oslo_concurrency.lockutils [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.676 243456 DEBUG oslo_concurrency.lockutils [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.677 243456 DEBUG nova.compute.manager [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] No waiting events found dispatching network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.677 243456 WARNING nova.compute.manager [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received unexpected event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace for instance with vm_state active and task_state None.#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.692 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.692 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:29:39 np0005634017 nova_compute[243452]: 2026-02-28 10:29:39.693 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:29:40 np0005634017 nova_compute[243452]: 2026-02-28 10:29:40.244 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:29:40 np0005634017 nova_compute[243452]: 2026-02-28 10:29:40.244 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:29:40 np0005634017 nova_compute[243452]: 2026-02-28 10:29:40.245 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:29:40 np0005634017 nova_compute[243452]: 2026-02-28 10:29:40.245 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:29:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007763523107519869 of space, bias 1.0, pg target 0.23290569322559607 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0032514036342161058 of space, bias 1.0, pg target 0.9754210902648317 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.358730371058775e-07 of space, bias 4.0, pg target 0.0008830476445270529 quantized to 16 (current 16)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:29:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:29:41 np0005634017 nova_compute[243452]: 2026-02-28 10:29:41.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.273 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.297 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.298 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.298 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.299 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.299 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.299 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.322 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.323 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.324 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.324 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.325 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.674 243456 DEBUG nova.compute.manager [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.675 243456 DEBUG nova.compute.manager [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing instance network info cache due to event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.675 243456 DEBUG oslo_concurrency.lockutils [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.675 243456 DEBUG oslo_concurrency.lockutils [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.676 243456 DEBUG nova.network.neutron [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:29:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:29:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655490118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:29:43 np0005634017 nova_compute[243452]: 2026-02-28 10:29:43.921 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.043 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.044 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.053 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.054 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.295 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.296 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3334MB free_disk=59.94170920923352GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.297 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.298 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.398 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 360effe7-8380-410d-a5b8-59c28fa4a75a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.400 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0292c690-55db-421d-a004-21a0acd72961 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.400 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.426 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.443 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.444 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.461 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.503 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.567 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:29:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.881 243456 DEBUG nova.network.neutron [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updated VIF entry in instance network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.882 243456 DEBUG nova.network.neutron [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:29:44 np0005634017 nova_compute[243452]: 2026-02-28 10:29:44.906 243456 DEBUG oslo_concurrency.lockutils [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:29:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:29:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906682355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:29:45 np0005634017 nova_compute[243452]: 2026-02-28 10:29:45.104 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:29:45 np0005634017 nova_compute[243452]: 2026-02-28 10:29:45.111 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:29:45 np0005634017 nova_compute[243452]: 2026-02-28 10:29:45.130 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:29:45 np0005634017 nova_compute[243452]: 2026-02-28 10:29:45.160 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:29:45 np0005634017 nova_compute[243452]: 2026-02-28 10:29:45.161 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:29:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025239271' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:29:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:29:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025239271' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:29:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 73 op/s
Feb 28 05:29:46 np0005634017 nova_compute[243452]: 2026-02-28 10:29:46.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:47Z|00133|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.11
Feb 28 05:29:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:47Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 05:29:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 318 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 178 KiB/s wr, 86 op/s
Feb 28 05:29:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:49 np0005634017 nova_compute[243452]: 2026-02-28 10:29:49.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 499 KiB/s wr, 101 op/s
Feb 28 05:29:51 np0005634017 nova_compute[243452]: 2026-02-28 10:29:51.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:52Z|00135|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.11
Feb 28 05:29:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:52Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 05:29:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 54 op/s
Feb 28 05:29:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:52Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 05:29:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:29:52Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 05:29:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:54 np0005634017 nova_compute[243452]: 2026-02-28 10:29:54.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 55 op/s
Feb 28 05:29:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 56 op/s
Feb 28 05:29:56 np0005634017 nova_compute[243452]: 2026-02-28 10:29:56.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:29:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:29:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:57.870 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:29:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:29:57.872 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:29:58 np0005634017 podman[349593]: 2026-02-28 10:29:58.131770112 +0000 UTC m=+0.063301584 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 05:29:58 np0005634017 podman[349592]: 2026-02-28 10:29:58.185167694 +0000 UTC m=+0.119326591 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 05:29:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 545 KiB/s wr, 56 op/s
Feb 28 05:29:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:29:59 np0005634017 nova_compute[243452]: 2026-02-28 10:29:59.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:00 np0005634017 nova_compute[243452]: 2026-02-28 10:30:00.093 243456 DEBUG nova.compute.manager [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:30:00 np0005634017 nova_compute[243452]: 2026-02-28 10:30:00.217 243456 INFO nova.compute.manager [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] instance snapshotting#033[00m
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:30:00 np0005634017 nova_compute[243452]: 2026-02-28 10:30:00.530 243456 INFO nova.virt.libvirt.driver [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Beginning live snapshot process#033[00m
Feb 28 05:30:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 371 KiB/s wr, 45 op/s
Feb 28 05:30:00 np0005634017 nova_compute[243452]: 2026-02-28 10:30:00.784 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(3390a3503c494e2abd5d03d13aa46d06) on rbd image(0292c690-55db-421d-a004-21a0acd72961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:30:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb 28 05:30:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb 28 05:30:01 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb 28 05:30:01 np0005634017 nova_compute[243452]: 2026-02-28 10:30:01.520 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] cloning vms/0292c690-55db-421d-a004-21a0acd72961_disk@3390a3503c494e2abd5d03d13aa46d06 to images/4f149197-39c6-4851-9529-9ce8f0064ae9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 05:30:01 np0005634017 nova_compute[243452]: 2026-02-28 10:30:01.646 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] flattening images/4f149197-39c6-4851-9529-9ce8f0064ae9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 05:30:01 np0005634017 nova_compute[243452]: 2026-02-28 10:30:01.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:02 np0005634017 nova_compute[243452]: 2026-02-28 10:30:02.094 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] removing snapshot(3390a3503c494e2abd5d03d13aa46d06) on rbd image(0292c690-55db-421d-a004-21a0acd72961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 05:30:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb 28 05:30:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb 28 05:30:02 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb 28 05:30:02 np0005634017 nova_compute[243452]: 2026-02-28 10:30:02.501 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(snap) on rbd image(4f149197-39c6-4851-9529-9ce8f0064ae9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 05:30:02 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 28 05:30:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 351 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 37 op/s
Feb 28 05:30:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb 28 05:30:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb 28 05:30:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb 28 05:30:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:04 np0005634017 nova_compute[243452]: 2026-02-28 10:30:04.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 10 MiB/s wr, 146 op/s
Feb 28 05:30:05 np0005634017 nova_compute[243452]: 2026-02-28 10:30:05.241 243456 INFO nova.virt.libvirt.driver [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Snapshot image upload complete
Feb 28 05:30:05 np0005634017 nova_compute[243452]: 2026-02-28 10:30:05.242 243456 INFO nova.compute.manager [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 5.02 seconds to snapshot the instance on the hypervisor.
Feb 28 05:30:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb 28 05:30:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb 28 05:30:06 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb 28 05:30:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 431 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 17 MiB/s wr, 220 op/s
Feb 28 05:30:06 np0005634017 podman[349875]: 2026-02-28 10:30:06.799315238 +0000 UTC m=+0.075300064 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:30:06 np0005634017 podman[349875]: 2026-02-28 10:30:06.886293052 +0000 UTC m=+0.162277878 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:06 np0005634017 nova_compute[243452]: 2026-02-28 10:30:06.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:30:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:30:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:30:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 410 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 15 MiB/s wr, 189 op/s
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.802765) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608802822, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1870, "num_deletes": 253, "total_data_size": 2941732, "memory_usage": 2988088, "flush_reason": "Manual Compaction"}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608813456, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2876890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40082, "largest_seqno": 41951, "table_properties": {"data_size": 2868053, "index_size": 5523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16807, "raw_average_key_size": 19, "raw_value_size": 2850633, "raw_average_value_size": 3254, "num_data_blocks": 243, "num_entries": 876, "num_filter_entries": 876, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274442, "oldest_key_time": 1772274442, "file_creation_time": 1772274608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 10768 microseconds, and 5506 cpu microseconds.
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.813535) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2876890 bytes OK
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.813562) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.814871) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.814889) EVENT_LOG_v1 {"time_micros": 1772274608814883, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.814918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2933693, prev total WAL file size 2933693, number of live WAL files 2.
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.815702) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2809KB)], [89(9117KB)]
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608815754, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12213194, "oldest_snapshot_seqno": -1}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6861 keys, 11461924 bytes, temperature: kUnknown
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608869863, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11461924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11413133, "index_size": 30503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 174497, "raw_average_key_size": 25, "raw_value_size": 11287735, "raw_average_value_size": 1645, "num_data_blocks": 1215, "num_entries": 6861, "num_filter_entries": 6861, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.870149) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11461924 bytes
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.871453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.4 rd, 211.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.9 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.2) write-amplify(4.0) OK, records in: 7382, records dropped: 521 output_compression: NoCompression
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.871473) EVENT_LOG_v1 {"time_micros": 1772274608871463, "job": 52, "event": "compaction_finished", "compaction_time_micros": 54193, "compaction_time_cpu_micros": 31112, "output_level": 6, "num_output_files": 1, "total_output_size": 11461924, "num_input_records": 7382, "num_output_records": 6861, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608871862, "job": 52, "event": "table_file_deletion", "file_number": 91}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608873020, "job": 52, "event": "table_file_deletion", "file_number": 89}
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.815593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:08 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:09.006209543 +0000 UTC m=+0.061218946 container create 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.055 243456 DEBUG nova.compute.manager [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.055 243456 DEBUG nova.compute.manager [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing instance network info cache due to event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.056 243456 DEBUG oslo_concurrency.lockutils [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.056 243456 DEBUG oslo_concurrency.lockutils [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.057 243456 DEBUG nova.network.neutron [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:30:09 np0005634017 systemd[1]: Started libpod-conmon-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope.
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:08.980473294 +0000 UTC m=+0.035482777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:30:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:09.113400999 +0000 UTC m=+0.168410422 container init 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:09.121291783 +0000 UTC m=+0.176301216 container start 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:09.126426538 +0000 UTC m=+0.181435941 container attach 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:30:09 np0005634017 youthful_ganguly[350221]: 167 167
Feb 28 05:30:09 np0005634017 systemd[1]: libpod-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope: Deactivated successfully.
Feb 28 05:30:09 np0005634017 conmon[350221]: conmon 217eca288214117b46e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope/container/memory.events
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:09.132303294 +0000 UTC m=+0.187312767 container died 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:30:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c39fabcd655c0b2873905d296a9dea0b9f484a825b23e6a7fef4b94405983071-merged.mount: Deactivated successfully.
Feb 28 05:30:09 np0005634017 podman[350204]: 2026-02-28 10:30:09.188605949 +0000 UTC m=+0.243615362 container remove 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:30:09 np0005634017 systemd[1]: libpod-conmon-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope: Deactivated successfully.
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.372 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.373 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.373 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.374 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.374 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.376 243456 INFO nova.compute.manager [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Terminating instance
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.377 243456 DEBUG nova.compute.manager [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 05:30:09 np0005634017 podman[350245]: 2026-02-28 10:30:09.381483333 +0000 UTC m=+0.054235617 container create f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:30:09 np0005634017 systemd[1]: Started libpod-conmon-f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430.scope.
Feb 28 05:30:09 np0005634017 kernel: tap0e852cef-a4 (unregistering): left promiscuous mode
Feb 28 05:30:09 np0005634017 NetworkManager[49805]: <info>  [1772274609.4466] device (tap0e852cef-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:30:09 np0005634017 podman[350245]: 2026-02-28 10:30:09.357692859 +0000 UTC m=+0.030445163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:30:09Z|01231|binding|INFO|Releasing lport 0e852cef-a459-43ce-ad9b-4f379e129ace from this chassis (sb_readonly=0)
Feb 28 05:30:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:30:09Z|01232|binding|INFO|Setting lport 0e852cef-a459-43ce-ad9b-4f379e129ace down in Southbound
Feb 28 05:30:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:30:09Z|01233|binding|INFO|Removing iface tap0e852cef-a4 ovn-installed in OVS
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.473 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:2e:7f 10.100.0.11'], port_security=['fa:16:3e:64:2e:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0292c690-55db-421d-a004-21a0acd72961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0e852cef-a459-43ce-ad9b-4f379e129ace) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.475 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0e852cef-a459-43ce-ad9b-4f379e129ace in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 unbound from our chassis#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 143add3f-ffeb-40fb-88e5-0af28b700615#033[00m
Feb 28 05:30:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:30:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.501 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46f5f91a-9fc9-4f7a-a58f-d6ddcddabbb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:09 np0005634017 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Feb 28 05:30:09 np0005634017 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Consumed 13.976s CPU time.
Feb 28 05:30:09 np0005634017 podman[350245]: 2026-02-28 10:30:09.508030188 +0000 UTC m=+0.180782482 container init f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:09 np0005634017 systemd-machined[209480]: Machine qemu-155-instance-0000007a terminated.
Feb 28 05:30:09 np0005634017 podman[350245]: 2026-02-28 10:30:09.516540829 +0000 UTC m=+0.189293143 container start f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:30:09 np0005634017 podman[350245]: 2026-02-28 10:30:09.521727416 +0000 UTC m=+0.194479710 container attach f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.531 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ca33baa8-86e9-477c-a707-c65af47e9624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:30:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.537 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1848bcd2-88cf-4bc9-87fd-4da9adb5f054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.563 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cf5d21-e249-4023-a44b-d770c01b5de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa6b2f3-4d39-49b3-a815-3eea24cc3651]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350278, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.604 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd390dcf-21f7-46a6-873c-6c6d9048c929]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603781, 'tstamp': 603781}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350279, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603784, 'tstamp': 603784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350279, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.607 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap143add3f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap143add3f-f0, col_values=(('external_ids', {'iface-id': '57c5c2bd-8863-4c9b-bde1-322e4dae0a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:30:09 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.617 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.618 243456 INFO nova.virt.libvirt.driver [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance destroyed successfully.#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.619 243456 DEBUG nova.objects.instance [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'resources' on Instance uuid 0292c690-55db-421d-a004-21a0acd72961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.647 243456 DEBUG nova.virt.libvirt.vif [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-981413902',display_name='tempest-TestSnapshotPattern-server-981413902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-981413902',id=122,image_ref='889a5b88-09b7-4c48-88ed-168ca73ca921',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:29:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-r0dl808n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='360effe7-8380-410d-a5b8-59c28fa4a75a',image_min_disk='1',image_min_ram='0',image_owner_id='344dd946e14146ab93c01183964c71b3',image_owner_project_name='tempest-TestSnapshotPattern-2121060882',image_owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member',image_user_id='f54beab12fce4ee8adf80742bf33b916',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:30:05Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=0292c690-55db-421d-a004-21a0acd72961,vcpu_model
=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.648 243456 DEBUG nova.network.os_vif_util [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.649 243456 DEBUG nova.network.os_vif_util [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.650 243456 DEBUG os_vif [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.654 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e852cef-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.664 243456 INFO os_vif [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4')#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.970 243456 INFO nova.virt.libvirt.driver [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deleting instance files /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961_del#033[00m
Feb 28 05:30:09 np0005634017 nova_compute[243452]: 2026-02-28 10:30:09.972 243456 INFO nova.virt.libvirt.driver [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deletion of /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961_del complete#033[00m
Feb 28 05:30:10 np0005634017 busy_jackson[350261]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:30:10 np0005634017 busy_jackson[350261]: --> All data devices are unavailable
Feb 28 05:30:10 np0005634017 systemd[1]: libpod-f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430.scope: Deactivated successfully.
Feb 28 05:30:10 np0005634017 podman[350245]: 2026-02-28 10:30:10.046416099 +0000 UTC m=+0.719168413 container died f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:30:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77-merged.mount: Deactivated successfully.
Feb 28 05:30:10 np0005634017 podman[350245]: 2026-02-28 10:30:10.103159546 +0000 UTC m=+0.775911850 container remove f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:30:10 np0005634017 systemd[1]: libpod-conmon-f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430.scope: Deactivated successfully.
Feb 28 05:30:10 np0005634017 nova_compute[243452]: 2026-02-28 10:30:10.374 243456 INFO nova.compute.manager [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:30:10 np0005634017 nova_compute[243452]: 2026-02-28 10:30:10.375 243456 DEBUG oslo.service.loopingcall [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:30:10 np0005634017 nova_compute[243452]: 2026-02-28 10:30:10.375 243456 DEBUG nova.compute.manager [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:30:10 np0005634017 nova_compute[243452]: 2026-02-28 10:30:10.376 243456 DEBUG nova.network.neutron [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.583313438 +0000 UTC m=+0.054574157 container create ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:10 np0005634017 systemd[1]: Started libpod-conmon-ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a.scope.
Feb 28 05:30:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 151 op/s
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.557206668 +0000 UTC m=+0.028467437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:30:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.675348085 +0000 UTC m=+0.146608854 container init ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.682632851 +0000 UTC m=+0.153893560 container start ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.686278134 +0000 UTC m=+0.157538853 container attach ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:10 np0005634017 funny_northcutt[350419]: 167 167
Feb 28 05:30:10 np0005634017 systemd[1]: libpod-ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a.scope: Deactivated successfully.
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.689241338 +0000 UTC m=+0.160502047 container died ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7d1fb276c7ac69b4f3b08b90adca009a3193d304eddb492b5b956f0e7c5487ca-merged.mount: Deactivated successfully.
Feb 28 05:30:10 np0005634017 podman[350401]: 2026-02-28 10:30:10.729935731 +0000 UTC m=+0.201196410 container remove ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 05:30:10 np0005634017 systemd[1]: libpod-conmon-ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a.scope: Deactivated successfully.
Feb 28 05:30:10 np0005634017 podman[350445]: 2026-02-28 10:30:10.878014415 +0000 UTC m=+0.044899872 container create 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:30:10 np0005634017 systemd[1]: Started libpod-conmon-7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997.scope.
Feb 28 05:30:10 np0005634017 podman[350445]: 2026-02-28 10:30:10.858487973 +0000 UTC m=+0.025373480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:30:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:30:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:10 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:10 np0005634017 podman[350445]: 2026-02-28 10:30:10.9929373 +0000 UTC m=+0.159822767 container init 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:30:11 np0005634017 podman[350445]: 2026-02-28 10:30:11.001792271 +0000 UTC m=+0.168677748 container start 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:30:11 np0005634017 podman[350445]: 2026-02-28 10:30:11.005407593 +0000 UTC m=+0.172293050 container attach 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:30:11 np0005634017 zealous_wing[350461]: {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:    "0": [
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:        {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "devices": [
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "/dev/loop3"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            ],
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_name": "ceph_lv0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_size": "21470642176",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "name": "ceph_lv0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "tags": {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cluster_name": "ceph",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.crush_device_class": "",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.encrypted": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.objectstore": "bluestore",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osd_id": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.type": "block",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.vdo": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.with_tpm": "0"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            },
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "type": "block",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "vg_name": "ceph_vg0"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:        }
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:    ],
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:    "1": [
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:        {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "devices": [
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "/dev/loop4"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            ],
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_name": "ceph_lv1",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_size": "21470642176",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "name": "ceph_lv1",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "tags": {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cluster_name": "ceph",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.crush_device_class": "",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.encrypted": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.objectstore": "bluestore",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osd_id": "1",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.type": "block",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.vdo": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.with_tpm": "0"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            },
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "type": "block",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "vg_name": "ceph_vg1"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:        }
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:    ],
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:    "2": [
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:        {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "devices": [
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "/dev/loop5"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            ],
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_name": "ceph_lv2",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_size": "21470642176",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "name": "ceph_lv2",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "tags": {
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.cluster_name": "ceph",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.crush_device_class": "",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.encrypted": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.objectstore": "bluestore",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osd_id": "2",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.type": "block",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.vdo": "0",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:                "ceph.with_tpm": "0"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            },
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "type": "block",
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:            "vg_name": "ceph_vg2"
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:        }
Feb 28 05:30:11 np0005634017 zealous_wing[350461]:    ]
Feb 28 05:30:11 np0005634017 zealous_wing[350461]: }
Feb 28 05:30:11 np0005634017 systemd[1]: libpod-7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997.scope: Deactivated successfully.
Feb 28 05:30:11 np0005634017 podman[350445]: 2026-02-28 10:30:11.311115323 +0000 UTC m=+0.478000770 container died 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:30:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503-merged.mount: Deactivated successfully.
Feb 28 05:30:11 np0005634017 podman[350445]: 2026-02-28 10:30:11.491481913 +0000 UTC m=+0.658367360 container remove 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:30:11 np0005634017 systemd[1]: libpod-conmon-7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997.scope: Deactivated successfully.
Feb 28 05:30:11 np0005634017 nova_compute[243452]: 2026-02-28 10:30:11.712 243456 DEBUG nova.compute.manager [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-vif-deleted-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:30:11 np0005634017 nova_compute[243452]: 2026-02-28 10:30:11.713 243456 INFO nova.compute.manager [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Neutron deleted interface 0e852cef-a459-43ce-ad9b-4f379e129ace; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:30:11 np0005634017 nova_compute[243452]: 2026-02-28 10:30:11.713 243456 DEBUG nova.network.neutron [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:30:11 np0005634017 nova_compute[243452]: 2026-02-28 10:30:11.732 243456 DEBUG nova.network.neutron [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:30:11 np0005634017 nova_compute[243452]: 2026-02-28 10:30:11.764 243456 DEBUG nova.network.neutron [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updated VIF entry in instance network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:30:11 np0005634017 nova_compute[243452]: 2026-02-28 10:30:11.765 243456 DEBUG nova.network.neutron [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.062430636 +0000 UTC m=+0.069292474 container create 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.032723265 +0000 UTC m=+0.039585173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:30:12 np0005634017 systemd[1]: Started libpod-conmon-9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd.scope.
Feb 28 05:30:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:30:12 np0005634017 nova_compute[243452]: 2026-02-28 10:30:12.228 243456 INFO nova.compute.manager [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 1.85 seconds to deallocate network for instance.#033[00m
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.232627387 +0000 UTC m=+0.239489285 container init 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:30:12 np0005634017 nova_compute[243452]: 2026-02-28 10:30:12.238 243456 DEBUG nova.compute.manager [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Detach interface failed, port_id=0e852cef-a459-43ce-ad9b-4f379e129ace, reason: Instance 0292c690-55db-421d-a004-21a0acd72961 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.240170541 +0000 UTC m=+0.247032379 container start 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:30:12 np0005634017 determined_satoshi[350563]: 167 167
Feb 28 05:30:12 np0005634017 systemd[1]: libpod-9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd.scope: Deactivated successfully.
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.24931111 +0000 UTC m=+0.256172948 container attach 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.249864406 +0000 UTC m=+0.256726254 container died 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:30:12 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9288ac3a71ba79d832c9d7be1da9178ef41a90896fa4a5452cfa6a9aa8b079a4-merged.mount: Deactivated successfully.
Feb 28 05:30:12 np0005634017 nova_compute[243452]: 2026-02-28 10:30:12.356 243456 DEBUG oslo_concurrency.lockutils [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:30:12 np0005634017 podman[350546]: 2026-02-28 10:30:12.36120678 +0000 UTC m=+0.368068618 container remove 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:30:12 np0005634017 systemd[1]: libpod-conmon-9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd.scope: Deactivated successfully.
Feb 28 05:30:12 np0005634017 nova_compute[243452]: 2026-02-28 10:30:12.485 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:12 np0005634017 nova_compute[243452]: 2026-02-28 10:30:12.486 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:12 np0005634017 podman[350586]: 2026-02-28 10:30:12.560162196 +0000 UTC m=+0.072725201 container create dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:30:12 np0005634017 nova_compute[243452]: 2026-02-28 10:30:12.559 243456 DEBUG oslo_concurrency.processutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:30:12 np0005634017 systemd[1]: Started libpod-conmon-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope.
Feb 28 05:30:12 np0005634017 podman[350586]: 2026-02-28 10:30:12.527875931 +0000 UTC m=+0.040439046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:30:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:30:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:30:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 320 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.3 MiB/s wr, 146 op/s
Feb 28 05:30:12 np0005634017 podman[350586]: 2026-02-28 10:30:12.668861375 +0000 UTC m=+0.181424380 container init dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:30:12 np0005634017 podman[350586]: 2026-02-28 10:30:12.67609093 +0000 UTC m=+0.188653935 container start dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:30:12 np0005634017 podman[350586]: 2026-02-28 10:30:12.67997665 +0000 UTC m=+0.192539675 container attach dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419439717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:30:13 np0005634017 nova_compute[243452]: 2026-02-28 10:30:13.134 243456 DEBUG oslo_concurrency.processutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:30:13 np0005634017 nova_compute[243452]: 2026-02-28 10:30:13.141 243456 DEBUG nova.compute.provider_tree [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:30:13 np0005634017 nova_compute[243452]: 2026-02-28 10:30:13.192 243456 DEBUG nova.scheduler.client.report [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:30:13 np0005634017 lvm[350704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:30:13 np0005634017 lvm[350704]: VG ceph_vg1 finished
Feb 28 05:30:13 np0005634017 lvm[350701]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:30:13 np0005634017 lvm[350701]: VG ceph_vg0 finished
Feb 28 05:30:13 np0005634017 lvm[350706]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:30:13 np0005634017 lvm[350706]: VG ceph_vg2 finished
Feb 28 05:30:13 np0005634017 peaceful_morse[350604]: {}
Feb 28 05:30:13 np0005634017 systemd[1]: libpod-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope: Deactivated successfully.
Feb 28 05:30:13 np0005634017 systemd[1]: libpod-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope: Consumed 1.277s CPU time.
Feb 28 05:30:13 np0005634017 podman[350586]: 2026-02-28 10:30:13.51239627 +0000 UTC m=+1.024959295 container died dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:30:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c-merged.mount: Deactivated successfully.
Feb 28 05:30:13 np0005634017 nova_compute[243452]: 2026-02-28 10:30:13.542 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:13 np0005634017 podman[350586]: 2026-02-28 10:30:13.562436747 +0000 UTC m=+1.074999802 container remove dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:30:13 np0005634017 systemd[1]: libpod-conmon-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope: Deactivated successfully.
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:30:13 np0005634017 nova_compute[243452]: 2026-02-28 10:30:13.632 243456 INFO nova.scheduler.client.report [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Deleted allocations for instance 0292c690-55db-421d-a004-21a0acd72961#033[00m
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb 28 05:30:13 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb 28 05:30:13 np0005634017 nova_compute[243452]: 2026-02-28 10:30:13.877 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:14 np0005634017 nova_compute[243452]: 2026-02-28 10:30:14.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:30:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 75 op/s
Feb 28 05:30:14 np0005634017 nova_compute[243452]: 2026-02-28 10:30:14.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb 28 05:30:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb 28 05:30:14 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb 28 05:30:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 289 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.0 KiB/s wr, 80 op/s
Feb 28 05:30:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 276 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 70 op/s
Feb 28 05:30:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.237 243456 DEBUG nova.compute.manager [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.237 243456 DEBUG nova.compute.manager [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing instance network info cache due to event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.238 243456 DEBUG oslo_concurrency.lockutils [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.239 243456 DEBUG oslo_concurrency.lockutils [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.239 243456 DEBUG nova.network.neutron [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.364 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.365 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.365 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.366 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.366 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.368 243456 INFO nova.compute.manager [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Terminating instance#033[00m
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.369 243456 DEBUG nova.compute.manager [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:30:19 np0005634017 kernel: tap89352e9c-3f (unregistering): left promiscuous mode
Feb 28 05:30:19 np0005634017 NetworkManager[49805]: <info>  [1772274619.4376] device (tap89352e9c-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:30:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:30:19Z|01234|binding|INFO|Releasing lport 89352e9c-3fec-48bc-a264-6de98ec910c3 from this chassis (sb_readonly=0)
Feb 28 05:30:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:30:19Z|01235|binding|INFO|Setting lport 89352e9c-3fec-48bc-a264-6de98ec910c3 down in Southbound
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:30:19Z|01236|binding|INFO|Removing iface tap89352e9c-3f ovn-installed in OVS
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.471 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b8:e7 10.100.0.7'], port_security=['fa:16:3e:ae:b8:e7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '360effe7-8380-410d-a5b8-59c28fa4a75a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=89352e9c-3fec-48bc-a264-6de98ec910c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.472 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 89352e9c-3fec-48bc-a264-6de98ec910c3 in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 unbound from our chassis#033[00m
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.474 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 143add3f-ffeb-40fb-88e5-0af28b700615, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de3274e8-7ff2-4cdc-a2c3-1c6869f1444f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.476 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 namespace which is not needed anymore#033[00m
Feb 28 05:30:19 np0005634017 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Deactivated successfully.
Feb 28 05:30:19 np0005634017 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Consumed 15.753s CPU time.
Feb 28 05:30:19 np0005634017 systemd-machined[209480]: Machine qemu-154-instance-00000079 terminated.
Feb 28 05:30:19 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : haproxy version is 2.8.14-c23fe91
Feb 28 05:30:19 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : path to executable is /usr/sbin/haproxy
Feb 28 05:30:19 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [WARNING]  (348261) : Exiting Master process...
Feb 28 05:30:19 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [ALERT]    (348261) : Current worker (348263) exited with code 143 (Terminated)
Feb 28 05:30:19 np0005634017 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [WARNING]  (348261) : All workers exited. Exiting... (0)
Feb 28 05:30:19 np0005634017 systemd[1]: libpod-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383.scope: Deactivated successfully.
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.605 243456 INFO nova.virt.libvirt.driver [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance destroyed successfully.
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.605 243456 DEBUG nova.objects.instance [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'resources' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:30:19 np0005634017 podman[350769]: 2026-02-28 10:30:19.607315851 +0000 UTC m=+0.044179403 container died 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:30:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay-939fb4b0dac1cbd7c3f3f65d2ef77a6441361f4baacd11851b6ce732b973b040-merged.mount: Deactivated successfully.
Feb 28 05:30:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383-userdata-shm.mount: Deactivated successfully.
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.637 243456 DEBUG nova.virt.libvirt.vif [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:28:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1297881492',display_name='tempest-TestSnapshotPattern-server-1297881492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1297881492',id=121,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-yvdubwju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:29:18Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=360effe7-8380-410d-a5b8-59c28fa4a75a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.638 243456 DEBUG nova.network.os_vif_util [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.639 243456 DEBUG nova.network.os_vif_util [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.639 243456 DEBUG os_vif [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.643 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89352e9c-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.644 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.649 243456 INFO os_vif [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f')
Feb 28 05:30:19 np0005634017 podman[350769]: 2026-02-28 10:30:19.668623138 +0000 UTC m=+0.105486720 container cleanup 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:30:19 np0005634017 systemd[1]: libpod-conmon-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383.scope: Deactivated successfully.
Feb 28 05:30:19 np0005634017 podman[350823]: 2026-02-28 10:30:19.749654313 +0000 UTC m=+0.053698442 container remove 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c286f6df-2e8c-4fe5-ad9c-690632d5c304]: (4, ('Sat Feb 28 10:30:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 (12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383)\n12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383\nSat Feb 28 10:30:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 (12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383)\n12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5151e6a-9589-40e3-bca3-95e45d66e979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.758 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:19 np0005634017 kernel: tap143add3f-f0: left promiscuous mode
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.772 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24d252a3-5f31-44ab-a49c-89f32ae30a94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8479928-5307-407c-8633-7c24e6adb1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.791 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[19b9a24e-16f0-413d-8f95-03a7cd129db0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8b26de-a131-4bd3-b793-baf321fec2ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603760, 'reachable_time': 27278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350841, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 systemd[1]: run-netns-ovnmeta\x2d143add3f\x2dffeb\x2d40fb\x2d88e5\x2d0af28b700615.mount: Deactivated successfully.
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.812 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 05:30:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.812 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1d780805-b0e7-4e5d-bcc1-ec408d61938c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.930 243456 INFO nova.virt.libvirt.driver [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deleting instance files /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a_del
Feb 28 05:30:19 np0005634017 nova_compute[243452]: 2026-02-28 10:30:19.931 243456 INFO nova.virt.libvirt.driver [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deletion of /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a_del complete
Feb 28 05:30:20 np0005634017 nova_compute[243452]: 2026-02-28 10:30:20.129 243456 INFO nova.compute.manager [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 05:30:20 np0005634017 nova_compute[243452]: 2026-02-28 10:30:20.131 243456 DEBUG oslo.service.loopingcall [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:30:20 np0005634017 nova_compute[243452]: 2026-02-28 10:30:20.131 243456 DEBUG nova.compute.manager [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:30:20 np0005634017 nova_compute[243452]: 2026-02-28 10:30:20.132 243456 DEBUG nova.network.neutron [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:30:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 209 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:30:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:22.462 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:30:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:22.463 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.505 243456 DEBUG nova.network.neutron [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.579 243456 INFO nova.compute.manager [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 2.45 seconds to deallocate network for instance.
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.588 243456 DEBUG nova.compute.manager [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-vif-deleted-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.588 243456 INFO nova.compute.manager [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Neutron deleted interface 89352e9c-3fec-48bc-a264-6de98ec910c3; detaching it from the instance and deleting it from the info cache
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.589 243456 DEBUG nova.network.neutron [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.611 243456 DEBUG nova.compute.manager [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Detach interface failed, port_id=89352e9c-3fec-48bc-a264-6de98ec910c3, reason: Instance 360effe7-8380-410d-a5b8-59c28fa4a75a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.621 243456 DEBUG nova.network.neutron [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated VIF entry in instance network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.621 243456 DEBUG nova.network.neutron [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:30:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 171 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.2 KiB/s wr, 68 op/s
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.659 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.659 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.683 243456 DEBUG oslo_concurrency.lockutils [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:30:22 np0005634017 nova_compute[243452]: 2026-02-28 10:30:22.721 243456 DEBUG oslo_concurrency.processutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:30:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:30:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614214594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:30:23 np0005634017 nova_compute[243452]: 2026-02-28 10:30:23.295 243456 DEBUG oslo_concurrency.processutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:30:23 np0005634017 nova_compute[243452]: 2026-02-28 10:30:23.304 243456 DEBUG nova.compute.provider_tree [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:30:23 np0005634017 nova_compute[243452]: 2026-02-28 10:30:23.336 243456 DEBUG nova.scheduler.client.report [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:30:23 np0005634017 nova_compute[243452]: 2026-02-28 10:30:23.376 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:30:23 np0005634017 nova_compute[243452]: 2026-02-28 10:30:23.430 243456 INFO nova.scheduler.client.report [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Deleted allocations for instance 360effe7-8380-410d-a5b8-59c28fa4a75a
Feb 28 05:30:23 np0005634017 nova_compute[243452]: 2026-02-28 10:30:23.526 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:30:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb 28 05:30:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb 28 05:30:23 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb 28 05:30:24 np0005634017 nova_compute[243452]: 2026-02-28 10:30:24.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:24 np0005634017 nova_compute[243452]: 2026-02-28 10:30:24.618 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274609.6162598, 0292c690-55db-421d-a004-21a0acd72961 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:30:24 np0005634017 nova_compute[243452]: 2026-02-28 10:30:24.618 243456 INFO nova.compute.manager [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:30:24 np0005634017 nova_compute[243452]: 2026-02-28 10:30:24.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:24 np0005634017 nova_compute[243452]: 2026-02-28 10:30:24.651 243456 DEBUG nova.compute.manager [None req-40fe18ae-f308-47f8-9046-c48d25a3a97e - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:30:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 61 op/s
Feb 28 05:30:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Feb 28 05:30:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.815533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628815576, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 480, "num_deletes": 259, "total_data_size": 403177, "memory_usage": 413552, "flush_reason": "Manual Compaction"}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628820307, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 399628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41952, "largest_seqno": 42431, "table_properties": {"data_size": 396807, "index_size": 793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6833, "raw_average_key_size": 18, "raw_value_size": 391023, "raw_average_value_size": 1086, "num_data_blocks": 35, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274609, "oldest_key_time": 1772274609, "file_creation_time": 1772274628, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4833 microseconds, and 1683 cpu microseconds.
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.820362) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 399628 bytes OK
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.820387) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822034) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822055) EVENT_LOG_v1 {"time_micros": 1772274628822049, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 400266, prev total WAL file size 400266, number of live WAL files 2.
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822510) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353032' seq:72057594037927935, type:22 .. '6C6F676D0031373535' seq:0, type:0; will stop at (end)
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(390KB)], [92(10MB)]
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628822550, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11861552, "oldest_snapshot_seqno": -1}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6688 keys, 11729649 bytes, temperature: kUnknown
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628880880, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11729649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11681109, "index_size": 30700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 171907, "raw_average_key_size": 25, "raw_value_size": 11557688, "raw_average_value_size": 1728, "num_data_blocks": 1221, "num_entries": 6688, "num_filter_entries": 6688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274628, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.881151) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11729649 bytes
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.882473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 200.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.9 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(59.0) write-amplify(29.4) OK, records in: 7221, records dropped: 533 output_compression: NoCompression
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.882490) EVENT_LOG_v1 {"time_micros": 1772274628882481, "job": 54, "event": "compaction_finished", "compaction_time_micros": 58405, "compaction_time_cpu_micros": 22330, "output_level": 6, "num_output_files": 1, "total_output_size": 11729649, "num_input_records": 7221, "num_output_records": 6688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628882642, "job": 54, "event": "table_file_deletion", "file_number": 94}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628883903, "job": 54, "event": "table_file_deletion", "file_number": 92}
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.883978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.883998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.884000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.884002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:28 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.884004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:30:29
Feb 28 05:30:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:30:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:30:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'images', '.rgw.root', 'default.rgw.log']
Feb 28 05:30:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:30:29 np0005634017 podman[350866]: 2026-02-28 10:30:29.143264518 +0000 UTC m=+0.075234763 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:30:29 np0005634017 podman[350865]: 2026-02-28 10:30:29.15887337 +0000 UTC m=+0.097474502 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:30:29 np0005634017 nova_compute[243452]: 2026-02-28 10:30:29.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:29 np0005634017 nova_compute[243452]: 2026-02-28 10:30:29.648 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:30 np0005634017 nova_compute[243452]: 2026-02-28 10:30:30.178 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 22 op/s
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:30:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:30:30 np0005634017 nova_compute[243452]: 2026-02-28 10:30:30.845 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:30 np0005634017 nova_compute[243452]: 2026-02-28 10:30:30.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:31 np0005634017 nova_compute[243452]: 2026-02-28 10:30:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:32 np0005634017 nova_compute[243452]: 2026-02-28 10:30:32.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:32.465 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:30:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Feb 28 05:30:33 np0005634017 nova_compute[243452]: 2026-02-28 10:30:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:34 np0005634017 nova_compute[243452]: 2026-02-28 10:30:34.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:34 np0005634017 nova_compute[243452]: 2026-02-28 10:30:34.604 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274619.6021905, 360effe7-8380-410d-a5b8-59c28fa4a75a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:30:34 np0005634017 nova_compute[243452]: 2026-02-28 10:30:34.604 243456 INFO nova.compute.manager [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:30:34 np0005634017 nova_compute[243452]: 2026-02-28 10:30:34.623 243456 DEBUG nova.compute.manager [None req-feef7267-1560-474c-a095-9c4a1f22b6a3 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:30:34 np0005634017 nova_compute[243452]: 2026-02-28 10:30:34.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Feb 28 05:30:35 np0005634017 nova_compute[243452]: 2026-02-28 10:30:35.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:38 np0005634017 nova_compute[243452]: 2026-02-28 10:30:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:38 np0005634017 nova_compute[243452]: 2026-02-28 10:30:38.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:30:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.234 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.353 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2622985130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:30:39 np0005634017 nova_compute[243452]: 2026-02-28 10:30:39.906 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.935297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274639935346, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 341, "num_deletes": 251, "total_data_size": 183183, "memory_usage": 191000, "flush_reason": "Manual Compaction"}
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274639939745, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 181724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42432, "largest_seqno": 42772, "table_properties": {"data_size": 179574, "index_size": 314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5358, "raw_average_key_size": 18, "raw_value_size": 175398, "raw_average_value_size": 604, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274628, "oldest_key_time": 1772274628, "file_creation_time": 1772274639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4523 microseconds, and 1698 cpu microseconds.
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.939817) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 181724 bytes OK
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.939843) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942336) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942354) EVENT_LOG_v1 {"time_micros": 1772274639942348, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 180857, prev total WAL file size 180857, number of live WAL files 2.
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.945052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(177KB)], [95(11MB)]
Feb 28 05:30:39 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274639945209, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11911373, "oldest_snapshot_seqno": -1}
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6469 keys, 10298370 bytes, temperature: kUnknown
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274640018666, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10298370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10252641, "index_size": 28467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 168046, "raw_average_key_size": 25, "raw_value_size": 10134329, "raw_average_value_size": 1566, "num_data_blocks": 1118, "num_entries": 6469, "num_filter_entries": 6469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.018998) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10298370 bytes
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.021110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.9 rd, 140.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(122.2) write-amplify(56.7) OK, records in: 6978, records dropped: 509 output_compression: NoCompression
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.021143) EVENT_LOG_v1 {"time_micros": 1772274640021127, "job": 56, "event": "compaction_finished", "compaction_time_micros": 73550, "compaction_time_cpu_micros": 40144, "output_level": 6, "num_output_files": 1, "total_output_size": 10298370, "num_input_records": 6978, "num_output_records": 6469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274640021334, "job": 56, "event": "table_file_deletion", "file_number": 97}
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274640023402, "job": 56, "event": "table_file_deletion", "file_number": 95}
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.128 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.131 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3694MB free_disk=59.987464110367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.132 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.132 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.208 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.209 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.232 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:30:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:30:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664543732' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.846 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.854 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.877 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.912 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:30:40 np0005634017 nova_compute[243452]: 2026-02-28 10:30:40.912 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3621654362699331e-05 of space, bias 1.0, pg target 0.004086496308809799 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938175375048455 of space, bias 1.0, pg target 0.7481452612514536 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36012762782241e-07 of space, bias 4.0, pg target 0.0008832153153386892 quantized to 16 (current 16)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:30:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:30:41 np0005634017 nova_compute[243452]: 2026-02-28 10:30:41.913 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:41 np0005634017 nova_compute[243452]: 2026-02-28 10:30:41.914 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:30:41 np0005634017 nova_compute[243452]: 2026-02-28 10:30:41.940 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:30:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:44 np0005634017 nova_compute[243452]: 2026-02-28 10:30:44.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:44 np0005634017 nova_compute[243452]: 2026-02-28 10:30:44.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:30:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865405397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:30:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:30:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865405397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:30:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:49 np0005634017 nova_compute[243452]: 2026-02-28 10:30:49.240 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:49 np0005634017 nova_compute[243452]: 2026-02-28 10:30:49.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:54 np0005634017 nova_compute[243452]: 2026-02-28 10:30:54.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:54 np0005634017 nova_compute[243452]: 2026-02-28 10:30:54.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.340 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.350 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.350 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Removable base files: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.352 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.352 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Feb 28 05:30:56 np0005634017 nova_compute[243452]: 2026-02-28 10:30:56.352 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Feb 28 05:30:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:57.871 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:30:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:57.871 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:30:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:30:57.871 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:30:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:30:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:30:59 np0005634017 nova_compute[243452]: 2026-02-28 10:30:59.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:30:59 np0005634017 nova_compute[243452]: 2026-02-28 10:30:59.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:00 np0005634017 podman[350962]: 2026-02-28 10:31:00.144176491 +0000 UTC m=+0.068604244 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:31:00 np0005634017 podman[350961]: 2026-02-28 10:31:00.189759372 +0000 UTC m=+0.121650847 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, 
managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:31:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:04 np0005634017 nova_compute[243452]: 2026-02-28 10:31:04.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:04 np0005634017 nova_compute[243452]: 2026-02-28 10:31:04.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:31:06Z|01237|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 28 05:31:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:09 np0005634017 nova_compute[243452]: 2026-02-28 10:31:09.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:09 np0005634017 nova_compute[243452]: 2026-02-28 10:31:09.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:14 np0005634017 nova_compute[243452]: 2026-02-28 10:31:14.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:31:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:31:14 np0005634017 nova_compute[243452]: 2026-02-28 10:31:14.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.824732588 +0000 UTC m=+0.072237238 container create a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:31:14 np0005634017 systemd[1]: Started libpod-conmon-a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b.scope.
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.787481813 +0000 UTC m=+0.034986472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:31:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.919524024 +0000 UTC m=+0.167028693 container init a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.93137985 +0000 UTC m=+0.178884489 container start a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.935488996 +0000 UTC m=+0.182993665 container attach a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:31:14 np0005634017 fervent_morse[351163]: 167 167
Feb 28 05:31:14 np0005634017 systemd[1]: libpod-a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b.scope: Deactivated successfully.
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.940300282 +0000 UTC m=+0.187804901 container died a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:31:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-07571faa3d10730a2a8e126624fbae3cdca599b41b16882e038f07f3f843939b-merged.mount: Deactivated successfully.
Feb 28 05:31:14 np0005634017 podman[351147]: 2026-02-28 10:31:14.981411677 +0000 UTC m=+0.228916296 container remove a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:31:15 np0005634017 systemd[1]: libpod-conmon-a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b.scope: Deactivated successfully.
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.153663866 +0000 UTC m=+0.039279793 container create 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:31:15 np0005634017 systemd[1]: Started libpod-conmon-083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70.scope.
Feb 28 05:31:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.13757379 +0000 UTC m=+0.023189737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.263807206 +0000 UTC m=+0.149423163 container init 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.269676763 +0000 UTC m=+0.155292690 container start 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.273192472 +0000 UTC m=+0.158808439 container attach 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 05:31:15 np0005634017 youthful_elion[351206]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:31:15 np0005634017 youthful_elion[351206]: --> All data devices are unavailable
Feb 28 05:31:15 np0005634017 systemd[1]: libpod-083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70.scope: Deactivated successfully.
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.792538843 +0000 UTC m=+0.678154790 container died 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:31:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749-merged.mount: Deactivated successfully.
Feb 28 05:31:15 np0005634017 podman[351189]: 2026-02-28 10:31:15.840641846 +0000 UTC m=+0.726257773 container remove 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:31:15 np0005634017 systemd[1]: libpod-conmon-083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70.scope: Deactivated successfully.
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.324149522 +0000 UTC m=+0.054729621 container create 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:31:16 np0005634017 systemd[1]: Started libpod-conmon-6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7.scope.
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.302212901 +0000 UTC m=+0.032793010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:31:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.417235109 +0000 UTC m=+0.147815228 container init 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.427823619 +0000 UTC m=+0.158403688 container start 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.431520653 +0000 UTC m=+0.162100822 container attach 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 05:31:16 np0005634017 stupefied_bassi[351316]: 167 167
Feb 28 05:31:16 np0005634017 systemd[1]: libpod-6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7.scope: Deactivated successfully.
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.435038393 +0000 UTC m=+0.165618462 container died 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:31:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-996a831a32f525b1937bd83053ab6b9b4622b4e9db103d61227d7ae87261b26f-merged.mount: Deactivated successfully.
Feb 28 05:31:16 np0005634017 podman[351300]: 2026-02-28 10:31:16.476414705 +0000 UTC m=+0.206994814 container remove 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:31:16 np0005634017 systemd[1]: libpod-conmon-6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7.scope: Deactivated successfully.
Feb 28 05:31:16 np0005634017 podman[351340]: 2026-02-28 10:31:16.596009633 +0000 UTC m=+0.030045492 container create 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:31:16 np0005634017 systemd[1]: Started libpod-conmon-8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1.scope.
Feb 28 05:31:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:16 np0005634017 podman[351340]: 2026-02-28 10:31:16.676240246 +0000 UTC m=+0.110276125 container init 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:31:16 np0005634017 podman[351340]: 2026-02-28 10:31:16.582870101 +0000 UTC m=+0.016905980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:31:16 np0005634017 podman[351340]: 2026-02-28 10:31:16.683520462 +0000 UTC m=+0.117556321 container start 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:31:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:16 np0005634017 podman[351340]: 2026-02-28 10:31:16.686876277 +0000 UTC m=+0.120912156 container attach 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:31:16 np0005634017 silly_bell[351357]: {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:    "0": [
Feb 28 05:31:16 np0005634017 silly_bell[351357]:        {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "devices": [
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "/dev/loop3"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            ],
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_name": "ceph_lv0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_size": "21470642176",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "name": "ceph_lv0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "tags": {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cluster_name": "ceph",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.crush_device_class": "",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.encrypted": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.objectstore": "bluestore",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osd_id": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.type": "block",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.vdo": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.with_tpm": "0"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            },
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "type": "block",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "vg_name": "ceph_vg0"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:        }
Feb 28 05:31:16 np0005634017 silly_bell[351357]:    ],
Feb 28 05:31:16 np0005634017 silly_bell[351357]:    "1": [
Feb 28 05:31:16 np0005634017 silly_bell[351357]:        {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "devices": [
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "/dev/loop4"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            ],
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_name": "ceph_lv1",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_size": "21470642176",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "name": "ceph_lv1",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "tags": {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cluster_name": "ceph",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.crush_device_class": "",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.encrypted": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.objectstore": "bluestore",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osd_id": "1",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.type": "block",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.vdo": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.with_tpm": "0"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            },
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "type": "block",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "vg_name": "ceph_vg1"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:        }
Feb 28 05:31:16 np0005634017 silly_bell[351357]:    ],
Feb 28 05:31:16 np0005634017 silly_bell[351357]:    "2": [
Feb 28 05:31:16 np0005634017 silly_bell[351357]:        {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "devices": [
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "/dev/loop5"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            ],
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_name": "ceph_lv2",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_size": "21470642176",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "name": "ceph_lv2",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "tags": {
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.cluster_name": "ceph",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.crush_device_class": "",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.encrypted": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.objectstore": "bluestore",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osd_id": "2",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.type": "block",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.vdo": "0",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:                "ceph.with_tpm": "0"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            },
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "type": "block",
Feb 28 05:31:16 np0005634017 silly_bell[351357]:            "vg_name": "ceph_vg2"
Feb 28 05:31:16 np0005634017 silly_bell[351357]:        }
Feb 28 05:31:16 np0005634017 silly_bell[351357]:    ]
Feb 28 05:31:16 np0005634017 silly_bell[351357]: }
Feb 28 05:31:16 np0005634017 systemd[1]: libpod-8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1.scope: Deactivated successfully.
Feb 28 05:31:16 np0005634017 podman[351340]: 2026-02-28 10:31:16.993550574 +0000 UTC m=+0.427586473 container died 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:31:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156-merged.mount: Deactivated successfully.
Feb 28 05:31:17 np0005634017 podman[351340]: 2026-02-28 10:31:17.032873628 +0000 UTC m=+0.466909497 container remove 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:31:17 np0005634017 systemd[1]: libpod-conmon-8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1.scope: Deactivated successfully.
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.517099805 +0000 UTC m=+0.054044682 container create c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:31:17 np0005634017 systemd[1]: Started libpod-conmon-c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d.scope.
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.486976202 +0000 UTC m=+0.023921139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:31:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.600353433 +0000 UTC m=+0.137298361 container init c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.607915598 +0000 UTC m=+0.144860465 container start c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:31:17 np0005634017 goofy_matsumoto[351455]: 167 167
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.612426526 +0000 UTC m=+0.149371393 container attach c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:31:17 np0005634017 systemd[1]: libpod-c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d.scope: Deactivated successfully.
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.613521797 +0000 UTC m=+0.150466634 container died c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:31:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0a98ea44e33e27c85de37d7e2341383f1a1601debcdde59a8b81f8117057d225-merged.mount: Deactivated successfully.
Feb 28 05:31:17 np0005634017 podman[351439]: 2026-02-28 10:31:17.657235095 +0000 UTC m=+0.194179932 container remove c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:31:17 np0005634017 systemd[1]: libpod-conmon-c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d.scope: Deactivated successfully.
Feb 28 05:31:17 np0005634017 podman[351479]: 2026-02-28 10:31:17.835165165 +0000 UTC m=+0.058950661 container create c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:31:17 np0005634017 systemd[1]: Started libpod-conmon-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope.
Feb 28 05:31:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:17 np0005634017 podman[351479]: 2026-02-28 10:31:17.808894481 +0000 UTC m=+0.032680067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:31:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:17 np0005634017 podman[351479]: 2026-02-28 10:31:17.919099083 +0000 UTC m=+0.142884649 container init c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:31:17 np0005634017 podman[351479]: 2026-02-28 10:31:17.925790832 +0000 UTC m=+0.149576358 container start c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:31:17 np0005634017 podman[351479]: 2026-02-28 10:31:17.929123907 +0000 UTC m=+0.152909483 container attach c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:31:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:18 np0005634017 lvm[351574]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:31:18 np0005634017 lvm[351574]: VG ceph_vg1 finished
Feb 28 05:31:18 np0005634017 lvm[351573]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:31:18 np0005634017 lvm[351573]: VG ceph_vg0 finished
Feb 28 05:31:18 np0005634017 lvm[351576]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:31:18 np0005634017 lvm[351576]: VG ceph_vg2 finished
Feb 28 05:31:18 np0005634017 bold_wilbur[351495]: {}
Feb 28 05:31:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:18 np0005634017 systemd[1]: libpod-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope: Deactivated successfully.
Feb 28 05:31:18 np0005634017 systemd[1]: libpod-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope: Consumed 1.380s CPU time.
Feb 28 05:31:18 np0005634017 podman[351479]: 2026-02-28 10:31:18.845445634 +0000 UTC m=+1.069231140 container died c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:31:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca-merged.mount: Deactivated successfully.
Feb 28 05:31:18 np0005634017 podman[351479]: 2026-02-28 10:31:18.898022752 +0000 UTC m=+1.121808238 container remove c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:31:18 np0005634017 systemd[1]: libpod-conmon-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope: Deactivated successfully.
Feb 28 05:31:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:31:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:31:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:31:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:31:19 np0005634017 nova_compute[243452]: 2026-02-28 10:31:19.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:19 np0005634017 nova_compute[243452]: 2026-02-28 10:31:19.669 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:31:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:31:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:22.526 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:31:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:22.526 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:31:22 np0005634017 nova_compute[243452]: 2026-02-28 10:31:22.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:24 np0005634017 nova_compute[243452]: 2026-02-28 10:31:24.255 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:24.529 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:24 np0005634017 nova_compute[243452]: 2026-02-28 10:31:24.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:28 np0005634017 nova_compute[243452]: 2026-02-28 10:31:28.352 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:31:29
Feb 28 05:31:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:31:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:31:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', '.mgr']
Feb 28 05:31:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:31:29 np0005634017 nova_compute[243452]: 2026-02-28 10:31:29.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:29 np0005634017 nova_compute[243452]: 2026-02-28 10:31:29.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:31:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:31:31 np0005634017 podman[351623]: 2026-02-28 10:31:31.124495401 +0000 UTC m=+0.060298580 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:31:31 np0005634017 podman[351622]: 2026-02-28 10:31:31.164753141 +0000 UTC m=+0.106604861 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 05:31:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:33 np0005634017 nova_compute[243452]: 2026-02-28 10:31:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:33 np0005634017 nova_compute[243452]: 2026-02-28 10:31:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:33 np0005634017 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 28 05:31:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:34 np0005634017 nova_compute[243452]: 2026-02-28 10:31:34.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:34 np0005634017 nova_compute[243452]: 2026-02-28 10:31:34.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:35 np0005634017 nova_compute[243452]: 2026-02-28 10:31:35.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:37 np0005634017 nova_compute[243452]: 2026-02-28 10:31:37.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:37 np0005634017 nova_compute[243452]: 2026-02-28 10:31:37.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:38 np0005634017 nova_compute[243452]: 2026-02-28 10:31:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:38 np0005634017 nova_compute[243452]: 2026-02-28 10:31:38.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:31:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:39 np0005634017 nova_compute[243452]: 2026-02-28 10:31:39.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:39 np0005634017 nova_compute[243452]: 2026-02-28 10:31:39.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:39 np0005634017 nova_compute[243452]: 2026-02-28 10:31:39.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3621654362699331e-05 of space, bias 1.0, pg target 0.004086496308809799 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938175375048455 of space, bias 1.0, pg target 0.7481452612514536 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36012762782241e-07 of space, bias 4.0, pg target 0.0008832153153386892 quantized to 16 (current 16)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:31:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.346 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.380 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.380 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.380 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:31:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640620870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:31:41 np0005634017 nova_compute[243452]: 2026-02-28 10:31:41.975 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.225 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3786MB free_disk=59.987464110367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.228 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.228 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.402 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:31:42 np0005634017 nova_compute[243452]: 2026-02-28 10:31:42.481 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:31:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/339860127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.071 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.080 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.107 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.110 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.111 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.853 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.854 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:43 np0005634017 nova_compute[243452]: 2026-02-28 10:31:43.898 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.062 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.063 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.072 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.073 243456 INFO nova.compute.claims [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.188 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:31:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:31:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200127688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.745 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.754 243456 DEBUG nova.compute.provider_tree [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.776 243456 DEBUG nova.scheduler.client.report [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.802 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.803 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.853 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.853 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.877 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:31:44 np0005634017 nova_compute[243452]: 2026-02-28 10:31:44.896 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.002 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.004 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.005 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Creating image(s)#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.038 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.073 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.113 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.118 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.157 243456 DEBUG nova.policy [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.208 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.209 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.210 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.210 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.231 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.235 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 995ed68f-0189-47b0-b060-6b738468c986_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:31:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186015786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:31:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:31:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186015786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.554 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 995ed68f-0189-47b0-b060-6b738468c986_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.631 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.719 243456 DEBUG nova.objects.instance [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.732 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.733 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Ensure instance console log exists: /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.733 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.734 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:45 np0005634017 nova_compute[243452]: 2026-02-28 10:31:45.734 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:46 np0005634017 nova_compute[243452]: 2026-02-28 10:31:46.394 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Successfully created port: 09f54242-3301-4e2f-b606-8423be606192 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:31:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 167 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 291 KiB/s wr, 3 op/s
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.202 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Successfully updated port: 09f54242-3301-4e2f-b606-8423be606192 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.217 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.217 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.217 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.310 243456 DEBUG nova.compute.manager [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-changed-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.311 243456 DEBUG nova.compute.manager [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing instance network info cache due to event network-changed-09f54242-3301-4e2f-b606-8423be606192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.311 243456 DEBUG oslo_concurrency.lockutils [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:31:47 np0005634017 nova_compute[243452]: 2026-02-28 10:31:47.375 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.177 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.215 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.215 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance network_info: |[{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.216 243456 DEBUG oslo_concurrency.lockutils [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.217 243456 DEBUG nova.network.neutron [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing network info cache for port 09f54242-3301-4e2f-b606-8423be606192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.222 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start _get_guest_xml network_info=[{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.228 243456 WARNING nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.236 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.237 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.247 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.248 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.248 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.249 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.250 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.250 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.250 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.251 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.251 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.251 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.252 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.252 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.253 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.253 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.258 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 171 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 304 KiB/s wr, 15 op/s
Feb 28 05:31:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:31:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253996747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:31:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.850 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.875 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:48 np0005634017 nova_compute[243452]: 2026-02-28 10:31:48.881 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:31:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973787330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.481 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.484 243456 DEBUG nova.virt.libvirt.vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1357951307',display_name='tempest-TestNetworkBasicOps-server-1357951307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1357951307',id=123,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZHc+J8n+8VIElpku7oO13/ribfCNYlkJQgBtlmLu1lYsVW1sBn5Xu9dPl6htrZdrELeWC7ndyh8q80ljjBh3aAz6MIHX6lH85ayvL5wAI984ueXttgABed6q+nVWvTQw==',key_name='tempest-TestNetworkBasicOps-951014656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lmudny40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:31:44Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=995ed68f-0189-47b0-b060-6b738468c986,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.484 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.486 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.488 243456 DEBUG nova.objects.instance [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.509 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <uuid>995ed68f-0189-47b0-b060-6b738468c986</uuid>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <name>instance-0000007b</name>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1357951307</nova:name>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:31:48</nova:creationTime>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <nova:port uuid="09f54242-3301-4e2f-b606-8423be606192">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <entry name="serial">995ed68f-0189-47b0-b060-6b738468c986</entry>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <entry name="uuid">995ed68f-0189-47b0-b060-6b738468c986</entry>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/995ed68f-0189-47b0-b060-6b738468c986_disk">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/995ed68f-0189-47b0-b060-6b738468c986_disk.config">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:3a:3a:58"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <target dev="tap09f54242-33"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/console.log" append="off"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:31:49 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:31:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:31:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:31:49 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.510 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Preparing to wait for external event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.512 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.513 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.513 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.515 243456 DEBUG nova.virt.libvirt.vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1357951307',display_name='tempest-TestNetworkBasicOps-server-1357951307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1357951307',id=123,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZHc+J8n+8VIElpku7oO13/ribfCNYlkJQgBtlmLu1lYsVW1sBn5Xu9dPl6htrZdrELeWC7ndyh8q80ljjBh3aAz6MIHX6lH85ayvL5wAI984ueXttgABed6q+nVWvTQw==',key_name='tempest-TestNetworkBasicOps-951014656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lmudny40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:31:44Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=995ed68f-0189-47b0-b060-6b738468c986,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.516 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.517 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.518 243456 DEBUG os_vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.519 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.520 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.526 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09f54242-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.527 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09f54242-33, col_values=(('external_ids', {'iface-id': '09f54242-3301-4e2f-b606-8423be606192', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:3a:58', 'vm-uuid': '995ed68f-0189-47b0-b060-6b738468c986'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:49 np0005634017 NetworkManager[49805]: <info>  [1772274709.5307] manager: (tap09f54242-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.541 243456 INFO os_vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33')#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.617 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.618 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.618 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:3a:3a:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.619 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Using config drive#033[00m
Feb 28 05:31:49 np0005634017 nova_compute[243452]: 2026-02-28 10:31:49.648 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:31:51 np0005634017 nova_compute[243452]: 2026-02-28 10:31:51.686 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Creating config drive at /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config#033[00m
Feb 28 05:31:51 np0005634017 nova_compute[243452]: 2026-02-28 10:31:51.691 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpedrqg1n_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:51 np0005634017 nova_compute[243452]: 2026-02-28 10:31:51.827 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpedrqg1n_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:51 np0005634017 nova_compute[243452]: 2026-02-28 10:31:51.855 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:31:51 np0005634017 nova_compute[243452]: 2026-02-28 10:31:51.861 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config 995ed68f-0189-47b0-b060-6b738468c986_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.001 243456 DEBUG nova.network.neutron [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated VIF entry in instance network info cache for port 09f54242-3301-4e2f-b606-8423be606192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.002 243456 DEBUG nova.network.neutron [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.018 243456 DEBUG oslo_concurrency.lockutils [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.033 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config 995ed68f-0189-47b0-b060-6b738468c986_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.034 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deleting local config drive /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config because it was imported into RBD.#033[00m
Feb 28 05:31:52 np0005634017 systemd[1]: Starting libvirt secret daemon...
Feb 28 05:31:52 np0005634017 systemd[1]: Started libvirt secret daemon.
Feb 28 05:31:52 np0005634017 kernel: tap09f54242-33: entered promiscuous mode
Feb 28 05:31:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:31:52Z|01238|binding|INFO|Claiming lport 09f54242-3301-4e2f-b606-8423be606192 for this chassis.
Feb 28 05:31:52 np0005634017 NetworkManager[49805]: <info>  [1772274712.1284] manager: (tap09f54242-33): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Feb 28 05:31:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:31:52Z|01239|binding|INFO|09f54242-3301-4e2f-b606-8423be606192: Claiming fa:16:3e:3a:3a:58 10.100.0.13
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.147 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:3a:58 10.100.0.13'], port_security=['fa:16:3e:3a:3a:58 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '995ed68f-0189-47b0-b060-6b738468c986', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11c8a335-c267-4faf-a7d1-f407690da05d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96252fd8-ac35-49bb-9585-1943d9426258', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d69cbf-f4a8-43f3-8231-31d5040383f1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09f54242-3301-4e2f-b606-8423be606192) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.148 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09f54242-3301-4e2f-b606-8423be606192 in datapath 11c8a335-c267-4faf-a7d1-f407690da05d bound to our chassis#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11c8a335-c267-4faf-a7d1-f407690da05d#033[00m
Feb 28 05:31:52 np0005634017 systemd-udevd[352051]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:31:52 np0005634017 systemd-machined[209480]: New machine qemu-156-instance-0000007b.
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc81cb8-b142-4a09-a7b3-fc8aa5332af4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.165 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap11c8a335-c1 in ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.167 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap11c8a335-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.167 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7901ce-21b5-48ed-b47e-3118067b872f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:31:52Z|01240|binding|INFO|Setting lport 09f54242-3301-4e2f-b606-8423be606192 ovn-installed in OVS
Feb 28 05:31:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:31:52Z|01241|binding|INFO|Setting lport 09f54242-3301-4e2f-b606-8423be606192 up in Southbound
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15d916a0-69eb-48d0-a7a8-98886010c557]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 NetworkManager[49805]: <info>  [1772274712.1710] device (tap09f54242-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:31:52 np0005634017 NetworkManager[49805]: <info>  [1772274712.1716] device (tap09f54242-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:52 np0005634017 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.183 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc6fd90-1a4a-422d-aa2b-33f592a03862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cc056e-d7ab-484f-84e6-2ffdacda1cc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.224 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2fc74d-6abc-4219-98f3-9238269416d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.230 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[686a52f1-2606-4b19-a43b-098f842eaae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 NetworkManager[49805]: <info>  [1772274712.2315] manager: (tap11c8a335-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc79437-83c6-4b01-8dde-0a8d2b6055d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.267 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[502a6644-a74b-4c7b-8266-ef85784f1247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 NetworkManager[49805]: <info>  [1772274712.2870] device (tap11c8a335-c0): carrier: link connected
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.290 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[07c159fa-2259-4695-85b9-df9458f4369e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac00714-8176-4485-acff-3a6e84df78e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11c8a335-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0d:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621957, 'reachable_time': 34615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352085, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.326 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8667337-ef6d-417e-8fb8-9ee5d230ce1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:d42'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621957, 'tstamp': 621957}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352086, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.328 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.328 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.338 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b844da06-27b1-4e94-9655-005a18edb207]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11c8a335-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0d:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621957, 'reachable_time': 34615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352087, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e532f09e-0d6f-418d-b3b6-26ee001f06b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.419 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc807e9-bb01-43d4-83f0-afe297a8c17c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.421 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11c8a335-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11c8a335-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:52 np0005634017 NetworkManager[49805]: <info>  [1772274712.4251] manager: (tap11c8a335-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Feb 28 05:31:52 np0005634017 kernel: tap11c8a335-c0: entered promiscuous mode
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.427 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11c8a335-c0, col_values=(('external_ids', {'iface-id': '6d119866-5b77-4352-91cc-12a26b0fe463'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:31:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:31:52Z|01242|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.431 243456 DEBUG nova.compute.manager [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.432 243456 DEBUG oslo_concurrency.lockutils [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.432 243456 DEBUG oslo_concurrency.lockutils [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.433 243456 DEBUG oslo_concurrency.lockutils [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.434 243456 DEBUG nova.compute.manager [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Processing event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.438 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/11c8a335-c267-4faf-a7d1-f407690da05d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/11c8a335-c267-4faf-a7d1-f407690da05d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.439 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bb8795-ce66-44af-b61b-854870e2b829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.440 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-11c8a335-c267-4faf-a7d1-f407690da05d
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/11c8a335-c267-4faf-a7d1-f407690da05d.pid.haproxy
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 11c8a335-c267-4faf-a7d1-f407690da05d
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:31:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.443 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'env', 'PROCESS_TAG=haproxy-11c8a335-c267-4faf-a7d1-f407690da05d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/11c8a335-c267-4faf-a7d1-f407690da05d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.662 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.665 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274712.6616683, 995ed68f-0189-47b0-b060-6b738468c986 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.665 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Started (Lifecycle Event)#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.669 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.674 243456 INFO nova.virt.libvirt.driver [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance spawned successfully.#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.674 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.700 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:31:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.711 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.717 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.718 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.719 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.720 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.720 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.721 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.763 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.764 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274712.663714, 995ed68f-0189-47b0-b060-6b738468c986 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.764 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:31:52 np0005634017 podman[352161]: 2026-02-28 10:31:52.794794806 +0000 UTC m=+0.053824096 container create 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.800 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.807 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274712.6686232, 995ed68f-0189-47b0-b060-6b738468c986 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.807 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.816 243456 INFO nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 7.81 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.817 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:31:52 np0005634017 systemd[1]: Started libpod-conmon-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c.scope.
Feb 28 05:31:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:31:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bb818805ca7bc9b5cb8f2064ec0e8cd84927ba5cb46b2bac1c983c996bfba5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:31:52 np0005634017 podman[352161]: 2026-02-28 10:31:52.860172348 +0000 UTC m=+0.119201668 container init 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 05:31:52 np0005634017 podman[352161]: 2026-02-28 10:31:52.864821339 +0000 UTC m=+0.123850619 container start 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 05:31:52 np0005634017 podman[352161]: 2026-02-28 10:31:52.768702787 +0000 UTC m=+0.027732087 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.866 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.872 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:31:52 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : New worker (352183) forked
Feb 28 05:31:52 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : Loading success.
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.920 243456 INFO nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 8.94 seconds to build instance.#033[00m
Feb 28 05:31:52 np0005634017 nova_compute[243452]: 2026-02-28 10:31:52.939 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.553 243456 DEBUG nova.compute.manager [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.553 243456 DEBUG oslo_concurrency.lockutils [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.554 243456 DEBUG oslo_concurrency.lockutils [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.554 243456 DEBUG oslo_concurrency.lockutils [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.555 243456 DEBUG nova.compute.manager [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] No waiting events found dispatching network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:31:54 np0005634017 nova_compute[243452]: 2026-02-28 10:31:54.555 243456 WARNING nova.compute.manager [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received unexpected event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:31:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 05:31:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:31:57 np0005634017 nova_compute[243452]: 2026-02-28 10:31:57.269 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:57 np0005634017 nova_compute[243452]: 2026-02-28 10:31:57.748 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 995ed68f-0189-47b0-b060-6b738468c986 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:31:57 np0005634017 nova_compute[243452]: 2026-02-28 10:31:57.749 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:57 np0005634017 nova_compute[243452]: 2026-02-28 10:31:57.749 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "995ed68f-0189-47b0-b060-6b738468c986" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:57 np0005634017 nova_compute[243452]: 2026-02-28 10:31:57.772 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "995ed68f-0189-47b0-b060-6b738468c986" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:57.872 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:31:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:31:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:31:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:31:58 np0005634017 nova_compute[243452]: 2026-02-28 10:31:58.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:31:58 np0005634017 nova_compute[243452]: 2026-02-28 10:31:58.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:31:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 96 op/s
Feb 28 05:31:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:31:59 np0005634017 nova_compute[243452]: 2026-02-28 10:31:59.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:31:59 np0005634017 nova_compute[243452]: 2026-02-28 10:31:59.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:32:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 85 op/s
Feb 28 05:32:02 np0005634017 podman[352193]: 2026-02-28 10:32:02.128610705 +0000 UTC m=+0.057194991 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:32:02 np0005634017 podman[352192]: 2026-02-28 10:32:02.191518867 +0000 UTC m=+0.126993158 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 05:32:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:32:03 np0005634017 NetworkManager[49805]: <info>  [1772274723.5419] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Feb 28 05:32:03 np0005634017 NetworkManager[49805]: <info>  [1772274723.5432] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:03Z|01243|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.974 243456 DEBUG nova.compute.manager [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-changed-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.974 243456 DEBUG nova.compute.manager [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing instance network info cache due to event network-changed-09f54242-3301-4e2f-b606-8423be606192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.975 243456 DEBUG oslo_concurrency.lockutils [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.975 243456 DEBUG oslo_concurrency.lockutils [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:03 np0005634017 nova_compute[243452]: 2026-02-28 10:32:03.976 243456 DEBUG nova.network.neutron [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing network info cache for port 09f54242-3301-4e2f-b606-8423be606192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:32:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:04Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:3a:58 10.100.0.13
Feb 28 05:32:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:04Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:3a:58 10.100.0.13
Feb 28 05:32:04 np0005634017 nova_compute[243452]: 2026-02-28 10:32:04.274 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:04 np0005634017 nova_compute[243452]: 2026-02-28 10:32:04.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 205 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 685 KiB/s wr, 82 op/s
Feb 28 05:32:05 np0005634017 nova_compute[243452]: 2026-02-28 10:32:05.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 216 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Feb 28 05:32:07 np0005634017 nova_compute[243452]: 2026-02-28 10:32:07.812 243456 DEBUG nova.network.neutron [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated VIF entry in instance network info cache for port 09f54242-3301-4e2f-b606-8423be606192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:32:07 np0005634017 nova_compute[243452]: 2026-02-28 10:32:07.813 243456 DEBUG nova.network.neutron [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:32:07 np0005634017 nova_compute[243452]: 2026-02-28 10:32:07.848 243456 DEBUG oslo_concurrency.lockutils [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:32:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 232 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 05:32:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:09 np0005634017 nova_compute[243452]: 2026-02-28 10:32:09.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:09 np0005634017 nova_compute[243452]: 2026-02-28 10:32:09.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:09Z|01244|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 05:32:09 np0005634017 nova_compute[243452]: 2026-02-28 10:32:09.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:32:11 np0005634017 nova_compute[243452]: 2026-02-28 10:32:11.344 243456 INFO nova.compute.manager [None req-4057f0f1-b445-4be5-8fc7-ecc31ddc2357 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Get console output#033[00m
Feb 28 05:32:11 np0005634017 nova_compute[243452]: 2026-02-28 10:32:11.354 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:32:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:32:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:12Z|01245|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 05:32:12 np0005634017 nova_compute[243452]: 2026-02-28 10:32:12.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:14 np0005634017 nova_compute[243452]: 2026-02-28 10:32:14.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:14 np0005634017 nova_compute[243452]: 2026-02-28 10:32:14.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:32:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Feb 28 05:32:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 715 KiB/s wr, 25 op/s
Feb 28 05:32:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:19 np0005634017 nova_compute[243452]: 2026-02-28 10:32:19.283 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:19 np0005634017 nova_compute[243452]: 2026-02-28 10:32:19.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:32:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.230766038 +0000 UTC m=+0.045294174 container create cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:32:20 np0005634017 systemd[1]: Started libpod-conmon-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope.
Feb 28 05:32:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:32:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:32:20 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:32:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.211563494 +0000 UTC m=+0.026091640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.316469186 +0000 UTC m=+0.130997342 container init cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.324160573 +0000 UTC m=+0.138688699 container start cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.327320113 +0000 UTC m=+0.141848409 container attach cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:32:20 np0005634017 eager_knuth[352400]: 167 167
Feb 28 05:32:20 np0005634017 systemd[1]: libpod-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope: Deactivated successfully.
Feb 28 05:32:20 np0005634017 conmon[352400]: conmon cc482a0848fc82543032 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope/container/memory.events
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.334371173 +0000 UTC m=+0.148899299 container died cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:32:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e19b874f6a4b04e75703d78a32076434ae677a496ce305d6cc3df42dd2835ed3-merged.mount: Deactivated successfully.
Feb 28 05:32:20 np0005634017 podman[352383]: 2026-02-28 10:32:20.382483226 +0000 UTC m=+0.197011352 container remove cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:32:20 np0005634017 systemd[1]: libpod-conmon-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope: Deactivated successfully.
Feb 28 05:32:20 np0005634017 podman[352423]: 2026-02-28 10:32:20.523805149 +0000 UTC m=+0.043303768 container create 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:32:20 np0005634017 systemd[1]: Started libpod-conmon-53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05.scope.
Feb 28 05:32:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:20 np0005634017 podman[352423]: 2026-02-28 10:32:20.502609288 +0000 UTC m=+0.022107937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:32:20 np0005634017 podman[352423]: 2026-02-28 10:32:20.616384061 +0000 UTC m=+0.135882680 container init 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:32:20 np0005634017 podman[352423]: 2026-02-28 10:32:20.632353434 +0000 UTC m=+0.151852033 container start 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 05:32:20 np0005634017 podman[352423]: 2026-02-28 10:32:20.636151041 +0000 UTC m=+0.155649670 container attach 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:32:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 58 KiB/s wr, 3 op/s
Feb 28 05:32:21 np0005634017 frosty_sinoussi[352440]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:32:21 np0005634017 frosty_sinoussi[352440]: --> All data devices are unavailable
Feb 28 05:32:21 np0005634017 systemd[1]: libpod-53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05.scope: Deactivated successfully.
Feb 28 05:32:21 np0005634017 podman[352423]: 2026-02-28 10:32:21.161701559 +0000 UTC m=+0.681200158 container died 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:32:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539-merged.mount: Deactivated successfully.
Feb 28 05:32:21 np0005634017 podman[352423]: 2026-02-28 10:32:21.196545796 +0000 UTC m=+0.716044395 container remove 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:32:21 np0005634017 systemd[1]: libpod-conmon-53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05.scope: Deactivated successfully.
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.650385502 +0000 UTC m=+0.064669483 container create b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:32:21 np0005634017 systemd[1]: Started libpod-conmon-b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2.scope.
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.626292189 +0000 UTC m=+0.040576160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:32:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.747654207 +0000 UTC m=+0.161938198 container init b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.752629528 +0000 UTC m=+0.166913469 container start b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.755847559 +0000 UTC m=+0.170131590 container attach b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:32:21 np0005634017 dazzling_dewdney[352551]: 167 167
Feb 28 05:32:21 np0005634017 systemd[1]: libpod-b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2.scope: Deactivated successfully.
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.760052658 +0000 UTC m=+0.174336619 container died b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:32:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d8c96c3a3bfcc8fb96d2ece91ab44f8242921e40238d56da4a80e7fbd4a88759-merged.mount: Deactivated successfully.
Feb 28 05:32:21 np0005634017 podman[352534]: 2026-02-28 10:32:21.794195606 +0000 UTC m=+0.208479547 container remove b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:32:21 np0005634017 systemd[1]: libpod-conmon-b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2.scope: Deactivated successfully.
Feb 28 05:32:21 np0005634017 podman[352575]: 2026-02-28 10:32:21.966307541 +0000 UTC m=+0.058461207 container create 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:32:22 np0005634017 systemd[1]: Started libpod-conmon-293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760.scope.
Feb 28 05:32:22 np0005634017 podman[352575]: 2026-02-28 10:32:21.941328544 +0000 UTC m=+0.033482030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:32:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:22 np0005634017 podman[352575]: 2026-02-28 10:32:22.059710247 +0000 UTC m=+0.151863683 container init 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:32:22 np0005634017 podman[352575]: 2026-02-28 10:32:22.071006577 +0000 UTC m=+0.163159983 container start 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:32:22 np0005634017 podman[352575]: 2026-02-28 10:32:22.075487534 +0000 UTC m=+0.167640950 container attach 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]: {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:    "0": [
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:        {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "devices": [
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "/dev/loop3"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            ],
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_name": "ceph_lv0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_size": "21470642176",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "name": "ceph_lv0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "tags": {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cluster_name": "ceph",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.crush_device_class": "",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.encrypted": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.objectstore": "bluestore",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osd_id": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.type": "block",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.vdo": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.with_tpm": "0"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            },
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "type": "block",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "vg_name": "ceph_vg0"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:        }
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:    ],
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:    "1": [
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:        {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "devices": [
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "/dev/loop4"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            ],
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_name": "ceph_lv1",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_size": "21470642176",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "name": "ceph_lv1",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "tags": {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cluster_name": "ceph",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.crush_device_class": "",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.encrypted": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.objectstore": "bluestore",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osd_id": "1",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.type": "block",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.vdo": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.with_tpm": "0"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            },
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "type": "block",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "vg_name": "ceph_vg1"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:        }
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:    ],
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:    "2": [
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:        {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "devices": [
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "/dev/loop5"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            ],
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_name": "ceph_lv2",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_size": "21470642176",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "name": "ceph_lv2",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "tags": {
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.cluster_name": "ceph",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.crush_device_class": "",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.encrypted": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.objectstore": "bluestore",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osd_id": "2",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.type": "block",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.vdo": "0",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:                "ceph.with_tpm": "0"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            },
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "type": "block",
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:            "vg_name": "ceph_vg2"
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:        }
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]:    ]
Feb 28 05:32:22 np0005634017 charming_mirzakhani[352592]: }
Feb 28 05:32:22 np0005634017 systemd[1]: libpod-293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760.scope: Deactivated successfully.
Feb 28 05:32:22 np0005634017 podman[352575]: 2026-02-28 10:32:22.386759701 +0000 UTC m=+0.478913137 container died 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:32:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef-merged.mount: Deactivated successfully.
Feb 28 05:32:22 np0005634017 podman[352575]: 2026-02-28 10:32:22.425006215 +0000 UTC m=+0.517159621 container remove 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:32:22 np0005634017 systemd[1]: libpod-conmon-293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760.scope: Deactivated successfully.
Feb 28 05:32:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 15 KiB/s wr, 0 op/s
Feb 28 05:32:22 np0005634017 podman[352677]: 2026-02-28 10:32:22.980699346 +0000 UTC m=+0.064483528 container create 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:32:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:22.996 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:32:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:22.999 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:32:23 np0005634017 nova_compute[243452]: 2026-02-28 10:32:22.996 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:23 np0005634017 systemd[1]: Started libpod-conmon-9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf.scope.
Feb 28 05:32:23 np0005634017 podman[352677]: 2026-02-28 10:32:22.954380031 +0000 UTC m=+0.038164293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:32:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:23 np0005634017 podman[352677]: 2026-02-28 10:32:23.074840093 +0000 UTC m=+0.158624315 container init 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:32:23 np0005634017 podman[352677]: 2026-02-28 10:32:23.080949026 +0000 UTC m=+0.164733198 container start 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:32:23 np0005634017 podman[352677]: 2026-02-28 10:32:23.084760494 +0000 UTC m=+0.168544676 container attach 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:32:23 np0005634017 vibrant_goldstine[352693]: 167 167
Feb 28 05:32:23 np0005634017 systemd[1]: libpod-9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf.scope: Deactivated successfully.
Feb 28 05:32:23 np0005634017 podman[352677]: 2026-02-28 10:32:23.087636365 +0000 UTC m=+0.171420547 container died 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:32:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-49ce7dfef9ce2f860e88a2d7322ce977072315bd49f4e6ddd92780e09a5575d5-merged.mount: Deactivated successfully.
Feb 28 05:32:23 np0005634017 podman[352677]: 2026-02-28 10:32:23.123226513 +0000 UTC m=+0.207010695 container remove 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:32:23 np0005634017 systemd[1]: libpod-conmon-9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf.scope: Deactivated successfully.
Feb 28 05:32:23 np0005634017 podman[352719]: 2026-02-28 10:32:23.306840794 +0000 UTC m=+0.057415207 container create c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:32:23 np0005634017 systemd[1]: Started libpod-conmon-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope.
Feb 28 05:32:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:23 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:32:23 np0005634017 podman[352719]: 2026-02-28 10:32:23.284511092 +0000 UTC m=+0.035085545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:32:23 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:32:23 np0005634017 podman[352719]: 2026-02-28 10:32:23.393378535 +0000 UTC m=+0.143952968 container init c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:32:23 np0005634017 podman[352719]: 2026-02-28 10:32:23.401880806 +0000 UTC m=+0.152455209 container start c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:32:23 np0005634017 podman[352719]: 2026-02-28 10:32:23.40554396 +0000 UTC m=+0.156118413 container attach c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.840726) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743840795, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1060, "num_deletes": 250, "total_data_size": 1551929, "memory_usage": 1575608, "flush_reason": "Manual Compaction"}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743849964, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 925313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42773, "largest_seqno": 43832, "table_properties": {"data_size": 921238, "index_size": 1663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10704, "raw_average_key_size": 20, "raw_value_size": 912522, "raw_average_value_size": 1765, "num_data_blocks": 75, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274640, "oldest_key_time": 1772274640, "file_creation_time": 1772274743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 9294 microseconds, and 2848 cpu microseconds.
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.850022) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 925313 bytes OK
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.850051) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852287) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852304) EVENT_LOG_v1 {"time_micros": 1772274743852299, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1546971, prev total WAL file size 1546971, number of live WAL files 2.
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(903KB)], [98(10057KB)]
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743852929, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11223683, "oldest_snapshot_seqno": -1}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6520 keys, 8494150 bytes, temperature: kUnknown
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743894420, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8494150, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8451514, "index_size": 25246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 169277, "raw_average_key_size": 25, "raw_value_size": 8335819, "raw_average_value_size": 1278, "num_data_blocks": 987, "num_entries": 6520, "num_filter_entries": 6520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.894727) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8494150 bytes
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.896117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.9 rd, 204.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(21.3) write-amplify(9.2) OK, records in: 6986, records dropped: 466 output_compression: NoCompression
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.896136) EVENT_LOG_v1 {"time_micros": 1772274743896126, "job": 58, "event": "compaction_finished", "compaction_time_micros": 41581, "compaction_time_cpu_micros": 20217, "output_level": 6, "num_output_files": 1, "total_output_size": 8494150, "num_input_records": 6986, "num_output_records": 6520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743896334, "job": 58, "event": "table_file_deletion", "file_number": 100}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743897574, "job": 58, "event": "table_file_deletion", "file_number": 98}
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:32:23 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:32:24 np0005634017 lvm[352815]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:32:24 np0005634017 lvm[352816]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:32:24 np0005634017 lvm[352816]: VG ceph_vg1 finished
Feb 28 05:32:24 np0005634017 lvm[352815]: VG ceph_vg0 finished
Feb 28 05:32:24 np0005634017 lvm[352818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:32:24 np0005634017 lvm[352818]: VG ceph_vg2 finished
Feb 28 05:32:24 np0005634017 nice_driscoll[352736]: {}
Feb 28 05:32:24 np0005634017 systemd[1]: libpod-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope: Deactivated successfully.
Feb 28 05:32:24 np0005634017 systemd[1]: libpod-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope: Consumed 1.264s CPU time.
Feb 28 05:32:24 np0005634017 podman[352719]: 2026-02-28 10:32:24.247369746 +0000 UTC m=+0.997944259 container died c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:32:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108-merged.mount: Deactivated successfully.
Feb 28 05:32:24 np0005634017 nova_compute[243452]: 2026-02-28 10:32:24.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:24 np0005634017 podman[352719]: 2026-02-28 10:32:24.309355842 +0000 UTC m=+1.059930245 container remove c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:32:24 np0005634017 systemd[1]: libpod-conmon-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope: Deactivated successfully.
Feb 28 05:32:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:32:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:32:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:32:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:32:24 np0005634017 nova_compute[243452]: 2026-02-28 10:32:24.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 28 05:32:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:32:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:32:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 5.0 KiB/s wr, 0 op/s
Feb 28 05:32:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Feb 28 05:32:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:32:29
Feb 28 05:32:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:32:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:32:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb 28 05:32:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:32:29 np0005634017 nova_compute[243452]: 2026-02-28 10:32:29.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:29 np0005634017 nova_compute[243452]: 2026-02-28 10:32:29.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:29 np0005634017 nova_compute[243452]: 2026-02-28 10:32:29.547 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.119 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.120 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.145 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.352 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.353 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.366 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.366 243456 INFO nova.compute.claims [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:32:30 np0005634017 nova_compute[243452]: 2026-02-28 10:32:30.572 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:32:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:32:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:32:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517754547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.146 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.155 243456 DEBUG nova.compute.provider_tree [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.176 243456 DEBUG nova.scheduler.client.report [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.209 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.210 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.272 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.272 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.310 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.330 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.411 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.413 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.413 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Creating image(s)#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.444 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.477 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.503 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.508 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.587 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.588 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.588 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.589 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.612 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.617 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.735 243456 DEBUG nova.policy [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.850 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:31 np0005634017 nova_compute[243452]: 2026-02-28 10:32:31.941 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:32:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:32.001 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.030 243456 DEBUG nova.objects.instance [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.051 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.052 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Ensure instance console log exists: /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.052 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.053 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.053 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:32 np0005634017 nova_compute[243452]: 2026-02-28 10:32:32.631 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Successfully created port: 9beeb630-5801-426b-8ae3-6d7b49d83ebe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:32:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 263 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.4 MiB/s wr, 2 op/s
Feb 28 05:32:33 np0005634017 podman[353047]: 2026-02-28 10:32:33.131553909 +0000 UTC m=+0.064703124 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:32:33 np0005634017 podman[353046]: 2026-02-28 10:32:33.159761098 +0000 UTC m=+0.090944087 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 28 05:32:33 np0005634017 nova_compute[243452]: 2026-02-28 10:32:33.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:33 np0005634017 nova_compute[243452]: 2026-02-28 10:32:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.226 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Successfully updated port: 9beeb630-5801-426b-8ae3-6d7b49d83ebe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.255 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.255 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.256 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.402 243456 DEBUG nova.compute.manager [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-changed-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.403 243456 DEBUG nova.compute.manager [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Refreshing instance network info cache due to event network-changed-9beeb630-5801-426b-8ae3-6d7b49d83ebe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.403 243456 DEBUG oslo_concurrency.lockutils [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.549 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:34 np0005634017 nova_compute[243452]: 2026-02-28 10:32:34.602 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:32:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 275 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 1.7 MiB/s wr, 4 op/s
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.906 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updating instance_info_cache with network_info: [{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.952 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.953 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance network_info: |[{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.953 243456 DEBUG oslo_concurrency.lockutils [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.954 243456 DEBUG nova.network.neutron [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Refreshing network info cache for port 9beeb630-5801-426b-8ae3-6d7b49d83ebe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.959 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start _get_guest_xml network_info=[{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.966 243456 WARNING nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.972 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.973 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.980 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.980 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.981 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.981 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.983 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.983 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.983 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.984 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.984 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:32:35 np0005634017 nova_compute[243452]: 2026-02-28 10:32:35.987 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:32:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368257348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:32:36 np0005634017 nova_compute[243452]: 2026-02-28 10:32:36.540 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:36 np0005634017 nova_compute[243452]: 2026-02-28 10:32:36.579 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:36 np0005634017 nova_compute[243452]: 2026-02-28 10:32:36.585 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 279 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:32:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:32:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67345559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.165 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.168 243456 DEBUG nova.virt.libvirt.vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1011796007',display_name='tempest-TestNetworkBasicOps-server-1011796007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1011796007',id=124,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGaI6GmFiY2M2XWv8n7VZfco5jNR9UsWFQgG0FSFHFzEznnh5xFgYcRiJcUqa2jmfAlISnzEuMLhfEtiHn6OYhFIRroUvBHFYJa82ZAB9Pz8Xnj494OEdbr2ujYWW7kOeQ==',key_name='tempest-TestNetworkBasicOps-243906406',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-yb1lwevu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:31Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.169 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.170 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.172 243456 DEBUG nova.objects.instance [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.198 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <uuid>8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc</uuid>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <name>instance-0000007c</name>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1011796007</nova:name>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:32:35</nova:creationTime>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <nova:port uuid="9beeb630-5801-426b-8ae3-6d7b49d83ebe">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <entry name="serial">8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc</entry>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <entry name="uuid">8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc</entry>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:1d:37:36"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <target dev="tap9beeb630-58"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/console.log" append="off"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:32:37 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:32:37 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:32:37 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:32:37 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.199 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Preparing to wait for external event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.200 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.200 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.200 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.201 243456 DEBUG nova.virt.libvirt.vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1011796007',display_name='tempest-TestNetworkBasicOps-server-1011796007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1011796007',id=124,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGaI6GmFiY2M2XWv8n7VZfco5jNR9UsWFQgG0FSFHFzEznnh5xFgYcRiJcUqa2jmfAlISnzEuMLhfEtiHn6OYhFIRroUvBHFYJa82ZAB9Pz8Xnj494OEdbr2ujYWW7kOeQ==',key_name='tempest-TestNetworkBasicOps-243906406',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-yb1lwevu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:31Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.201 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.202 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.203 243456 DEBUG os_vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.203 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.204 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.204 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.209 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9beeb630-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.209 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9beeb630-58, col_values=(('external_ids', {'iface-id': '9beeb630-5801-426b-8ae3-6d7b49d83ebe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:37:36', 'vm-uuid': '8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:32:37 np0005634017 NetworkManager[49805]: <info>  [1772274757.2129] manager: (tap9beeb630-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.220 243456 INFO os_vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58')#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.293 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.294 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.294 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:1d:37:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.295 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Using config drive#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.325 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:37 np0005634017 nova_compute[243452]: 2026-02-28 10:32:37.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.107 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Creating config drive at /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.113 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0bed9d7d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.265 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0bed9d7d" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.302 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.306 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.356 243456 DEBUG nova.network.neutron [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updated VIF entry in instance network info cache for port 9beeb630-5801-426b-8ae3-6d7b49d83ebe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.358 243456 DEBUG nova.network.neutron [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updating instance_info_cache with network_info: [{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.377 243456 DEBUG oslo_concurrency.lockutils [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.456 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.457 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deleting local config drive /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config because it was imported into RBD.#033[00m
Feb 28 05:32:38 np0005634017 kernel: tap9beeb630-58: entered promiscuous mode
Feb 28 05:32:38 np0005634017 NetworkManager[49805]: <info>  [1772274758.5423] manager: (tap9beeb630-58): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Feb 28 05:32:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:38Z|01246|binding|INFO|Claiming lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe for this chassis.
Feb 28 05:32:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:38Z|01247|binding|INFO|9beeb630-5801-426b-8ae3-6d7b49d83ebe: Claiming fa:16:3e:1d:37:36 10.100.0.24
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.559 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:37:36 10.100.0.24'], port_security=['fa:16:3e:1d:37:36 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5856a5a-70ec-4d73-a6be-8929e117dbd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b4133e-8779-4f70-9e9b-cefbe96b1736, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9beeb630-5801-426b-8ae3-6d7b49d83ebe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.561 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9beeb630-5801-426b-8ae3-6d7b49d83ebe in datapath e78b0fff-240f-4279-a0b9-45aaac19e3aa bound to our chassis#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.564 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e78b0fff-240f-4279-a0b9-45aaac19e3aa#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 systemd-udevd[353225]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:32:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:38Z|01248|binding|INFO|Setting lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe ovn-installed in OVS
Feb 28 05:32:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:38Z|01249|binding|INFO|Setting lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe up in Southbound
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.575 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9a52c39f-9d4f-4db5-8b23-28a1ffdb7aa8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.578 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape78b0fff-21 in ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.580 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape78b0fff-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.580 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b63a6b62-d4ee-41d3-93fe-4b871a9189c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.583 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1348784-c350-4065-a951-173d11ff2077]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 systemd-machined[209480]: New machine qemu-157-instance-0000007c.
Feb 28 05:32:38 np0005634017 NetworkManager[49805]: <info>  [1772274758.5931] device (tap9beeb630-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:32:38 np0005634017 NetworkManager[49805]: <info>  [1772274758.5944] device (tap9beeb630-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.597 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa8cc93-9b53-42f1-a61b-654e511cb4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79c9b1d8-fb3d-4ab3-a23c-e2caed1d53a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.659 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1066b3f5-dc3c-4919-baed-919225179c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 NetworkManager[49805]: <info>  [1772274758.6668] manager: (tape78b0fff-20): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.665 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9337e031-20dd-4427-ad41-2585628fe46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 systemd-udevd[353231]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.701 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[93e0d9a7-aba1-47d9-a9e0-79fbb75e5567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.706 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc487a5-571c-4006-bccc-4d3bd1a27f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 NetworkManager[49805]: <info>  [1772274758.7299] device (tape78b0fff-20): carrier: link connected
Feb 28 05:32:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.738 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4f1282-448e-49a4-97dc-1747db489017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59e6814c-23c6-4237-a81a-009f10f957cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape78b0fff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:9e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626601, 'reachable_time': 42603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353260, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.780 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdd2841-2d0f-4182-9766-e999de5febad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:9e4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626601, 'tstamp': 626601}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353261, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.803 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7839ea5f-f7ae-4a4b-b879-4720a6b175ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape78b0fff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:9e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626601, 'reachable_time': 42603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353262, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.848 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9101ea9a-fe6c-4315-b667-bfdf4783a397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.917 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1fce0d-b1d8-4fb7-bff4-1a76e08aee3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.920 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape78b0fff-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.920 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.921 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape78b0fff-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:38 np0005634017 NetworkManager[49805]: <info>  [1772274758.9234] manager: (tape78b0fff-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Feb 28 05:32:38 np0005634017 kernel: tape78b0fff-20: entered promiscuous mode
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.923 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.930 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape78b0fff-20, col_values=(('external_ids', {'iface-id': 'f3d174ad-0a29-4dd5-bb79-a661870a23b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:38Z|01250|binding|INFO|Releasing lport f3d174ad-0a29-4dd5-bb79-a661870a23b4 from this chassis (sb_readonly=0)
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.931 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.935 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e78b0fff-240f-4279-a0b9-45aaac19e3aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e78b0fff-240f-4279-a0b9-45aaac19e3aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:32:38 np0005634017 nova_compute[243452]: 2026-02-28 10:32:38.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c301722-15a5-40ef-9ca2-89dba6ec50c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.938 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-e78b0fff-240f-4279-a0b9-45aaac19e3aa
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/e78b0fff-240f-4279-a0b9-45aaac19e3aa.pid.haproxy
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID e78b0fff-240f-4279-a0b9-45aaac19e3aa
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:32:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.938 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'env', 'PROCESS_TAG=haproxy-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e78b0fff-240f-4279-a0b9-45aaac19e3aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.292 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.296 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:f5:cc 10.100.0.2 2001:db8::f816:3eff:fe51:f5cc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe51:f5cc/64', 'neutron:device_id': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=45460f81-310d-424c-88ee-85e2ae1b7444) old=Port_Binding(mac=['fa:16:3e:51:f5:cc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:32:39 np0005634017 podman[353294]: 2026-02-28 10:32:39.309730177 +0000 UTC m=+0.057696795 container create 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:32:39 np0005634017 systemd[1]: Started libpod-conmon-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca.scope.
Feb 28 05:32:39 np0005634017 podman[353294]: 2026-02-28 10:32:39.276428294 +0000 UTC m=+0.024394912 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:32:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800c74c84193b4781da546456c3a23d2158ddc640ab518b56dc1fd583515e1d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:39 np0005634017 podman[353294]: 2026-02-28 10:32:39.411820969 +0000 UTC m=+0.159787627 container init 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:32:39 np0005634017 podman[353294]: 2026-02-28 10:32:39.415947146 +0000 UTC m=+0.163913764 container start 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:32:39 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : New worker (353357) forked
Feb 28 05:32:39 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : Loading success.
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.462 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274759.4614143, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Started (Lifecycle Event)#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.486 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:32:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.491 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 45460f81-310d-424c-88ee-85e2ae1b7444 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 updated#033[00m
Feb 28 05:32:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.494 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c662211d-11bd-4aa5-95d2-794ccdac29d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.494 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274759.4615698, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:32:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.495 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[789c47b3-9003-445d-8e95-8ae1f6526676]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.495 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.513 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.516 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:32:39 np0005634017 nova_compute[243452]: 2026-02-28 10:32:39.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:32:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011217277279944673 of space, bias 1.0, pg target 0.3365183183983402 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493819121062511 of space, bias 1.0, pg target 0.7481457363187534 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36012762782241e-07 of space, bias 4.0, pg target 0.0008832153153386892 quantized to 16 (current 16)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:32:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:32:42 np0005634017 nova_compute[243452]: 2026-02-28 10:32:42.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.350 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.816 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.817 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.817 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:32:43 np0005634017 nova_compute[243452]: 2026-02-28 10:32:43.818 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:32:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.199 243456 DEBUG nova.compute.manager [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.200 243456 DEBUG oslo_concurrency.lockutils [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.201 243456 DEBUG oslo_concurrency.lockutils [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.201 243456 DEBUG oslo_concurrency.lockutils [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.202 243456 DEBUG nova.compute.manager [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Processing event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.203 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.207 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274764.2071095, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.207 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Resumed (Lifecycle Event)
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.213 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.223 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.224 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.229 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.235 243456 INFO nova.virt.libvirt.driver [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance spawned successfully.
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.236 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.241 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.246 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.266 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.271 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.272 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.273 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.274 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.274 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.275 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.353 243456 INFO nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 12.94 seconds to spawn the instance on the hypervisor.
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.354 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.369 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.370 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.379 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.380 243456 INFO nova.compute.claims [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.467 243456 INFO nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 14.15 seconds to build instance.
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.488 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:32:44 np0005634017 nova_compute[243452]: 2026-02-28 10:32:44.552 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:32:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 384 KiB/s wr, 35 op/s
Feb 28 05:32:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:32:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892046915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.180 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.189 243456 DEBUG nova.compute.provider_tree [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.215 243456 DEBUG nova.scheduler.client.report [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.253 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.254 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.336 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.337 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.356 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.378 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.466 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.468 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.468 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Creating image(s)
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.493 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.527 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:32:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:32:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715526932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:32:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:32:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715526932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.551 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.555 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.625 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.626 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.627 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.627 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.648 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.651 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.841 243456 DEBUG nova.policy [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.886 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:32:45 np0005634017 nova_compute[243452]: 2026-02-28 10:32:45.950 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.038 243456 DEBUG nova.objects.instance [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.115 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.115 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Ensure instance console log exists: /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.116 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.116 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.116 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.309 243456 DEBUG nova.compute.manager [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG oslo_concurrency.lockutils [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG oslo_concurrency.lockutils [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG oslo_concurrency.lockutils [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG nova.compute.manager [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] No waiting events found dispatching network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 WARNING nova.compute.manager [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received unexpected event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe for instance with vm_state active and task_state None.
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.540 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.564 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.564 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.565 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.590 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.590 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.591 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.591 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 05:32:46 np0005634017 nova_compute[243452]: 2026-02-28 10:32:46.592 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:32:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 42 KiB/s wr, 37 op/s
Feb 28 05:32:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:32:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020885134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.202 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.266 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Successfully created port: b811084c-ad80-4e64-904a-d56bd59c9766 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.359 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.359 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.364 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.364 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.545 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.547 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3282MB free_disk=59.92099027894437GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.547 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.547 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.778 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 995ed68f-0189-47b0-b060-6b738468c986 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.778 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.781 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b07b5e80-4820-4ee8-9750-3ee5ddc53519 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.781 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.782 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:32:47 np0005634017 nova_compute[243452]: 2026-02-28 10:32:47.855 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:32:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695953747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.390 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.397 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.414 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.441 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.441 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.662 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Successfully updated port: b811084c-ad80-4e64-904a-d56bd59c9766 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.687 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.687 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.688 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:32:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 298 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 878 KiB/s wr, 76 op/s
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.753 243456 DEBUG nova.compute.manager [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.754 243456 DEBUG nova.compute.manager [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing instance network info cache due to event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.754 243456 DEBUG oslo_concurrency.lockutils [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:48 np0005634017 nova_compute[243452]: 2026-02-28 10:32:48.899 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:32:49 np0005634017 nova_compute[243452]: 2026-02-28 10:32:49.298 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.875 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.893 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.894 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance network_info: |[{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.894 243456 DEBUG oslo_concurrency.lockutils [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.894 243456 DEBUG nova.network.neutron [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.898 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start _get_guest_xml network_info=[{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.903 243456 WARNING nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.908 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.910 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.913 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.914 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.914 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.915 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.915 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.915 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:32:50 np0005634017 nova_compute[243452]: 2026-02-28 10:32:50.921 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:32:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031815688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:32:51 np0005634017 nova_compute[243452]: 2026-02-28 10:32:51.520 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:51 np0005634017 nova_compute[243452]: 2026-02-28 10:32:51.541 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:51 np0005634017 nova_compute[243452]: 2026-02-28 10:32:51.545 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:32:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712000785' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.239 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.240 243456 DEBUG nova.virt.libvirt.vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-870516469',display_name='tempest-TestGettingAddress-server-870516469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-870516469',id=125,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-lcsv3c02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b07b5e80-4820-4ee8-9750-3ee5ddc53519,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.241 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.242 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.243 243456 DEBUG nova.objects.instance [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.265 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <uuid>b07b5e80-4820-4ee8-9750-3ee5ddc53519</uuid>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <name>instance-0000007d</name>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-870516469</nova:name>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:32:50</nova:creationTime>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <nova:port uuid="b811084c-ad80-4e64-904a-d56bd59c9766">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fec3:c4e4" ipVersion="6"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <entry name="serial">b07b5e80-4820-4ee8-9750-3ee5ddc53519</entry>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <entry name="uuid">b07b5e80-4820-4ee8-9750-3ee5ddc53519</entry>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:c3:c4:e4"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <target dev="tapb811084c-ad"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/console.log" append="off"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:32:52 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:32:52 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:32:52 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:32:52 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.267 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Preparing to wait for external event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.267 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.268 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.268 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.269 243456 DEBUG nova.virt.libvirt.vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-870516469',display_name='tempest-TestGettingAddress-server-870516469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-870516469',id=125,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-lcsv3c02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b07b5e80-4820-4ee8-9750-3ee5ddc53519,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.270 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.270 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.271 243456 DEBUG os_vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.272 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.272 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.276 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb811084c-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.277 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb811084c-ad, col_values=(('external_ids', {'iface-id': 'b811084c-ad80-4e64-904a-d56bd59c9766', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:c4:e4', 'vm-uuid': 'b07b5e80-4820-4ee8-9750-3ee5ddc53519'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.278 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:52 np0005634017 NetworkManager[49805]: <info>  [1772274772.2800] manager: (tapb811084c-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.282 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.289 243456 INFO os_vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad')#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.333 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.336 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.337 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:c3:c4:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.337 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Using config drive#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.360 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.924 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Creating config drive at /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config#033[00m
Feb 28 05:32:52 np0005634017 nova_compute[243452]: 2026-02-28 10:32:52.930 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphgn30jv1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.060 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphgn30jv1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.096 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.101 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.241 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.242 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deleting local config drive /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config because it was imported into RBD.#033[00m
Feb 28 05:32:53 np0005634017 NetworkManager[49805]: <info>  [1772274773.2738] manager: (tapb811084c-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Feb 28 05:32:53 np0005634017 kernel: tapb811084c-ad: entered promiscuous mode
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:53Z|01251|binding|INFO|Claiming lport b811084c-ad80-4e64-904a-d56bd59c9766 for this chassis.
Feb 28 05:32:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:53Z|01252|binding|INFO|b811084c-ad80-4e64-904a-d56bd59c9766: Claiming fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.289 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], port_security=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fec3:c4e4/64', 'neutron:device_id': 'b07b5e80-4820-4ee8-9750-3ee5ddc53519', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b811084c-ad80-4e64-904a-d56bd59c9766) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.290 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.291 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b811084c-ad80-4e64-904a-d56bd59c9766 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 bound to our chassis#033[00m
Feb 28 05:32:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:53Z|01253|binding|INFO|Setting lport b811084c-ad80-4e64-904a-d56bd59c9766 ovn-installed in OVS
Feb 28 05:32:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:53Z|01254|binding|INFO|Setting lport b811084c-ad80-4e64-904a-d56bd59c9766 up in Southbound
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.292 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c662211d-11bd-4aa5-95d2-794ccdac29d7#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 systemd-machined[209480]: New machine qemu-158-instance-0000007d.
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.303 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cac05552-43e0-45e2-92b0-7a9eb3a5f277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.304 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc662211d-11 in ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.306 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc662211d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.306 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd037136-a661-4b51-8eba-9d6db5651af6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.307 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b81b10-a6a8-4f86-947e-0a1f99d31d2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.320 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d76572-f00b-4119-865f-7bc14316e25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 systemd-udevd[353736]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.332 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8f205f-a410-4af0-bb42-dc7393ee1870]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 NetworkManager[49805]: <info>  [1772274773.3422] device (tapb811084c-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:32:53 np0005634017 NetworkManager[49805]: <info>  [1772274773.3432] device (tapb811084c-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.356 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa5db51-6740-46d2-a1d2-02352fcbc75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 NetworkManager[49805]: <info>  [1772274773.3741] manager: (tapc662211d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e89b3d9-f42b-4e8c-ab14-592d38bdb959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.412 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[97905aa4-502c-40eb-8dd6-31cbc8c66bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.415 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[67f2f968-5b76-41f9-a842-9b4654618107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 NetworkManager[49805]: <info>  [1772274773.4389] device (tapc662211d-10): carrier: link connected
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.441 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00cfd9a3-0365-4741-b102-48cc87ec900b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.453 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97030e0e-bb2e-4df6-8433-ac1900a5ee23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353766, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.462 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0b9d25-61f0-40fd-8225-bda56fbf0300]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:f5cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628072, 'tstamp': 628072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353767, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.471 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef93853-d497-4d49-9fee-c5a17269ba18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353768, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.498 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf64aaf7-1ca2-402d-9491-99acbc75a4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.552 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[39d0158d-a1ba-43ed-9ce1-cd731c156d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.553 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.553 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.554 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc662211d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.555 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 NetworkManager[49805]: <info>  [1772274773.5561] manager: (tapc662211d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Feb 28 05:32:53 np0005634017 kernel: tapc662211d-10: entered promiscuous mode
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.559 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc662211d-10, col_values=(('external_ids', {'iface-id': '45460f81-310d-424c-88ee-85e2ae1b7444'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:32:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:53Z|01255|binding|INFO|Releasing lport 45460f81-310d-424c-88ee-85e2ae1b7444 from this chassis (sb_readonly=0)
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.560 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 nova_compute[243452]: 2026-02-28 10:32:53.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.570 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c662211d-11bd-4aa5-95d2-794ccdac29d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c662211d-11bd-4aa5-95d2-794ccdac29d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.570 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7723459f-71a1-4306-afb5-080e3e5f2439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.571 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/c662211d-11bd-4aa5-95d2-794ccdac29d7.pid.haproxy
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:32:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.572 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'env', 'PROCESS_TAG=haproxy-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c662211d-11bd-4aa5-95d2-794ccdac29d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:32:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:53 np0005634017 podman[353798]: 2026-02-28 10:32:53.883677661 +0000 UTC m=+0.041236089 container create b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:32:53 np0005634017 systemd[1]: Started libpod-conmon-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541.scope.
Feb 28 05:32:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:32:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccde48f8a36bab7126971be92fbdb0c7788e67982ad3a6ca48727f62f636de5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:32:53 np0005634017 podman[353798]: 2026-02-28 10:32:53.947992403 +0000 UTC m=+0.105550861 container init b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:32:53 np0005634017 podman[353798]: 2026-02-28 10:32:53.952521891 +0000 UTC m=+0.110080319 container start b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:32:53 np0005634017 podman[353798]: 2026-02-28 10:32:53.862877312 +0000 UTC m=+0.020435760 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:32:53 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : New worker (353820) forked
Feb 28 05:32:53 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : Loading success.
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.015 243456 DEBUG nova.compute.manager [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.015 243456 DEBUG oslo_concurrency.lockutils [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.016 243456 DEBUG oslo_concurrency.lockutils [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.016 243456 DEBUG oslo_concurrency.lockutils [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.016 243456 DEBUG nova.compute.manager [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Processing event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.074 243456 DEBUG nova.network.neutron [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated VIF entry in instance network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.074 243456 DEBUG nova.network.neutron [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.090 243456 DEBUG oslo_concurrency.lockutils [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.300 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 98 op/s
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.877 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274774.8770206, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.878 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Started (Lifecycle Event)#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.880 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.883 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.886 243456 INFO nova.virt.libvirt.driver [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance spawned successfully.#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.886 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.901 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.905 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.912 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.912 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.913 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.913 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.914 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.914 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.941 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.942 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274774.878188, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.972 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.975 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274774.8824677, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.975 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.980 243456 INFO nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 9.51 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.981 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.990 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:32:54 np0005634017 nova_compute[243452]: 2026-02-28 10:32:54.992 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:32:55 np0005634017 nova_compute[243452]: 2026-02-28 10:32:55.010 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:32:55 np0005634017 nova_compute[243452]: 2026-02-28 10:32:55.044 243456 INFO nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 10.72 seconds to build instance.#033[00m
Feb 28 05:32:55 np0005634017 nova_compute[243452]: 2026-02-28 10:32:55.064 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:56Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:37:36 10.100.0.24
Feb 28 05:32:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:32:56Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:37:36 10.100.0.24
Feb 28 05:32:56 np0005634017 nova_compute[243452]: 2026-02-28 10:32:56.228 243456 DEBUG nova.compute.manager [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:56 np0005634017 nova_compute[243452]: 2026-02-28 10:32:56.228 243456 DEBUG oslo_concurrency.lockutils [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:56 np0005634017 nova_compute[243452]: 2026-02-28 10:32:56.229 243456 DEBUG oslo_concurrency.lockutils [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:56 np0005634017 nova_compute[243452]: 2026-02-28 10:32:56.229 243456 DEBUG oslo_concurrency.lockutils [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:56 np0005634017 nova_compute[243452]: 2026-02-28 10:32:56.229 243456 DEBUG nova.compute.manager [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] No waiting events found dispatching network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:32:56 np0005634017 nova_compute[243452]: 2026-02-28 10:32:56.230 243456 WARNING nova.compute.manager [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received unexpected event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:32:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 337 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 113 op/s
Feb 28 05:32:57 np0005634017 nova_compute[243452]: 2026-02-28 10:32:57.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:32:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:57.872 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:32:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:32:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:32:57.874 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:32:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 344 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Feb 28 05:32:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:32:58 np0005634017 nova_compute[243452]: 2026-02-28 10:32:58.909 243456 DEBUG nova.compute.manager [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:32:58 np0005634017 nova_compute[243452]: 2026-02-28 10:32:58.909 243456 DEBUG nova.compute.manager [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing instance network info cache due to event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:32:58 np0005634017 nova_compute[243452]: 2026-02-28 10:32:58.910 243456 DEBUG oslo_concurrency.lockutils [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:32:58 np0005634017 nova_compute[243452]: 2026-02-28 10:32:58.910 243456 DEBUG oslo_concurrency.lockutils [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:32:58 np0005634017 nova_compute[243452]: 2026-02-28 10:32:58.910 243456 DEBUG nova.network.neutron [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:32:59 np0005634017 nova_compute[243452]: 2026-02-28 10:32:59.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:00 np0005634017 nova_compute[243452]: 2026-02-28 10:33:00.166 243456 DEBUG nova.network.neutron [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated VIF entry in instance network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:33:00 np0005634017 nova_compute[243452]: 2026-02-28 10:33:00.169 243456 DEBUG nova.network.neutron [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:00 np0005634017 nova_compute[243452]: 2026-02-28 10:33:00.209 243456 DEBUG oslo_concurrency.lockutils [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:33:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.1 MiB/s wr, 164 op/s
Feb 28 05:33:02 np0005634017 nova_compute[243452]: 2026-02-28 10:33:02.282 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 05:33:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:04 np0005634017 podman[353872]: 2026-02-28 10:33:04.127626912 +0000 UTC m=+0.060183836 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 05:33:04 np0005634017 podman[353871]: 2026-02-28 10:33:04.155301176 +0000 UTC m=+0.085941386 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 28 05:33:04 np0005634017 nova_compute[243452]: 2026-02-28 10:33:04.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.434 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.436 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.436 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.436 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.437 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.439 243456 INFO nova.compute.manager [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Terminating instance#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.441 243456 DEBUG nova.compute.manager [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:33:05 np0005634017 kernel: tap9beeb630-58 (unregistering): left promiscuous mode
Feb 28 05:33:05 np0005634017 NetworkManager[49805]: <info>  [1772274785.4891] device (tap9beeb630-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:05Z|01256|binding|INFO|Releasing lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe from this chassis (sb_readonly=0)
Feb 28 05:33:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:05Z|01257|binding|INFO|Setting lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe down in Southbound
Feb 28 05:33:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:05Z|01258|binding|INFO|Removing iface tap9beeb630-58 ovn-installed in OVS
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.503 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:37:36 10.100.0.24'], port_security=['fa:16:3e:1d:37:36 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5856a5a-70ec-4d73-a6be-8929e117dbd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b4133e-8779-4f70-9e9b-cefbe96b1736, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9beeb630-5801-426b-8ae3-6d7b49d83ebe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.506 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9beeb630-5801-426b-8ae3-6d7b49d83ebe in datapath e78b0fff-240f-4279-a0b9-45aaac19e3aa unbound from our chassis#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.508 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e78b0fff-240f-4279-a0b9-45aaac19e3aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.509 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27b4b828-2e0c-4188-84f0-2f4a89d75df6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.509 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa namespace which is not needed anymore#033[00m
Feb 28 05:33:05 np0005634017 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Feb 28 05:33:05 np0005634017 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 12.608s CPU time.
Feb 28 05:33:05 np0005634017 systemd-machined[209480]: Machine qemu-157-instance-0000007c terminated.
Feb 28 05:33:05 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : haproxy version is 2.8.14-c23fe91
Feb 28 05:33:05 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : path to executable is /usr/sbin/haproxy
Feb 28 05:33:05 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [WARNING]  (353354) : Exiting Master process...
Feb 28 05:33:05 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [ALERT]    (353354) : Current worker (353357) exited with code 143 (Terminated)
Feb 28 05:33:05 np0005634017 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [WARNING]  (353354) : All workers exited. Exiting... (0)
Feb 28 05:33:05 np0005634017 systemd[1]: libpod-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca.scope: Deactivated successfully.
Feb 28 05:33:05 np0005634017 podman[353942]: 2026-02-28 10:33:05.650265874 +0000 UTC m=+0.045831449 container died 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca-userdata-shm.mount: Deactivated successfully.
Feb 28 05:33:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-800c74c84193b4781da546456c3a23d2158ddc640ab518b56dc1fd583515e1d0-merged.mount: Deactivated successfully.
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.692 243456 INFO nova.virt.libvirt.driver [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance destroyed successfully.#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.693 243456 DEBUG nova.objects.instance [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:05 np0005634017 podman[353942]: 2026-02-28 10:33:05.697264825 +0000 UTC m=+0.092830430 container cleanup 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:33:05 np0005634017 systemd[1]: libpod-conmon-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca.scope: Deactivated successfully.
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.713 243456 DEBUG nova.virt.libvirt.vif [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1011796007',display_name='tempest-TestNetworkBasicOps-server-1011796007',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1011796007',id=124,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGaI6GmFiY2M2XWv8n7VZfco5jNR9UsWFQgG0FSFHFzEznnh5xFgYcRiJcUqa2jmfAlISnzEuMLhfEtiHn6OYhFIRroUvBHFYJa82ZAB9Pz8Xnj494OEdbr2ujYWW7kOeQ==',key_name='tempest-TestNetworkBasicOps-243906406',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:32:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-yb1lwevu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:32:44Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.714 243456 DEBUG nova.network.os_vif_util [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.715 243456 DEBUG nova.network.os_vif_util [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.715 243456 DEBUG os_vif [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.718 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9beeb630-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.725 243456 INFO os_vif [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58')#033[00m
Feb 28 05:33:05 np0005634017 podman[353979]: 2026-02-28 10:33:05.755005171 +0000 UTC m=+0.039249383 container remove 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.760 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b17b507-6a7d-4c8a-ad2c-2af3491d17e1]: (4, ('Sat Feb 28 10:33:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa (1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca)\n1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca\nSat Feb 28 10:33:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa (1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca)\n1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.762 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[070ff5be-552e-4781-8ca8-193b33311234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.764 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape78b0fff-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 kernel: tape78b0fff-20: left promiscuous mode
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb94d9b4-bf2e-48d3-982b-c7d66ae39962]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.804 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78809cb3-6cd8-4123-8f53-f90c6199f4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e94af009-483c-483f-9ba5-d205d1d9ed55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a76fc1c9-4fd8-4292-8e52-421ab5759115]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626593, 'reachable_time': 44785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354010, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 systemd[1]: run-netns-ovnmeta\x2de78b0fff\x2d240f\x2d4279\x2da0b9\x2d45aaac19e3aa.mount: Deactivated successfully.
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.831 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:33:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.832 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6b058294-000f-4f2b-871e-e69f472cd141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.954 243456 DEBUG nova.compute.manager [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-unplugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.954 243456 DEBUG oslo_concurrency.lockutils [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.955 243456 DEBUG oslo_concurrency.lockutils [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.955 243456 DEBUG oslo_concurrency.lockutils [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.956 243456 DEBUG nova.compute.manager [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] No waiting events found dispatching network-vif-unplugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:05 np0005634017 nova_compute[243452]: 2026-02-28 10:33:05.956 243456 DEBUG nova.compute.manager [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-unplugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:33:06 np0005634017 nova_compute[243452]: 2026-02-28 10:33:06.097 243456 INFO nova.virt.libvirt.driver [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deleting instance files /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_del#033[00m
Feb 28 05:33:06 np0005634017 nova_compute[243452]: 2026-02-28 10:33:06.098 243456 INFO nova.virt.libvirt.driver [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deletion of /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_del complete#033[00m
Feb 28 05:33:06 np0005634017 nova_compute[243452]: 2026-02-28 10:33:06.152 243456 INFO nova.compute.manager [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:33:06 np0005634017 nova_compute[243452]: 2026-02-28 10:33:06.153 243456 DEBUG oslo.service.loopingcall [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:33:06 np0005634017 nova_compute[243452]: 2026-02-28 10:33:06.154 243456 DEBUG nova.compute.manager [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:33:06 np0005634017 nova_compute[243452]: 2026-02-28 10:33:06.154 243456 DEBUG nova.network.neutron [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:33:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 132 op/s
Feb 28 05:33:07 np0005634017 nova_compute[243452]: 2026-02-28 10:33:07.039 243456 DEBUG nova.network.neutron [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:07 np0005634017 nova_compute[243452]: 2026-02-28 10:33:07.061 243456 INFO nova.compute.manager [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 0.91 seconds to deallocate network for instance.#033[00m
Feb 28 05:33:07 np0005634017 nova_compute[243452]: 2026-02-28 10:33:07.185 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:07 np0005634017 nova_compute[243452]: 2026-02-28 10:33:07.185 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:07 np0005634017 nova_compute[243452]: 2026-02-28 10:33:07.604 243456 DEBUG oslo_concurrency.processutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:08Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:c4:e4 10.100.0.12
Feb 28 05:33:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:08Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:c4:e4 10.100.0.12
Feb 28 05:33:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963900522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.131 243456 DEBUG nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.132 243456 DEBUG oslo_concurrency.lockutils [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.132 243456 DEBUG oslo_concurrency.lockutils [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.133 243456 DEBUG oslo_concurrency.lockutils [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.133 243456 DEBUG nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] No waiting events found dispatching network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.133 243456 WARNING nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received unexpected event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.134 243456 DEBUG nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-deleted-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.140 243456 DEBUG oslo_concurrency.processutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.146 243456 DEBUG nova.compute.provider_tree [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.236 243456 DEBUG nova.scheduler.client.report [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.464 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:08 np0005634017 nova_compute[243452]: 2026-02-28 10:33:08.603 243456 INFO nova.scheduler.client.report [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc#033[00m
Feb 28 05:33:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 153 op/s
Feb 28 05:33:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:09 np0005634017 nova_compute[243452]: 2026-02-28 10:33:09.038 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:09 np0005634017 nova_compute[243452]: 2026-02-28 10:33:09.308 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:10 np0005634017 nova_compute[243452]: 2026-02-28 10:33:10.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:33:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9643 writes, 44K keys, 9643 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 9643 writes, 9643 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1491 writes, 7418 keys, 1491 commit groups, 1.0 writes per commit group, ingest: 9.32 MB, 0.02 MB/s#012Interval WAL: 1491 writes, 1491 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.4      0.73              0.13        29    0.025       0      0       0.0       0.0#012  L6      1/0    8.10 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    157.3    132.6      1.77              0.62        28    0.063    159K    15K       0.0       0.0#012 Sum      1/0    8.10 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    111.5    114.7      2.50              0.75        57    0.044    159K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    171.3    169.0      0.53              0.25        16    0.033     56K   4093       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    157.3    132.6      1.77              0.62        28    0.063    159K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.8      0.72              0.13        28    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.051, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.28 GB write, 0.08 MB/s write, 0.27 GB read, 0.08 MB/s read, 2.5 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 30.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000338 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1918,29.00 MB,9.53935%) FilterBlock(58,448.61 KB,0.14411%) IndexBlock(58,782.92 KB,0.251504%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 05:33:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 145 op/s
Feb 28 05:33:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 05:33:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:12Z|01259|binding|INFO|Releasing lport 45460f81-310d-424c-88ee-85e2ae1b7444 from this chassis (sb_readonly=0)
Feb 28 05:33:12 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:12Z|01260|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 05:33:13 np0005634017 nova_compute[243452]: 2026-02-28 10:33:13.009 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.223 243456 DEBUG nova.compute.manager [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-changed-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.223 243456 DEBUG nova.compute.manager [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing instance network info cache due to event network-changed-09f54242-3301-4e2f-b606-8423be606192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.224 243456 DEBUG oslo_concurrency.lockutils [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.224 243456 DEBUG oslo_concurrency.lockutils [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.224 243456 DEBUG nova.network.neutron [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing network info cache for port 09f54242-3301-4e2f-b606-8423be606192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.300 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.301 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.302 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.302 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.302 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.304 243456 INFO nova.compute.manager [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Terminating instance#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.305 243456 DEBUG nova.compute.manager [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 kernel: tap09f54242-33 (unregistering): left promiscuous mode
Feb 28 05:33:14 np0005634017 NetworkManager[49805]: <info>  [1772274794.3584] device (tap09f54242-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:33:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:14Z|01261|binding|INFO|Releasing lport 09f54242-3301-4e2f-b606-8423be606192 from this chassis (sb_readonly=0)
Feb 28 05:33:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:14Z|01262|binding|INFO|Setting lport 09f54242-3301-4e2f-b606-8423be606192 down in Southbound
Feb 28 05:33:14 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:14Z|01263|binding|INFO|Removing iface tap09f54242-33 ovn-installed in OVS
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.369 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.400 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:3a:58 10.100.0.13'], port_security=['fa:16:3e:3a:3a:58 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '995ed68f-0189-47b0-b060-6b738468c986', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11c8a335-c267-4faf-a7d1-f407690da05d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96252fd8-ac35-49bb-9585-1943d9426258', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d69cbf-f4a8-43f3-8231-31d5040383f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09f54242-3301-4e2f-b606-8423be606192) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.402 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09f54242-3301-4e2f-b606-8423be606192 in datapath 11c8a335-c267-4faf-a7d1-f407690da05d unbound from our chassis#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.404 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11c8a335-c267-4faf-a7d1-f407690da05d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a6e82-65b2-43f5-aee1-69b303198961]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.406 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d namespace which is not needed anymore#033[00m
Feb 28 05:33:14 np0005634017 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Feb 28 05:33:14 np0005634017 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 15.896s CPU time.
Feb 28 05:33:14 np0005634017 systemd-machined[209480]: Machine qemu-156-instance-0000007b terminated.
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.535 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.542 243456 INFO nova.virt.libvirt.driver [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance destroyed successfully.#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.543 243456 DEBUG nova.objects.instance [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:14 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : haproxy version is 2.8.14-c23fe91
Feb 28 05:33:14 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : path to executable is /usr/sbin/haproxy
Feb 28 05:33:14 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [WARNING]  (352181) : Exiting Master process...
Feb 28 05:33:14 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [ALERT]    (352181) : Current worker (352183) exited with code 143 (Terminated)
Feb 28 05:33:14 np0005634017 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [WARNING]  (352181) : All workers exited. Exiting... (0)
Feb 28 05:33:14 np0005634017 systemd[1]: libpod-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c.scope: Deactivated successfully.
Feb 28 05:33:14 np0005634017 podman[354057]: 2026-02-28 10:33:14.57868071 +0000 UTC m=+0.059911968 container died 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.579 243456 DEBUG nova.virt.libvirt.vif [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1357951307',display_name='tempest-TestNetworkBasicOps-server-1357951307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1357951307',id=123,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZHc+J8n+8VIElpku7oO13/ribfCNYlkJQgBtlmLu1lYsVW1sBn5Xu9dPl6htrZdrELeWC7ndyh8q80ljjBh3aAz6MIHX6lH85ayvL5wAI984ueXttgABed6q+nVWvTQw==',key_name='tempest-TestNetworkBasicOps-951014656',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:31:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lmudny40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:31:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=995ed68f-0189-47b0-b060-6b738468c986,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.580 243456 DEBUG nova.network.os_vif_util [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.581 243456 DEBUG nova.network.os_vif_util [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.581 243456 DEBUG os_vif [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.583 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09f54242-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.591 243456 INFO os_vif [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33')#033[00m
Feb 28 05:33:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c-userdata-shm.mount: Deactivated successfully.
Feb 28 05:33:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f8bb818805ca7bc9b5cb8f2064ec0e8cd84927ba5cb46b2bac1c983c996bfba5-merged.mount: Deactivated successfully.
Feb 28 05:33:14 np0005634017 podman[354057]: 2026-02-28 10:33:14.649019203 +0000 UTC m=+0.130250431 container cleanup 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true)
Feb 28 05:33:14 np0005634017 systemd[1]: libpod-conmon-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c.scope: Deactivated successfully.
Feb 28 05:33:14 np0005634017 podman[354113]: 2026-02-28 10:33:14.726621281 +0000 UTC m=+0.050329217 container remove 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.734 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2966625a-cda9-459d-98d7-016a34e4a2bd]: (4, ('Sat Feb 28 10:33:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d (8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c)\n8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c\nSat Feb 28 10:33:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d (8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c)\n8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.738 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d72836-54a3-4ee8-9a6e-27c00c53a1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.739 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11c8a335-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:14 np0005634017 kernel: tap11c8a335-c0: left promiscuous mode
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38f0cbc7-db3a-4356-8892-89be305eba6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.764 243456 DEBUG nova.compute.manager [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-unplugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.764 243456 DEBUG oslo_concurrency.lockutils [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.764 243456 DEBUG oslo_concurrency.lockutils [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.765 243456 DEBUG oslo_concurrency.lockutils [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.765 243456 DEBUG nova.compute.manager [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] No waiting events found dispatching network-vif-unplugged-09f54242-3301-4e2f-b606-8423be606192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:14 np0005634017 nova_compute[243452]: 2026-02-28 10:33:14.765 243456 DEBUG nova.compute.manager [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-unplugged-09f54242-3301-4e2f-b606-8423be606192 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.768 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3de8e0-32d0-4f03-9f66-535c3c77014c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f35c95a-296d-4953-a33e-d33e347b192a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.787 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f22f72c-f078-49e0-aade-da36af1d5455]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621950, 'reachable_time': 30240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354129, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:14 np0005634017 systemd[1]: run-netns-ovnmeta\x2d11c8a335\x2dc267\x2d4faf\x2da7d1\x2df407690da05d.mount: Deactivated successfully.
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.803 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:33:14 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.803 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a0453582-9132-4ccc-b50e-c3b9d41194e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.048 243456 INFO nova.virt.libvirt.driver [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deleting instance files /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986_del#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.050 243456 INFO nova.virt.libvirt.driver [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deletion of /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986_del complete#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.113 243456 INFO nova.compute.manager [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.114 243456 DEBUG oslo.service.loopingcall [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.114 243456 DEBUG nova.compute.manager [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.114 243456 DEBUG nova.network.neutron [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.811 243456 DEBUG nova.network.neutron [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated VIF entry in instance network info cache for port 09f54242-3301-4e2f-b606-8423be606192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.812 243456 DEBUG nova.network.neutron [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:15 np0005634017 nova_compute[243452]: 2026-02-28 10:33:15.838 243456 DEBUG oslo_concurrency.lockutils [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.159 243456 DEBUG nova.network.neutron [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.178 243456 INFO nova.compute.manager [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 1.06 seconds to deallocate network for instance.#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.233 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.234 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.332 243456 DEBUG oslo_concurrency.processutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 287 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.2 MiB/s wr, 102 op/s
Feb 28 05:33:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995444784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.910 243456 DEBUG nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.911 243456 DEBUG oslo_concurrency.lockutils [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.912 243456 DEBUG oslo_concurrency.lockutils [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.912 243456 DEBUG oslo_concurrency.lockutils [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.913 243456 DEBUG nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] No waiting events found dispatching network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.913 243456 WARNING nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received unexpected event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 for instance with vm_state deleted and task_state None.
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.914 243456 DEBUG nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-deleted-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.927 243456 DEBUG oslo_concurrency.processutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.935 243456 DEBUG nova.compute.provider_tree [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.955 243456 DEBUG nova.scheduler.client.report [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:33:16 np0005634017 nova_compute[243452]: 2026-02-28 10:33:16.984 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.011 243456 INFO nova.scheduler.client.report [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 995ed68f-0189-47b0-b060-6b738468c986
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.082 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.731 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.731 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.764 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.851 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.853 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.863 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:33:17 np0005634017 nova_compute[243452]: 2026-02-28 10:33:17.863 243456 INFO nova.compute.claims [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.040 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837150722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.653 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.663 243456 DEBUG nova.compute.provider_tree [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.728 243456 DEBUG nova.scheduler.client.report [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:33:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 267 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.786 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.787 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.845 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.845 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:33:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.873 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:33:18 np0005634017 nova_compute[243452]: 2026-02-28 10:33:18.900 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.020 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.022 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.022 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Creating image(s)
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.047 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.073 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.098 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.103 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.145 243456 DEBUG nova.policy [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.188 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.189 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.189 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.189 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.213 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.218 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.497 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.576 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.665 243456 DEBUG nova.objects.instance [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.692 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.692 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Ensure instance console log exists: /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:33:19 np0005634017 nova_compute[243452]: 2026-02-28 10:33:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:33:20 np0005634017 nova_compute[243452]: 2026-02-28 10:33:20.687 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274785.6856453, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:33:20 np0005634017 nova_compute[243452]: 2026-02-28 10:33:20.688 243456 INFO nova.compute.manager [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Stopped (Lifecycle Event)
Feb 28 05:33:20 np0005634017 nova_compute[243452]: 2026-02-28 10:33:20.727 243456 DEBUG nova.compute.manager [None req-105bb2c7-aa48-492b-bd4b-282072091f70 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:33:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 260 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Feb 28 05:33:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 279 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 05:33:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:23Z|01264|binding|INFO|Releasing lport 45460f81-310d-424c-88ee-85e2ae1b7444 from this chassis (sb_readonly=0)
Feb 28 05:33:23 np0005634017 nova_compute[243452]: 2026-02-28 10:33:23.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:33:23 np0005634017 nova_compute[243452]: 2026-02-28 10:33:23.804 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Successfully created port: 22759577-b4b2-4051-bacd-740bbdfcc4b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:33:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:24 np0005634017 nova_compute[243452]: 2026-02-28 10:33:24.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:33:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:24.190 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:33:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:24.191 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 05:33:24 np0005634017 nova_compute[243452]: 2026-02-28 10:33:24.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:33:24 np0005634017 nova_compute[243452]: 2026-02-28 10:33:24.607 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:33:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 05:33:24 np0005634017 nova_compute[243452]: 2026-02-28 10:33:24.921 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Successfully updated port: 22759577-b4b2-4051-bacd-740bbdfcc4b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.036 243456 DEBUG nova.compute.manager [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.037 243456 DEBUG nova.compute.manager [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing instance network info cache due to event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.038 243456 DEBUG oslo_concurrency.lockutils [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.038 243456 DEBUG oslo_concurrency.lockutils [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.039 243456 DEBUG nova.network.neutron [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.042 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.725972739 +0000 UTC m=+0.068538772 container create 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:33:25 np0005634017 systemd[1]: Started libpod-conmon-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope.
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.701379563 +0000 UTC m=+0.043945666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:33:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.821108114 +0000 UTC m=+0.163674157 container init 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.82980751 +0000 UTC m=+0.172373543 container start 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.833751862 +0000 UTC m=+0.176317925 container attach 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:33:25 np0005634017 frosty_torvalds[354504]: 167 167
Feb 28 05:33:25 np0005634017 systemd[1]: libpod-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope: Deactivated successfully.
Feb 28 05:33:25 np0005634017 conmon[354504]: conmon 638857c5a83e1c43585a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope/container/memory.events
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.8390022 +0000 UTC m=+0.181568223 container died 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:33:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-98d9f64a7ec5ef7638d93e9e8469255ec11a831a2564f58afee95fb5190d2ce4-merged.mount: Deactivated successfully.
Feb 28 05:33:25 np0005634017 nova_compute[243452]: 2026-02-28 10:33:25.866 243456 DEBUG nova.network.neutron [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:33:25 np0005634017 podman[354488]: 2026-02-28 10:33:25.882964816 +0000 UTC m=+0.225530849 container remove 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:33:25 np0005634017 systemd[1]: libpod-conmon-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope: Deactivated successfully.
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:33:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.027372796 +0000 UTC m=+0.049869813 container create 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:33:26 np0005634017 systemd[1]: Started libpod-conmon-9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584.scope.
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.005805895 +0000 UTC m=+0.028302942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:33:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.136981671 +0000 UTC m=+0.159478708 container init 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.14929528 +0000 UTC m=+0.171792277 container start 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.153712305 +0000 UTC m=+0.176209392 container attach 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:33:26 np0005634017 nova_compute[243452]: 2026-02-28 10:33:26.315 243456 DEBUG nova.network.neutron [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:26 np0005634017 nova_compute[243452]: 2026-02-28 10:33:26.334 243456 DEBUG oslo_concurrency.lockutils [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:26 np0005634017 nova_compute[243452]: 2026-02-28 10:33:26.335 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:26 np0005634017 nova_compute[243452]: 2026-02-28 10:33:26.335 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:33:26 np0005634017 nova_compute[243452]: 2026-02-28 10:33:26.484 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:33:26 np0005634017 adoring_lehmann[354543]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:33:26 np0005634017 adoring_lehmann[354543]: --> All data devices are unavailable
Feb 28 05:33:26 np0005634017 systemd[1]: libpod-9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584.scope: Deactivated successfully.
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.713696598 +0000 UTC m=+0.736193635 container died 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 05:33:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6-merged.mount: Deactivated successfully.
Feb 28 05:33:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 05:33:26 np0005634017 podman[354527]: 2026-02-28 10:33:26.770936289 +0000 UTC m=+0.793433326 container remove 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:33:26 np0005634017 systemd[1]: libpod-conmon-9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584.scope: Deactivated successfully.
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.283513539 +0000 UTC m=+0.062957274 container create dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:33:27 np0005634017 systemd[1]: Started libpod-conmon-dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb.scope.
Feb 28 05:33:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.262328279 +0000 UTC m=+0.041772004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.372665965 +0000 UTC m=+0.152109740 container init dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.379872609 +0000 UTC m=+0.159316314 container start dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:33:27 np0005634017 zen_agnesi[354654]: 167 167
Feb 28 05:33:27 np0005634017 systemd[1]: libpod-dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb.scope: Deactivated successfully.
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.38485987 +0000 UTC m=+0.164303575 container attach dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.385497368 +0000 UTC m=+0.164941093 container died dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:33:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7e17fb3de6d6b88850aa2ae2d57f070972a5c5454498ed2491b43250982aae8f-merged.mount: Deactivated successfully.
Feb 28 05:33:27 np0005634017 podman[354638]: 2026-02-28 10:33:27.428520017 +0000 UTC m=+0.207963742 container remove dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:33:27 np0005634017 systemd[1]: libpod-conmon-dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb.scope: Deactivated successfully.
Feb 28 05:33:27 np0005634017 podman[354681]: 2026-02-28 10:33:27.633107192 +0000 UTC m=+0.063850740 container create 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:33:27 np0005634017 systemd[1]: Started libpod-conmon-0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3.scope.
Feb 28 05:33:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:27 np0005634017 podman[354681]: 2026-02-28 10:33:27.604834211 +0000 UTC m=+0.035577779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:27 np0005634017 podman[354681]: 2026-02-28 10:33:27.725635772 +0000 UTC m=+0.156379360 container init 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:33:27 np0005634017 podman[354681]: 2026-02-28 10:33:27.735583594 +0000 UTC m=+0.166327152 container start 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:33:27 np0005634017 podman[354681]: 2026-02-28 10:33:27.740246156 +0000 UTC m=+0.170989764 container attach 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]: {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:    "0": [
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:        {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "devices": [
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "/dev/loop3"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            ],
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_name": "ceph_lv0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_size": "21470642176",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "name": "ceph_lv0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "tags": {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cluster_name": "ceph",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.crush_device_class": "",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.encrypted": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.objectstore": "bluestore",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osd_id": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.type": "block",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.vdo": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.with_tpm": "0"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            },
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "type": "block",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "vg_name": "ceph_vg0"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:        }
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:    ],
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:    "1": [
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:        {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "devices": [
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "/dev/loop4"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            ],
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_name": "ceph_lv1",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_size": "21470642176",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "name": "ceph_lv1",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "tags": {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cluster_name": "ceph",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.crush_device_class": "",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.encrypted": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.objectstore": "bluestore",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osd_id": "1",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.type": "block",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.vdo": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.with_tpm": "0"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            },
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "type": "block",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "vg_name": "ceph_vg1"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:        }
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:    ],
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:    "2": [
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:        {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "devices": [
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "/dev/loop5"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            ],
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_name": "ceph_lv2",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_size": "21470642176",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "name": "ceph_lv2",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "tags": {
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.cluster_name": "ceph",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.crush_device_class": "",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.encrypted": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.objectstore": "bluestore",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osd_id": "2",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.type": "block",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.vdo": "0",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:                "ceph.with_tpm": "0"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            },
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "type": "block",
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:            "vg_name": "ceph_vg2"
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:        }
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]:    ]
Feb 28 05:33:28 np0005634017 quizzical_hellman[354698]: }
Feb 28 05:33:28 np0005634017 systemd[1]: libpod-0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3.scope: Deactivated successfully.
Feb 28 05:33:28 np0005634017 podman[354681]: 2026-02-28 10:33:28.075229705 +0000 UTC m=+0.505973283 container died 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:33:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e-merged.mount: Deactivated successfully.
Feb 28 05:33:28 np0005634017 podman[354681]: 2026-02-28 10:33:28.131601022 +0000 UTC m=+0.562344580 container remove 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:33:28 np0005634017 systemd[1]: libpod-conmon-0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3.scope: Deactivated successfully.
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.539 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.579 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.580 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance network_info: |[{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.585 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start _get_guest_xml network_info=[{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.594 243456 WARNING nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.602 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.604 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.610 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.611 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.612 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.612 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.613 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.614 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.614 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.614 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.615 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.615 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.616 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.616 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.617 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.618 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:33:28 np0005634017 nova_compute[243452]: 2026-02-28 10:33:28.623 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.637629487 +0000 UTC m=+0.052883299 container create 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:33:28 np0005634017 systemd[1]: Started libpod-conmon-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope.
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.60949547 +0000 UTC m=+0.024749352 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:33:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.737806554 +0000 UTC m=+0.153060416 container init 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.743367902 +0000 UTC m=+0.158621674 container start 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:33:28 np0005634017 gracious_hermann[354798]: 167 167
Feb 28 05:33:28 np0005634017 systemd[1]: libpod-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope: Deactivated successfully.
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.748712363 +0000 UTC m=+0.163966195 container attach 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 05:33:28 np0005634017 conmon[354798]: conmon 23295b076556bbd44c56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope/container/memory.events
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.749448854 +0000 UTC m=+0.164702626 container died 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:33:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Feb 28 05:33:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a5c16c639e2a1c79793abec9675afe4698be30853a91e54849e1e2e93fc91b0c-merged.mount: Deactivated successfully.
Feb 28 05:33:28 np0005634017 podman[354781]: 2026-02-28 10:33:28.804833623 +0000 UTC m=+0.220087405 container remove 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:33:28 np0005634017 systemd[1]: libpod-conmon-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope: Deactivated successfully.
Feb 28 05:33:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:28 np0005634017 podman[354841]: 2026-02-28 10:33:28.973147521 +0000 UTC m=+0.052059826 container create 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:33:29 np0005634017 systemd[1]: Started libpod-conmon-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope.
Feb 28 05:33:29 np0005634017 podman[354841]: 2026-02-28 10:33:28.9480504 +0000 UTC m=+0.026962725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:33:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:29 np0005634017 podman[354841]: 2026-02-28 10:33:29.076135138 +0000 UTC m=+0.155047523 container init 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:33:29 np0005634017 podman[354841]: 2026-02-28 10:33:29.083423895 +0000 UTC m=+0.162336240 container start 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 28 05:33:29 np0005634017 podman[354841]: 2026-02-28 10:33:29.086916964 +0000 UTC m=+0.165829359 container attach 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:33:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:33:29
Feb 28 05:33:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:33:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:33:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log']
Feb 28 05:33:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:33:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:33:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648422724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.184 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.217 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.224 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.542 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274794.5411866, 995ed68f-0189-47b0-b060-6b738468c986 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.543 243456 INFO nova.compute.manager [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.570 243456 DEBUG nova.compute.manager [None req-9eda41a0-f359-4023-aa0c-e74067ce366d - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:33:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031951798' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.743 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.745 243456 DEBUG nova.virt.libvirt.vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-994916245',display_name='tempest-TestGettingAddress-server-994916245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-994916245',id=126,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6ygaahaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.746 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.747 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.748 243456 DEBUG nova.objects.instance [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:29 np0005634017 lvm[354981]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:33:29 np0005634017 lvm[354981]: VG ceph_vg1 finished
Feb 28 05:33:29 np0005634017 lvm[354980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:33:29 np0005634017 lvm[354980]: VG ceph_vg0 finished
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.764 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <uuid>f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989</uuid>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <name>instance-0000007e</name>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-994916245</nova:name>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:33:28</nova:creationTime>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <nova:port uuid="22759577-b4b2-4051-bacd-740bbdfcc4b4">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6e:a394" ipVersion="6"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <entry name="serial">f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989</entry>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <entry name="uuid">f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989</entry>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:6e:a3:94"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <target dev="tap22759577-b4"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/console.log" append="off"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:33:29 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:33:29 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:33:29 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:33:29 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Preparing to wait for external event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.766 243456 DEBUG nova.virt.libvirt.vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-994916245',display_name='tempest-TestGettingAddress-server-994916245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-994916245',id=126,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6ygaahaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.766 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.767 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.767 243456 DEBUG os_vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.768 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.769 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.773 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22759577-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.773 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22759577-b4, col_values=(('external_ids', {'iface-id': '22759577-b4b2-4051-bacd-740bbdfcc4b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:a3:94', 'vm-uuid': 'f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:29 np0005634017 NetworkManager[49805]: <info>  [1772274809.7768] manager: (tap22759577-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Feb 28 05:33:29 np0005634017 lvm[354983]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:33:29 np0005634017 lvm[354983]: VG ceph_vg2 finished
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.784 243456 INFO os_vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4')#033[00m
Feb 28 05:33:29 np0005634017 lvm[354986]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:33:29 np0005634017 lvm[354986]: VG ceph_vg0 finished
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.845 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.846 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.847 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:6e:a3:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:33:29 np0005634017 nice_dhawan[354858]: {}
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.847 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Using config drive#033[00m
Feb 28 05:33:29 np0005634017 nova_compute[243452]: 2026-02-28 10:33:29.876 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:29 np0005634017 systemd[1]: libpod-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope: Deactivated successfully.
Feb 28 05:33:29 np0005634017 podman[354841]: 2026-02-28 10:33:29.885903487 +0000 UTC m=+0.964815822 container died 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 28 05:33:29 np0005634017 systemd[1]: libpod-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope: Consumed 1.255s CPU time.
Feb 28 05:33:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9-merged.mount: Deactivated successfully.
Feb 28 05:33:29 np0005634017 podman[354841]: 2026-02-28 10:33:29.938541548 +0000 UTC m=+1.017453863 container remove 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:33:29 np0005634017 systemd[1]: libpod-conmon-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope: Deactivated successfully.
Feb 28 05:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:33:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:33:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:33:30 np0005634017 nova_compute[243452]: 2026-02-28 10:33:30.281 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Creating config drive at /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config#033[00m
Feb 28 05:33:30 np0005634017 nova_compute[243452]: 2026-02-28 10:33:30.291 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf9eqbc1b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:33:30 np0005634017 nova_compute[243452]: 2026-02-28 10:33:30.438 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf9eqbc1b" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:30 np0005634017 nova_compute[243452]: 2026-02-28 10:33:30.478 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:30 np0005634017 nova_compute[243452]: 2026-02-28 10:33:30.484 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:33:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:33:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:33:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.104 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.105 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deleting local config drive /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config because it was imported into RBD.#033[00m
Feb 28 05:33:31 np0005634017 kernel: tap22759577-b4: entered promiscuous mode
Feb 28 05:33:31 np0005634017 NetworkManager[49805]: <info>  [1772274811.1911] manager: (tap22759577-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Feb 28 05:33:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:31Z|01265|binding|INFO|Claiming lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 for this chassis.
Feb 28 05:33:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:31Z|01266|binding|INFO|22759577-b4b2-4051-bacd-740bbdfcc4b4: Claiming fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:31 np0005634017 systemd-udevd[354982]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.203 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], port_security=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe6e:a394/64', 'neutron:device_id': 'f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=22759577-b4b2-4051-bacd-740bbdfcc4b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:33:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:31Z|01267|binding|INFO|Setting lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 ovn-installed in OVS
Feb 28 05:33:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:31Z|01268|binding|INFO|Setting lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 up in Southbound
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.205 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 22759577-b4b2-4051-bacd-740bbdfcc4b4 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 bound to our chassis#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.207 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c662211d-11bd-4aa5-95d2-794ccdac29d7#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:31 np0005634017 NetworkManager[49805]: <info>  [1772274811.2179] device (tap22759577-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:33:31 np0005634017 NetworkManager[49805]: <info>  [1772274811.2187] device (tap22759577-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c94e7037-fa80-4095-9b3c-da888f13eabc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:31 np0005634017 systemd-machined[209480]: New machine qemu-159-instance-0000007e.
Feb 28 05:33:31 np0005634017 systemd[1]: Started Virtual Machine qemu-159-instance-0000007e.
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.258 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6e339f-d6e9-49fd-904c-fd73d2708c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b477214d-0c0a-4506-97ed-16a962e909cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.291 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[68bd3a2e-e78b-4edb-94cd-8e9139752c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.310 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61c0ef5a-bbc2-4d88-9bc6-0b6131891f38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355112, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eac6621a-bf68-4d32-af6c-b6f7bce957dd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628080, 'tstamp': 628080}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355113, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628083, 'tstamp': 628083}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355113, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.337 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.340 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc662211d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.340 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.341 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc662211d-10, col_values=(('external_ids', {'iface-id': '45460f81-310d-424c-88ee-85e2ae1b7444'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.341 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.593 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274811.5929613, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.593 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Started (Lifecycle Event)#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.628 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.633 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274811.5931377, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.633 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.655 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.659 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.685 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.936 243456 DEBUG nova.compute.manager [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.937 243456 DEBUG oslo_concurrency.lockutils [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.937 243456 DEBUG oslo_concurrency.lockutils [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.937 243456 DEBUG oslo_concurrency.lockutils [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.938 243456 DEBUG nova.compute.manager [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Processing event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.938 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.942 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274811.9419978, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.946 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.951 243456 INFO nova.virt.libvirt.driver [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance spawned successfully.#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.951 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.965 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.968 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.976 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.976 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.977 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.977 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.977 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.978 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:31 np0005634017 nova_compute[243452]: 2026-02-28 10:33:31.985 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:33:32 np0005634017 nova_compute[243452]: 2026-02-28 10:33:32.046 243456 INFO nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 13.02 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:33:32 np0005634017 nova_compute[243452]: 2026-02-28 10:33:32.046 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:32 np0005634017 nova_compute[243452]: 2026-02-28 10:33:32.110 243456 INFO nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 14.29 seconds to build instance.#033[00m
Feb 28 05:33:32 np0005634017 nova_compute[243452]: 2026-02-28 10:33:32.126 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:32 np0005634017 nova_compute[243452]: 2026-02-28 10:33:32.192 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:32.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 991 KiB/s wr, 4 op/s
Feb 28 05:33:33 np0005634017 nova_compute[243452]: 2026-02-28 10:33:33.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.034 243456 DEBUG nova.compute.manager [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG oslo_concurrency.lockutils [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG oslo_concurrency.lockutils [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG oslo_concurrency.lockutils [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG nova.compute.manager [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] No waiting events found dispatching network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.036 243456 WARNING nova.compute.manager [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received unexpected event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 12 KiB/s wr, 16 op/s
Feb 28 05:33:34 np0005634017 nova_compute[243452]: 2026-02-28 10:33:34.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:35 np0005634017 nova_compute[243452]: 2026-02-28 10:33:35.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:35 np0005634017 podman[355159]: 2026-02-28 10:33:35.152499983 +0000 UTC m=+0.078253577 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 05:33:35 np0005634017 podman[355158]: 2026-02-28 10:33:35.190549641 +0000 UTC m=+0.121825402 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:33:36 np0005634017 nova_compute[243452]: 2026-02-28 10:33:36.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 12 KiB/s wr, 43 op/s
Feb 28 05:33:37 np0005634017 nova_compute[243452]: 2026-02-28 10:33:37.114 243456 DEBUG nova.compute.manager [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:37 np0005634017 nova_compute[243452]: 2026-02-28 10:33:37.115 243456 DEBUG nova.compute.manager [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing instance network info cache due to event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:33:37 np0005634017 nova_compute[243452]: 2026-02-28 10:33:37.115 243456 DEBUG oslo_concurrency.lockutils [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:37 np0005634017 nova_compute[243452]: 2026-02-28 10:33:37.116 243456 DEBUG oslo_concurrency.lockutils [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:37 np0005634017 nova_compute[243452]: 2026-02-28 10:33:37.116 243456 DEBUG nova.network.neutron [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:33:37 np0005634017 nova_compute[243452]: 2026-02-28 10:33:37.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:33:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.172 243456 DEBUG nova.network.neutron [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updated VIF entry in instance network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.173 243456 DEBUG nova.network.neutron [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.197 243456 DEBUG oslo_concurrency.lockutils [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:33:39 np0005634017 nova_compute[243452]: 2026-02-28 10:33:39.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011215138390341052 of space, bias 1.0, pg target 0.3364541517102316 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937829631624833 of space, bias 1.0, pg target 0.748134888948745 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.354693851519385e-07 of space, bias 4.0, pg target 0.0008825632621823263 quantized to 16 (current 16)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:33:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.809 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.809 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.844 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.946 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.947 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.955 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:33:41 np0005634017 nova_compute[243452]: 2026-02-28 10:33:41.956 243456 INFO nova.compute.claims [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.115 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000893372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.698 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.705 243456 DEBUG nova.compute.provider_tree [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.730 243456 DEBUG nova.scheduler.client.report [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.759 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.761 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:33:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 279 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 313 KiB/s wr, 81 op/s
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.843 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.844 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.873 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:33:42 np0005634017 nova_compute[243452]: 2026-02-28 10:33:42.910 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.010 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.012 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.013 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Creating image(s)#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.039 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.066 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.089 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.093 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.138 243456 DEBUG nova.policy [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.182 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.187 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.188 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.189 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.220 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.224 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.460 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.540 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.540 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.541 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.542 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.553 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.650 243456 DEBUG nova.objects.instance [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.673 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.673 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Ensure instance console log exists: /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.674 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.674 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:43 np0005634017 nova_compute[243452]: 2026-02-28 10:33:43.674 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.149 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully created port: 37a6ff99-c79f-4d1f-8384-b2117545bacf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:44Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:a3:94 10.100.0.14
Feb 28 05:33:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:44Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:a3:94 10.100.0.14
Feb 28 05:33:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 313 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.864 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully updated port: 37a6ff99-c79f-4d1f-8384-b2117545bacf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.885 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.885 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.886 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.957 243456 DEBUG nova.compute.manager [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.957 243456 DEBUG nova.compute.manager [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:33:44 np0005634017 nova_compute[243452]: 2026-02-28 10:33:44.958 243456 DEBUG oslo_concurrency.lockutils [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.052 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.319 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.371 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.372 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.373 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.373 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.374 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:33:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2846806603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:33:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:33:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2846806603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:33:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939079276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:45 np0005634017 nova_compute[243452]: 2026-02-28 10:33:45.980 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.076 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.077 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.251 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.90014861244708GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.252 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.350 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b07b5e80-4820-4ee8-9750-3ee5ddc53519 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.455 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.499 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.527 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.528 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance network_info: |[{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.529 243456 DEBUG oslo_concurrency.lockutils [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.530 243456 DEBUG nova.network.neutron [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.535 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start _get_guest_xml network_info=[{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.541 243456 WARNING nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.545 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.546 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.550 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.551 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.551 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.552 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.553 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.553 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.554 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.554 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.555 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.555 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.555 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.556 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.556 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.557 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:33:46 np0005634017 nova_compute[243452]: 2026-02-28 10:33:46.563 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 340 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 134 op/s
Feb 28 05:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958435207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.059 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.065 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100237635' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.090 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.095 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.123 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.129 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.168 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.169 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:33:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186315616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.634 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.636 243456 DEBUG nova.virt.libvirt.vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:42Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.637 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.638 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.639 243456 DEBUG nova.objects.instance [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.662 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <name>instance-0000007f</name>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:33:46</nova:creationTime>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <entry name="serial">4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <entry name="uuid">4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e1:c3:a3"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <target dev="tap37a6ff99-c7"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log" append="off"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:33:47 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:33:47 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:33:47 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:33:47 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.663 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Preparing to wait for external event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.663 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.663 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.664 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.664 243456 DEBUG nova.virt.libvirt.vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:42Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.665 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.666 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.666 243456 DEBUG os_vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.667 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.667 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.671 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37a6ff99-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.672 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37a6ff99-c7, col_values=(('external_ids', {'iface-id': '37a6ff99-c79f-4d1f-8384-b2117545bacf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:c3:a3', 'vm-uuid': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:47 np0005634017 NetworkManager[49805]: <info>  [1772274827.6742] manager: (tap37a6ff99-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.678 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.679 243456 INFO os_vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7')#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.725 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.726 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.726 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e1:c3:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.727 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Using config drive#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.754 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.848 243456 DEBUG nova.network.neutron [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.849 243456 DEBUG nova.network.neutron [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:47 np0005634017 nova_compute[243452]: 2026-02-28 10:33:47.868 243456 DEBUG oslo_concurrency.lockutils [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.063 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Creating config drive at /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.067 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpx6yuhs9s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.211 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpx6yuhs9s" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.243 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.249 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.398 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.400 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deleting local config drive /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config because it was imported into RBD.#033[00m
Feb 28 05:33:48 np0005634017 kernel: tap37a6ff99-c7: entered promiscuous mode
Feb 28 05:33:48 np0005634017 NetworkManager[49805]: <info>  [1772274828.4564] manager: (tap37a6ff99-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Feb 28 05:33:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:48Z|01269|binding|INFO|Claiming lport 37a6ff99-c79f-4d1f-8384-b2117545bacf for this chassis.
Feb 28 05:33:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:48Z|01270|binding|INFO|37a6ff99-c79f-4d1f-8384-b2117545bacf: Claiming fa:16:3e:e1:c3:a3 10.100.0.3
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.458 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.469 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:c3:a3 10.100.0.3'], port_security=['fa:16:3e:e1:c3:a3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0844b2ff-c3dd-41f7-ab33-952597a3bda8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9e841af-c685-47c1-acc4-502d4238e857, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=37a6ff99-c79f-4d1f-8384-b2117545bacf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.471 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 37a6ff99-c79f-4d1f-8384-b2117545bacf in datapath 183ae61b-3b9b-4e1b-a73e-6b7a38731453 bound to our chassis#033[00m
Feb 28 05:33:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:48Z|01271|binding|INFO|Setting lport 37a6ff99-c79f-4d1f-8384-b2117545bacf ovn-installed in OVS
Feb 28 05:33:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:48Z|01272|binding|INFO|Setting lport 37a6ff99-c79f-4d1f-8384-b2117545bacf up in Southbound
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.473 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 183ae61b-3b9b-4e1b-a73e-6b7a38731453#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.481 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5ea435-7f90-4e33-8be8-4f2cf745f98f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.489 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap183ae61b-31 in ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.491 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap183ae61b-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.491 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[25cf20ad-3a1e-437b-89a2-5997daaa04ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.492 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b77b3164-4223-4c2a-b5a2-e6e97e9d65aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 systemd-machined[209480]: New machine qemu-160-instance-0000007f.
Feb 28 05:33:48 np0005634017 systemd-udevd[355570]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.512 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[969550ba-1d61-40ef-aee0-26f3057c9a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 systemd[1]: Started Virtual Machine qemu-160-instance-0000007f.
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.530 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfdd216-f8f4-4935-8b76-9603da146503]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 NetworkManager[49805]: <info>  [1772274828.5329] device (tap37a6ff99-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:33:48 np0005634017 NetworkManager[49805]: <info>  [1772274828.5341] device (tap37a6ff99-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.570 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f46e027c-6250-404f-9167-46de4b904704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.578 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24eaa41c-49ea-47d7-afb2-ec8f95f1bcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 NetworkManager[49805]: <info>  [1772274828.5804] manager: (tap183ae61b-30): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.617 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bd381c89-1dda-4ea7-99ca-a917b6cbcb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.621 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a86abca6-a47b-4c31-becc-61a9ee553ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 NetworkManager[49805]: <info>  [1772274828.6459] device (tap183ae61b-30): carrier: link connected
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.650 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[27701165-8e24-4fa8-b7e3-bfa81106940b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.671 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3df27f88-3f4b-4b09-9990-e03bb9d17f19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap183ae61b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:42:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633593, 'reachable_time': 41175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355601, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.688 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a12384ab-bcf2-47c7-908c-2190bcb7e1db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:4238'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633593, 'tstamp': 633593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355602, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.703 243456 DEBUG nova.compute.manager [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.704 243456 DEBUG oslo_concurrency.lockutils [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.704 243456 DEBUG oslo_concurrency.lockutils [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.705 243456 DEBUG oslo_concurrency.lockutils [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.705 243456 DEBUG nova.compute.manager [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Processing event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b12f8ce5-326b-4e38-a4bd-aeb0390bdf44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap183ae61b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:42:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633593, 'reachable_time': 41175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355603, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3b710c41-b5a7-4e0f-bb28-f2011a3bfb00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee527e50-ff12-46f8-87f6-0bc7aceb65ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap183ae61b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap183ae61b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:48 np0005634017 NetworkManager[49805]: <info>  [1772274828.8238] manager: (tap183ae61b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Feb 28 05:33:48 np0005634017 kernel: tap183ae61b-30: entered promiscuous mode
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.829 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap183ae61b-30, col_values=(('external_ids', {'iface-id': 'dbe8062b-c5f4-44f4-b690-d738c9fe51f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:48Z|01273|binding|INFO|Releasing lport dbe8062b-c5f4-44f4-b690-d738c9fe51f1 from this chassis (sb_readonly=0)
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.832 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/183ae61b-3b9b-4e1b-a73e-6b7a38731453.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/183ae61b-3b9b-4e1b-a73e-6b7a38731453.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.833 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00ac1c63-b852-4ce1-b47e-ed100fae94f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.834 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-183ae61b-3b9b-4e1b-a73e-6b7a38731453
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/183ae61b-3b9b-4e1b-a73e-6b7a38731453.pid.haproxy
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 183ae61b-3b9b-4e1b-a73e-6b7a38731453
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:33:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.835 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'env', 'PROCESS_TAG=haproxy-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/183ae61b-3b9b-4e1b-a73e-6b7a38731453.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:33:48 np0005634017 nova_compute[243452]: 2026-02-28 10:33:48.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.151 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.153 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274829.150857, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.154 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Started (Lifecycle Event)#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.159 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.165 243456 INFO nova.virt.libvirt.driver [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance spawned successfully.#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.165 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.184 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.190 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.195 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.195 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.196 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.196 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.197 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.197 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.226 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274829.1525602, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.226 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:33:49 np0005634017 podman[355677]: 2026-02-28 10:33:49.253856513 +0000 UTC m=+0.070807395 container create 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.274 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.278 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274829.1585624, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.278 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.289 243456 INFO nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 6.28 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.290 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.300 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.303 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:33:49 np0005634017 systemd[1]: Started libpod-conmon-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d.scope.
Feb 28 05:33:49 np0005634017 podman[355677]: 2026-02-28 10:33:49.221354953 +0000 UTC m=+0.038305885 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.331 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:33:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:33:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519415a6e8605ec671aba53f1da4d6a90fb9d621f73f1043083370c3d20e14cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.349 243456 INFO nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 7.43 seconds to build instance.#033[00m
Feb 28 05:33:49 np0005634017 podman[355677]: 2026-02-28 10:33:49.353537737 +0000 UTC m=+0.170488609 container init 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:33:49 np0005634017 podman[355677]: 2026-02-28 10:33:49.360500514 +0000 UTC m=+0.177451356 container start 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:33:49 np0005634017 nova_compute[243452]: 2026-02-28 10:33:49.364 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:49 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : New worker (355698) forked
Feb 28 05:33:49 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : Loading success.
Feb 28 05:33:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Feb 28 05:33:50 np0005634017 nova_compute[243452]: 2026-02-28 10:33:50.790 243456 DEBUG nova.compute.manager [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:50 np0005634017 nova_compute[243452]: 2026-02-28 10:33:50.792 243456 DEBUG oslo_concurrency.lockutils [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:50 np0005634017 nova_compute[243452]: 2026-02-28 10:33:50.793 243456 DEBUG oslo_concurrency.lockutils [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:50 np0005634017 nova_compute[243452]: 2026-02-28 10:33:50.793 243456 DEBUG oslo_concurrency.lockutils [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:50 np0005634017 nova_compute[243452]: 2026-02-28 10:33:50.794 243456 DEBUG nova.compute.manager [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:50 np0005634017 nova_compute[243452]: 2026-02-28 10:33:50.795 243456 WARNING nova.compute.manager [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf for instance with vm_state active and task_state None.#033[00m
Feb 28 05:33:52 np0005634017 nova_compute[243452]: 2026-02-28 10:33:52.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 137 op/s
Feb 28 05:33:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.733 243456 DEBUG nova.compute.manager [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.733 243456 DEBUG nova.compute.manager [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing instance network info cache due to event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.734 243456 DEBUG oslo_concurrency.lockutils [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.734 243456 DEBUG oslo_concurrency.lockutils [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.734 243456 DEBUG nova.network.neutron [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:33:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.805 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.805 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.806 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.807 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.807 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.810 243456 INFO nova.compute.manager [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Terminating instance#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.811 243456 DEBUG nova.compute.manager [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:33:54 np0005634017 kernel: tap22759577-b4 (unregistering): left promiscuous mode
Feb 28 05:33:54 np0005634017 NetworkManager[49805]: <info>  [1772274834.8624] device (tap22759577-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:33:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:54Z|01274|binding|INFO|Releasing lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 from this chassis (sb_readonly=0)
Feb 28 05:33:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:54Z|01275|binding|INFO|Setting lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 down in Southbound
Feb 28 05:33:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:54Z|01276|binding|INFO|Removing iface tap22759577-b4 ovn-installed in OVS
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.888 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], port_security=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe6e:a394/64', 'neutron:device_id': 'f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=22759577-b4b2-4051-bacd-740bbdfcc4b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.889 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 22759577-b4b2-4051-bacd-740bbdfcc4b4 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 unbound from our chassis#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.890 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c662211d-11bd-4aa5-95d2-794ccdac29d7#033[00m
Feb 28 05:33:54 np0005634017 nova_compute[243452]: 2026-02-28 10:33:54.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.911 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef57a7d4-1dff-4e1d-89ef-255053ab7aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:54 np0005634017 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Feb 28 05:33:54 np0005634017 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Consumed 12.455s CPU time.
Feb 28 05:33:54 np0005634017 systemd-machined[209480]: Machine qemu-159-instance-0000007e terminated.
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.944 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[45ffcba4-137e-46e7-81c6-d5f94fa79145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.948 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[764b8bea-66c2-41e4-84b3-8b3d60a8ccf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.976 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[90919e9c-ab63-4796-911b-332861fa117d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.996 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c37c585f-6bc4-4dd9-b0a4-3f6b03fbb2e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355719, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90f83873-32e4-4aac-8bb4-67549bef1318]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628080, 'tstamp': 628080}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355720, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628083, 'tstamp': 628083}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355720, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.028 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.040 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.041 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc662211d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.042 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.043 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc662211d-10, col_values=(('external_ids', {'iface-id': '45460f81-310d-424c-88ee-85e2ae1b7444'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.044 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.063 243456 INFO nova.virt.libvirt.driver [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance destroyed successfully.#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.063 243456 DEBUG nova.objects.instance [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.076 243456 DEBUG nova.virt.libvirt.vif [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-994916245',display_name='tempest-TestGettingAddress-server-994916245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-994916245',id=126,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6ygaahaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:32Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.077 243456 DEBUG nova.network.os_vif_util [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.078 243456 DEBUG nova.network.os_vif_util [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.078 243456 DEBUG os_vif [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.080 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22759577-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.086 243456 INFO os_vif [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4')#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.353 243456 INFO nova.virt.libvirt.driver [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deleting instance files /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_del#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.354 243456 INFO nova.virt.libvirt.driver [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deletion of /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_del complete#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.418 243456 INFO nova.compute.manager [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.419 243456 DEBUG oslo.service.loopingcall [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.419 243456 DEBUG nova.compute.manager [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:33:55 np0005634017 nova_compute[243452]: 2026-02-28 10:33:55.419 243456 DEBUG nova.network.neutron [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.764 243456 DEBUG nova.network.neutron [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.787 243456 INFO nova.compute.manager [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 1.37 seconds to deallocate network for instance.#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.832 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.833 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.836 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.836 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.836 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.837 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.837 243456 DEBUG nova.network.neutron [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.842 243456 DEBUG nova.compute.manager [req-186c8f72-2b3f-4af3-a906-b3b99b4bd21d req-714b3a0f-98c9-465f-98cf-bc7892b37284 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-deleted-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:56 np0005634017 nova_compute[243452]: 2026-02-28 10:33:56.946 243456 DEBUG oslo_concurrency.processutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.102 243456 DEBUG nova.network.neutron [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updated VIF entry in instance network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.103 243456 DEBUG nova.network.neutron [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.134 243456 DEBUG oslo_concurrency.lockutils [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:33:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666230549' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.525 243456 DEBUG oslo_concurrency.processutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.532 243456 DEBUG nova.compute.provider_tree [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.832 243456 DEBUG nova.scheduler.client.report [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.857 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:57.874 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.886 243456 INFO nova.scheduler.client.report [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989#033[00m
Feb 28 05:33:57 np0005634017 nova_compute[243452]: 2026-02-28 10:33:57.958 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.758 243456 DEBUG nova.network.neutron [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.759 243456 DEBUG nova.network.neutron [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.782 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.782 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-unplugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] No waiting events found dispatching network-vif-unplugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-unplugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 300 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 517 KiB/s wr, 104 op/s
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.785 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.785 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] No waiting events found dispatching network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:33:58 np0005634017 nova_compute[243452]: 2026-02-28 10:33:58.785 243456 WARNING nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received unexpected event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:33:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.789 243456 DEBUG nova.compute.manager [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.790 243456 DEBUG nova.compute.manager [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing instance network info cache due to event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.790 243456 DEBUG oslo_concurrency.lockutils [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.791 243456 DEBUG oslo_concurrency.lockutils [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.791 243456 DEBUG nova.network.neutron [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.879 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.880 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.881 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.881 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.882 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.883 243456 INFO nova.compute.manager [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Terminating instance#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.885 243456 DEBUG nova.compute.manager [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:33:59 np0005634017 kernel: tapb811084c-ad (unregistering): left promiscuous mode
Feb 28 05:33:59 np0005634017 NetworkManager[49805]: <info>  [1772274839.9372] device (tapb811084c-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:33:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:59Z|01277|binding|INFO|Releasing lport b811084c-ad80-4e64-904a-d56bd59c9766 from this chassis (sb_readonly=0)
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.944 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:59Z|01278|binding|INFO|Setting lport b811084c-ad80-4e64-904a-d56bd59c9766 down in Southbound
Feb 28 05:33:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:33:59Z|01279|binding|INFO|Removing iface tapb811084c-ad ovn-installed in OVS
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:59 np0005634017 nova_compute[243452]: 2026-02-28 10:33:59.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:33:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.957 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], port_security=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fec3:c4e4/64', 'neutron:device_id': 'b07b5e80-4820-4ee8-9750-3ee5ddc53519', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b811084c-ad80-4e64-904a-d56bd59c9766) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:33:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.962 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b811084c-ad80-4e64-904a-d56bd59c9766 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 unbound from our chassis#033[00m
Feb 28 05:33:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c662211d-11bd-4aa5-95d2-794ccdac29d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:33:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eabd2d89-0150-4413-b6e0-6a47c952761d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:33:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.967 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 namespace which is not needed anymore#033[00m
Feb 28 05:34:00 np0005634017 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Feb 28 05:34:00 np0005634017 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 16.371s CPU time.
Feb 28 05:34:00 np0005634017 systemd-machined[209480]: Machine qemu-158-instance-0000007d terminated.
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:00 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : haproxy version is 2.8.14-c23fe91
Feb 28 05:34:00 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : path to executable is /usr/sbin/haproxy
Feb 28 05:34:00 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [WARNING]  (353818) : Exiting Master process...
Feb 28 05:34:00 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [WARNING]  (353818) : Exiting Master process...
Feb 28 05:34:00 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [ALERT]    (353818) : Current worker (353820) exited with code 143 (Terminated)
Feb 28 05:34:00 np0005634017 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [WARNING]  (353818) : All workers exited. Exiting... (0)
Feb 28 05:34:00 np0005634017 systemd[1]: libpod-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541.scope: Deactivated successfully.
Feb 28 05:34:00 np0005634017 NetworkManager[49805]: <info>  [1772274840.1064] manager: (tapb811084c-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/528)
Feb 28 05:34:00 np0005634017 podman[355798]: 2026-02-28 10:34:00.10730861 +0000 UTC m=+0.048686800 container died b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.122 243456 INFO nova.virt.libvirt.driver [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance destroyed successfully.#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.124 243456 DEBUG nova.objects.instance [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541-userdata-shm.mount: Deactivated successfully.
Feb 28 05:34:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3ccde48f8a36bab7126971be92fbdb0c7788e67982ad3a6ca48727f62f636de5-merged.mount: Deactivated successfully.
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.146 243456 DEBUG nova.virt.libvirt.vif [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-870516469',display_name='tempest-TestGettingAddress-server-870516469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-870516469',id=125,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:32:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-lcsv3c02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:32:55Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b07b5e80-4820-4ee8-9750-3ee5ddc53519,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.147 243456 DEBUG nova.network.os_vif_util [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.148 243456 DEBUG nova.network.os_vif_util [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:00 np0005634017 podman[355798]: 2026-02-28 10:34:00.14895842 +0000 UTC m=+0.090336610 container cleanup b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.149 243456 DEBUG os_vif [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.152 243456 DEBUG nova.compute.manager [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-unplugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.153 243456 DEBUG oslo_concurrency.lockutils [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.153 243456 DEBUG oslo_concurrency.lockutils [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.153 243456 DEBUG oslo_concurrency.lockutils [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.154 243456 DEBUG nova.compute.manager [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] No waiting events found dispatching network-vif-unplugged-b811084c-ad80-4e64-904a-d56bd59c9766 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.154 243456 DEBUG nova.compute.manager [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-unplugged-b811084c-ad80-4e64-904a-d56bd59c9766 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.155 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb811084c-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.162 243456 INFO os_vif [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad')#033[00m
Feb 28 05:34:00 np0005634017 systemd[1]: libpod-conmon-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541.scope: Deactivated successfully.
Feb 28 05:34:00 np0005634017 podman[355837]: 2026-02-28 10:34:00.22769121 +0000 UTC m=+0.052840998 container remove b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.232 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dca10a51-ba23-4fbb-aadc-37c7c7ace9b5]: (4, ('Sat Feb 28 10:34:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 (b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541)\nb79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541\nSat Feb 28 10:34:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 (b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541)\nb79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85047c11-33fc-45fe-83a6-903460b2fb93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.236 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:00 np0005634017 kernel: tapc662211d-10: left promiscuous mode
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e19c06-89dd-4edb-bf91-baca1e753549]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[943cd5da-b988-4faa-a432-028a5a01c76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.265 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a1f6f6-3a35-47ec-ac7d-c1b87612fc62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f52a52ff-4376-4f7f-bd96-ed81d3dfb62e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628063, 'reachable_time': 16766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355870, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 systemd[1]: run-netns-ovnmeta\x2dc662211d\x2d11bd\x2d4aa5\x2d95d2\x2d794ccdac29d7.mount: Deactivated successfully.
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.282 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:34:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.283 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3a262ffe-7485-4084-94b6-e566a6c95337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.459 243456 INFO nova.virt.libvirt.driver [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deleting instance files /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519_del#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.461 243456 INFO nova.virt.libvirt.driver [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deletion of /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519_del complete#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.506 243456 INFO nova.compute.manager [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.506 243456 DEBUG oslo.service.loopingcall [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.507 243456 DEBUG nova.compute.manager [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:34:00 np0005634017 nova_compute[243452]: 2026-02-28 10:34:00.507 243456 DEBUG nova.network.neutron [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:34:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 267 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 37 KiB/s wr, 108 op/s
Feb 28 05:34:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:01Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:c3:a3 10.100.0.3
Feb 28 05:34:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:01Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:c3:a3 10.100.0.3
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.410 243456 DEBUG nova.compute.manager [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.411 243456 DEBUG oslo_concurrency.lockutils [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.412 243456 DEBUG oslo_concurrency.lockutils [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.412 243456 DEBUG oslo_concurrency.lockutils [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.413 243456 DEBUG nova.compute.manager [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] No waiting events found dispatching network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.413 243456 WARNING nova.compute.manager [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received unexpected event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:34:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 242 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 367 KiB/s wr, 102 op/s
Feb 28 05:34:02 np0005634017 nova_compute[243452]: 2026-02-28 10:34:02.995 243456 DEBUG nova.network.neutron [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.021 243456 DEBUG nova.compute.manager [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-deleted-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.022 243456 INFO nova.compute.manager [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Neutron deleted interface b811084c-ad80-4e64-904a-d56bd59c9766; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.022 243456 DEBUG nova.network.neutron [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.041 243456 DEBUG nova.network.neutron [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated VIF entry in instance network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.042 243456 DEBUG nova.network.neutron [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.052 243456 INFO nova.compute.manager [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 2.55 seconds to deallocate network for instance.#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.059 243456 DEBUG nova.compute.manager [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Detach interface failed, port_id=b811084c-ad80-4e64-904a-d56bd59c9766, reason: Instance b07b5e80-4820-4ee8-9750-3ee5ddc53519 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.077 243456 DEBUG oslo_concurrency.lockutils [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.112 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.112 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.192 243456 DEBUG oslo_concurrency.processutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:34:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831465108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.795 243456 DEBUG oslo_concurrency.processutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.803 243456 DEBUG nova.compute.provider_tree [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.824 243456 DEBUG nova.scheduler.client.report [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.851 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.887 243456 INFO nova.scheduler.client.report [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance b07b5e80-4820-4ee8-9750-3ee5ddc53519#033[00m
Feb 28 05:34:03 np0005634017 nova_compute[243452]: 2026-02-28 10:34:03.986 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:04 np0005634017 nova_compute[243452]: 2026-02-28 10:34:04.337 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 216 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 128 op/s
Feb 28 05:34:05 np0005634017 nova_compute[243452]: 2026-02-28 10:34:05.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:06 np0005634017 podman[355897]: 2026-02-28 10:34:06.159255004 +0000 UTC m=+0.084247747 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 28 05:34:06 np0005634017 podman[355896]: 2026-02-28 10:34:06.204163347 +0000 UTC m=+0.129605223 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, 
io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 05:34:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Feb 28 05:34:07 np0005634017 nova_compute[243452]: 2026-02-28 10:34:07.795 243456 INFO nova.compute.manager [None req-7fb326aa-9c02-45cd-ba7c-3655b56a971b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Get console output#033[00m
Feb 28 05:34:07 np0005634017 nova_compute[243452]: 2026-02-28 10:34:07.807 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:34:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Feb 28 05:34:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:09 np0005634017 nova_compute[243452]: 2026-02-28 10:34:09.339 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:10 np0005634017 nova_compute[243452]: 2026-02-28 10:34:10.056 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274835.0559974, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:10 np0005634017 nova_compute[243452]: 2026-02-28 10:34:10.057 243456 INFO nova.compute.manager [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:34:10 np0005634017 nova_compute[243452]: 2026-02-28 10:34:10.082 243456 DEBUG nova.compute.manager [None req-d5cefe88-35a1-4b10-806d-3e505ceb9a46 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:10 np0005634017 nova_compute[243452]: 2026-02-28 10:34:10.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:10 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:10Z|01280|binding|INFO|Releasing lport dbe8062b-c5f4-44f4-b690-d738c9fe51f1 from this chassis (sb_readonly=0)
Feb 28 05:34:10 np0005634017 nova_compute[243452]: 2026-02-28 10:34:10.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb 28 05:34:11 np0005634017 nova_compute[243452]: 2026-02-28 10:34:11.304 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:11 np0005634017 nova_compute[243452]: 2026-02-28 10:34:11.305 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:11 np0005634017 nova_compute[243452]: 2026-02-28 10:34:11.306 243456 DEBUG nova.objects.instance [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:11 np0005634017 nova_compute[243452]: 2026-02-28 10:34:11.931 243456 DEBUG nova.objects.instance [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:11 np0005634017 nova_compute[243452]: 2026-02-28 10:34:11.945 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:34:12 np0005634017 nova_compute[243452]: 2026-02-28 10:34:12.110 243456 DEBUG nova.policy [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:34:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Feb 28 05:34:12 np0005634017 nova_compute[243452]: 2026-02-28 10:34:12.988 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully created port: edcb3b03-b894-4abf-96a5-f832e8ee3371 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:34:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.247 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully updated port: edcb3b03-b894-4abf-96a5-f832e8ee3371 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.265 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.266 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.266 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.349 243456 DEBUG nova.compute.manager [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.350 243456 DEBUG nova.compute.manager [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-edcb3b03-b894-4abf-96a5-f832e8ee3371. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:34:14 np0005634017 nova_compute[243452]: 2026-02-28 10:34:14.350 243456 DEBUG oslo_concurrency.lockutils [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 28 05:34:15 np0005634017 nova_compute[243452]: 2026-02-28 10:34:15.120 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274840.1189055, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:15 np0005634017 nova_compute[243452]: 2026-02-28 10:34:15.121 243456 INFO nova.compute.manager [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:34:15 np0005634017 nova_compute[243452]: 2026-02-28 10:34:15.143 243456 DEBUG nova.compute.manager [None req-ffdbded0-5656-47f9-9d93-cde79cbc653c - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:15 np0005634017 nova_compute[243452]: 2026-02-28 10:34:15.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.396 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.417 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.418 243456 DEBUG oslo_concurrency.lockutils [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.419 243456 DEBUG nova.network.neutron [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port edcb3b03-b894-4abf-96a5-f832e8ee3371 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.423 243456 DEBUG nova.virt.libvirt.vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.423 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.424 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.424 243456 DEBUG os_vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.429 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedcb3b03-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.430 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedcb3b03-b8, col_values=(('external_ids', {'iface-id': 'edcb3b03-b894-4abf-96a5-f832e8ee3371', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:6b:1e', 'vm-uuid': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.4328] manager: (tapedcb3b03-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.439 243456 INFO os_vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8')#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.440 243456 DEBUG nova.virt.libvirt.vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.441 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.441 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.444 243456 DEBUG nova.virt.libvirt.guest [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] attach device xml: <interface type="ethernet">
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:64:6b:1e"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <target dev="tapedcb3b03-b8"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:34:16 np0005634017 nova_compute[243452]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 28 05:34:16 np0005634017 kernel: tapedcb3b03-b8: entered promiscuous mode
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.4644] manager: (tapedcb3b03-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Feb 28 05:34:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:16Z|01281|binding|INFO|Claiming lport edcb3b03-b894-4abf-96a5-f832e8ee3371 for this chassis.
Feb 28 05:34:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:16Z|01282|binding|INFO|edcb3b03-b894-4abf-96a5-f832e8ee3371: Claiming fa:16:3e:64:6b:1e 10.100.0.24
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.474 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:6b:1e 10.100.0.24'], port_security=['fa:16:3e:64:6b:1e 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d699467-190a-4754-be38-8dcbc56ed7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d3e50d-7a54-4c37-996e-1d4928f66955, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=edcb3b03-b894-4abf-96a5-f832e8ee3371) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.476 156681 INFO neutron.agent.ovn.metadata.agent [-] Port edcb3b03-b894-4abf-96a5-f832e8ee3371 in datapath 0d699467-190a-4754-be38-8dcbc56ed7da bound to our chassis#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d699467-190a-4754-be38-8dcbc56ed7da#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:16Z|01283|binding|INFO|Setting lport edcb3b03-b894-4abf-96a5-f832e8ee3371 ovn-installed in OVS
Feb 28 05:34:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:16Z|01284|binding|INFO|Setting lport edcb3b03-b894-4abf-96a5-f832e8ee3371 up in Southbound
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.494 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d715f68-344d-4278-8dd2-9dc69c26624f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.495 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d699467-11 in ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.498 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d699467-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.498 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37cbeffd-929e-4e32-8a56-b30f894ced83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.500 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5830b0-49fe-4164-985d-ae31c9a2945e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 systemd-udevd[355950]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.514 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ad27918f-b05f-4120-a4dd-d2cc4a9dab33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.5215] device (tapedcb3b03-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.5229] device (tapedcb3b03-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.533 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51776354-9087-4855-b986-9f90fec795aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e1:c3:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:64:6b:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[06d418b8-7f2a-43dc-95c1-ba3108395e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.574 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14d4fc2d-36e5-4444-be84-dfcc0a82d1d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.5759] manager: (tap0d699467-10): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.581 243456 DEBUG nova.virt.libvirt.guest [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:16</nova:creationTime>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:16 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    <nova:port uuid="edcb3b03-b894-4abf-96a5-f832e8ee3371">
Feb 28 05:34:16 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:16 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:16 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:16 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.608 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.609 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8173ed4d-d566-4b88-9682-91b089579e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.613 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c437fa-2ce5-4aeb-905b-66713caf334e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.6355] device (tap0d699467-10): carrier: link connected
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.640 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f89fc5-5507-443a-bfda-75f71a1fe83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.659 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46d13ebd-71d6-4968-bf06-a985db1d2c33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d699467-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:d8:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636392, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355976, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:34:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.74 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4164 writes, 16K keys, 4164 commit groups, 1.0 writes per commit group, ingest: 17.42 MB, 0.03 MB/s#012Interval WAL: 4164 writes, 1650 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11d26ec8-2085-42b6-a294-4ce723fe618e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:d8a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636392, 'tstamp': 636392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355977, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef6291d-82f7-4555-85d4-5ddea69b4799]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d699467-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:d8:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636392, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355978, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.737 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[247642cd-0379-457c-9af9-338199949489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 991 KiB/s wr, 22 op/s
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.812 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e338fb6-d24f-4eea-aa81-b11fcb904083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.814 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d699467-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.814 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d699467-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:16 np0005634017 kernel: tap0d699467-10: entered promiscuous mode
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 NetworkManager[49805]: <info>  [1772274856.8203] manager: (tap0d699467-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.822 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d699467-10, col_values=(('external_ids', {'iface-id': '70e83520-4cec-4f09-8d59-db572fd53673'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:16Z|01285|binding|INFO|Releasing lport 70e83520-4cec-4f09-8d59-db572fd53673 from this chassis (sb_readonly=0)
Feb 28 05:34:16 np0005634017 nova_compute[243452]: 2026-02-28 10:34:16.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.838 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d699467-190a-4754-be38-8dcbc56ed7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d699467-190a-4754-be38-8dcbc56ed7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f98a1df2-d123-4803-b383-ddabeb4892fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.839 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-0d699467-190a-4754-be38-8dcbc56ed7da
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/0d699467-190a-4754-be38-8dcbc56ed7da.pid.haproxy
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 0d699467-190a-4754-be38-8dcbc56ed7da
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:34:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.840 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'env', 'PROCESS_TAG=haproxy-0d699467-190a-4754-be38-8dcbc56ed7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d699467-190a-4754-be38-8dcbc56ed7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:34:17 np0005634017 nova_compute[243452]: 2026-02-28 10:34:17.119 243456 DEBUG nova.compute.manager [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:17 np0005634017 nova_compute[243452]: 2026-02-28 10:34:17.119 243456 DEBUG oslo_concurrency.lockutils [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:17 np0005634017 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 DEBUG oslo_concurrency.lockutils [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:17 np0005634017 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 DEBUG oslo_concurrency.lockutils [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:17 np0005634017 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 DEBUG nova.compute.manager [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:17 np0005634017 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 WARNING nova.compute.manager [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:17 np0005634017 podman[356010]: 2026-02-28 10:34:17.178203186 +0000 UTC m=+0.049174054 container create b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:34:17 np0005634017 systemd[1]: Started libpod-conmon-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66.scope.
Feb 28 05:34:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:17 np0005634017 podman[356010]: 2026-02-28 10:34:17.1508065 +0000 UTC m=+0.021777368 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:34:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4b2e07cdeaa3d845002b7d5c69c4a6ea946ebe8ae1371af79f5f05018ddb64c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:17 np0005634017 podman[356010]: 2026-02-28 10:34:17.261680141 +0000 UTC m=+0.132651029 container init b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:34:17 np0005634017 podman[356010]: 2026-02-28 10:34:17.267137875 +0000 UTC m=+0.138108743 container start b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:34:17 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : New worker (356031) forked
Feb 28 05:34:17 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : Loading success.
Feb 28 05:34:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:18Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:6b:1e 10.100.0.24
Feb 28 05:34:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:18Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:6b:1e 10.100.0.24
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.355 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-edcb3b03-b894-4abf-96a5-f832e8ee3371" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.356 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-edcb3b03-b894-4abf-96a5-f832e8ee3371" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.372 243456 DEBUG nova.objects.instance [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.394 243456 DEBUG nova.virt.libvirt.vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.394 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.395 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.400 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.404 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.408 243456 DEBUG nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Attempting to detach device tapedcb3b03-b8 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.409 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:64:6b:1e"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <target dev="tapedcb3b03-b8"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.419 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.423 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <name>instance-0000007f</name>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:16</nova:creationTime>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:port uuid="edcb3b03-b894-4abf-96a5-f832e8ee3371">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='tap37a6ff99-c7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:64:6b:1e'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='tapedcb3b03-b8'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='net1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/1'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.425 243456 INFO nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tapedcb3b03-b8 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the persistent domain config.#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.425 243456 DEBUG nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] (1/8): Attempting to detach device tapedcb3b03-b8 with device alias net1 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.426 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:64:6b:1e"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <target dev="tapedcb3b03-b8"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.463 243456 DEBUG nova.network.neutron [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port edcb3b03-b894-4abf-96a5-f832e8ee3371. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.464 243456 DEBUG nova.network.neutron [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.478 243456 DEBUG oslo_concurrency.lockutils [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:18 np0005634017 kernel: tapedcb3b03-b8 (unregistering): left promiscuous mode
Feb 28 05:34:18 np0005634017 NetworkManager[49805]: <info>  [1772274858.5316] device (tapedcb3b03-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:34:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:18Z|01286|binding|INFO|Releasing lport edcb3b03-b894-4abf-96a5-f832e8ee3371 from this chassis (sb_readonly=0)
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:18Z|01287|binding|INFO|Setting lport edcb3b03-b894-4abf-96a5-f832e8ee3371 down in Southbound
Feb 28 05:34:18 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:18Z|01288|binding|INFO|Removing iface tapedcb3b03-b8 ovn-installed in OVS
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.541 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772274858.5413241, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.542 243456 DEBUG nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Start waiting for the detach event from libvirt for device tapedcb3b03-b8 with device alias net1 for instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.543 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.548 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <name>instance-0000007f</name>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:16</nova:creationTime>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:port uuid="edcb3b03-b894-4abf-96a5-f832e8ee3371">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target dev='tap37a6ff99-c7'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/1'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.548 243456 INFO nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tapedcb3b03-b8 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the live domain config.#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.549 243456 DEBUG nova.virt.libvirt.vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.550 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.550 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.551 243456 DEBUG os_vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.550 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:6b:1e 10.100.0.24'], port_security=['fa:16:3e:64:6b:1e 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d699467-190a-4754-be38-8dcbc56ed7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d3e50d-7a54-4c37-996e-1d4928f66955, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=edcb3b03-b894-4abf-96a5-f832e8ee3371) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.553 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.553 156681 INFO neutron.agent.ovn.metadata.agent [-] Port edcb3b03-b894-4abf-96a5-f832e8ee3371 in datapath 0d699467-190a-4754-be38-8dcbc56ed7da unbound from our chassis#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.553 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedcb3b03-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.555 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.555 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d699467-190a-4754-be38-8dcbc56ed7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.557 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf42ee51-d395-4338-b5cc-7661ebe6b8a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.558 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da namespace which is not needed anymore#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.559 243456 INFO os_vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8')#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.559 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:18</nova:creationTime>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:18 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:18 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:18 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:34:18 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : haproxy version is 2.8.14-c23fe91
Feb 28 05:34:18 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : path to executable is /usr/sbin/haproxy
Feb 28 05:34:18 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [WARNING]  (356029) : Exiting Master process...
Feb 28 05:34:18 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [ALERT]    (356029) : Current worker (356031) exited with code 143 (Terminated)
Feb 28 05:34:18 np0005634017 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [WARNING]  (356029) : All workers exited. Exiting... (0)
Feb 28 05:34:18 np0005634017 systemd[1]: libpod-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66.scope: Deactivated successfully.
Feb 28 05:34:18 np0005634017 podman[356061]: 2026-02-28 10:34:18.687487449 +0000 UTC m=+0.055043330 container died b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:34:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66-userdata-shm.mount: Deactivated successfully.
Feb 28 05:34:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b4b2e07cdeaa3d845002b7d5c69c4a6ea946ebe8ae1371af79f5f05018ddb64c-merged.mount: Deactivated successfully.
Feb 28 05:34:18 np0005634017 podman[356061]: 2026-02-28 10:34:18.735607332 +0000 UTC m=+0.103163203 container cleanup b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:34:18 np0005634017 systemd[1]: libpod-conmon-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66.scope: Deactivated successfully.
Feb 28 05:34:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Feb 28 05:34:18 np0005634017 podman[356090]: 2026-02-28 10:34:18.802551829 +0000 UTC m=+0.044220054 container remove b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.807 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8100af35-d367-41c1-9960-7977ef53a55a]: (4, ('Sat Feb 28 10:34:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da (b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66)\nb278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66\nSat Feb 28 10:34:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da (b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66)\nb278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9255c953-0639-4ac1-9a02-ce0543aca0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.810 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d699467-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 kernel: tap0d699467-10: left promiscuous mode
Feb 28 05:34:18 np0005634017 nova_compute[243452]: 2026-02-28 10:34:18.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.823 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[376edc94-9687-4ddb-8e42-6afc2b291ac2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[95cb9833-c684-4af5-ab53-c1f404d53e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af24fd3c-0330-4615-944b-10a0e2394c31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.857 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e49c154-1746-42d6-a0ae-e1adb3be1bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636384, 'reachable_time': 21105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356108, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 systemd[1]: run-netns-ovnmeta\x2d0d699467\x2d190a\x2d4754\x2dbe38\x2d8dcbc56ed7da.mount: Deactivated successfully.
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.861 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:34:18 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.861 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a12669-8d4d-4044-88e3-3e7f39000f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.212 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 WARNING nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-unplugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-unplugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.215 243456 WARNING nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-unplugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.343 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.702 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.702 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:19 np0005634017 nova_compute[243452]: 2026-02-28 10:34:19.703 243456 DEBUG nova.network.neutron [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:34:20 np0005634017 nova_compute[243452]: 2026-02-28 10:34:20.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.359 243456 DEBUG nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.359 243456 DEBUG oslo_concurrency.lockutils [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.360 243456 DEBUG oslo_concurrency.lockutils [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.360 243456 DEBUG oslo_concurrency.lockutils [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.360 243456 DEBUG nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 WARNING nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 DEBUG nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-deleted-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 INFO nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Neutron deleted interface edcb3b03-b894-4abf-96a5-f832e8ee3371; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 DEBUG nova.network.neutron [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.396 243456 DEBUG nova.objects.instance [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.420 243456 DEBUG nova.objects.instance [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.444 243456 DEBUG nova.virt.libvirt.vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.445 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.446 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.450 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.456 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <name>instance-0000007f</name>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:18</nova:creationTime>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target dev='tap37a6ff99-c7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/1'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.458 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.465 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <name>instance-0000007f</name>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:18</nova:creationTime>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target dev='tap37a6ff99-c7'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/1'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <source path='/dev/pts/1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.466 243456 WARNING nova.virt.libvirt.driver [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Detaching interface fa:16:3e:64:6b:1e failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapedcb3b03-b8' not found.#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.468 243456 DEBUG nova.virt.libvirt.vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.468 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.469 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.470 243456 DEBUG os_vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.473 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedcb3b03-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.474 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.476 243456 INFO os_vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8')#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.477 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:34:21</nova:creationTime>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 05:34:21 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:34:21 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:34:21 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:34:21 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:21Z|01289|binding|INFO|Releasing lport dbe8062b-c5f4-44f4-b690-d738c9fe51f1 from this chassis (sb_readonly=0)
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.983 243456 INFO nova.network.neutron [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Port edcb3b03-b894-4abf-96a5-f832e8ee3371 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 28 05:34:21 np0005634017 nova_compute[243452]: 2026-02-28 10:34:21.983 243456 DEBUG nova.network.neutron [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.001 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.026 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-edcb3b03-b894-4abf-96a5-f832e8ee3371" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.374 243456 DEBUG nova.compute.manager [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.374 243456 DEBUG nova.compute.manager [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.375 243456 DEBUG oslo_concurrency.lockutils [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.375 243456 DEBUG oslo_concurrency.lockutils [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.376 243456 DEBUG nova.network.neutron [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.432 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.433 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.434 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.435 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.436 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.438 243456 INFO nova.compute.manager [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Terminating instance#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.441 243456 DEBUG nova.compute.manager [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:34:22 np0005634017 kernel: tap37a6ff99-c7 (unregistering): left promiscuous mode
Feb 28 05:34:22 np0005634017 NetworkManager[49805]: <info>  [1772274862.5061] device (tap37a6ff99-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:22Z|01290|binding|INFO|Releasing lport 37a6ff99-c79f-4d1f-8384-b2117545bacf from this chassis (sb_readonly=0)
Feb 28 05:34:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:22Z|01291|binding|INFO|Setting lport 37a6ff99-c79f-4d1f-8384-b2117545bacf down in Southbound
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:22Z|01292|binding|INFO|Removing iface tap37a6ff99-c7 ovn-installed in OVS
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.528 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:c3:a3 10.100.0.3'], port_security=['fa:16:3e:e1:c3:a3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0844b2ff-c3dd-41f7-ab33-952597a3bda8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9e841af-c685-47c1-acc4-502d4238e857, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=37a6ff99-c79f-4d1f-8384-b2117545bacf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.531 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 37a6ff99-c79f-4d1f-8384-b2117545bacf in datapath 183ae61b-3b9b-4e1b-a73e-6b7a38731453 unbound from our chassis#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.533 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 183ae61b-3b9b-4e1b-a73e-6b7a38731453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[928abaeb-ef00-4739-85d9-961262a808fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.535 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 namespace which is not needed anymore#033[00m
Feb 28 05:34:22 np0005634017 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Feb 28 05:34:22 np0005634017 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Consumed 14.932s CPU time.
Feb 28 05:34:22 np0005634017 systemd-machined[209480]: Machine qemu-160-instance-0000007f terminated.
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.690 243456 INFO nova.virt.libvirt.driver [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance destroyed successfully.#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.691 243456 DEBUG nova.objects.instance [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.715 243456 DEBUG nova.virt.libvirt.vif [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.717 243456 DEBUG nova.network.os_vif_util [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:22 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : haproxy version is 2.8.14-c23fe91
Feb 28 05:34:22 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : path to executable is /usr/sbin/haproxy
Feb 28 05:34:22 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [WARNING]  (355696) : Exiting Master process...
Feb 28 05:34:22 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [WARNING]  (355696) : Exiting Master process...
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.719 243456 DEBUG nova.network.os_vif_util [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.720 243456 DEBUG os_vif [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:34:22 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [ALERT]    (355696) : Current worker (355698) exited with code 143 (Terminated)
Feb 28 05:34:22 np0005634017 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [WARNING]  (355696) : All workers exited. Exiting... (0)
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.723 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37a6ff99-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:22 np0005634017 systemd[1]: libpod-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d.scope: Deactivated successfully.
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 podman[356133]: 2026-02-28 10:34:22.730499216 +0000 UTC m=+0.073156294 container died 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.732 243456 INFO os_vif [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7')#033[00m
Feb 28 05:34:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d-userdata-shm.mount: Deactivated successfully.
Feb 28 05:34:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-519415a6e8605ec671aba53f1da4d6a90fb9d621f73f1043083370c3d20e14cc-merged.mount: Deactivated successfully.
Feb 28 05:34:22 np0005634017 podman[356133]: 2026-02-28 10:34:22.773338589 +0000 UTC m=+0.115995597 container cleanup 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:34:22 np0005634017 systemd[1]: libpod-conmon-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d.scope: Deactivated successfully.
Feb 28 05:34:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Feb 28 05:34:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:34:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3601.6 total, 600.0 interval#012Cumulative writes: 41K writes, 159K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 41K writes, 15K syncs, 2.72 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4949 writes, 18K keys, 4949 commit groups, 1.0 writes per commit group, ingest: 19.96 MB, 0.03 MB/s#012Interval WAL: 4949 writes, 2024 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:34:22 np0005634017 podman[356183]: 2026-02-28 10:34:22.843421805 +0000 UTC m=+0.042903467 container remove 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c94fe7bf-ac81-45b1-b3d5-ba78ae31bbd0]: (4, ('Sat Feb 28 10:34:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 (8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d)\n8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d\nSat Feb 28 10:34:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 (8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d)\n8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.852 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d0ae1d-d968-4c64-abe5-7cd2a0513e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap183ae61b-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 kernel: tap183ae61b-30: left promiscuous mode
Feb 28 05:34:22 np0005634017 nova_compute[243452]: 2026-02-28 10:34:22.862 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.865 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20ad6bb5-3f2f-40c4-8dda-7a908e2b6a6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74cde80e-59ca-4bc8-aadb-dbd14df5bcfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.883 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfaa574-0d36-4a18-b678-3ff229a001bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.900 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[879189f0-f6be-457a-b9d8-5a331cc703f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633584, 'reachable_time': 27621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356202, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:22 np0005634017 systemd[1]: run-netns-ovnmeta\x2d183ae61b\x2d3b9b\x2d4e1b\x2da73e\x2d6b7a38731453.mount: Deactivated successfully.
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.904 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:34:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.904 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f3729f3f-8484-4543-b21d-87cd1d897c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.030 243456 INFO nova.virt.libvirt.driver [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deleting instance files /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_del#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.033 243456 INFO nova.virt.libvirt.driver [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deletion of /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_del complete#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.102 243456 INFO nova.compute.manager [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.103 243456 DEBUG oslo.service.loopingcall [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.103 243456 DEBUG nova.compute.manager [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.103 243456 DEBUG nova.network.neutron [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.461 243456 DEBUG nova.compute.manager [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-unplugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.461 243456 DEBUG oslo_concurrency.lockutils [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.462 243456 DEBUG oslo_concurrency.lockutils [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.463 243456 DEBUG oslo_concurrency.lockutils [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.463 243456 DEBUG nova.compute.manager [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-unplugged-37a6ff99-c79f-4d1f-8384-b2117545bacf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:23 np0005634017 nova_compute[243452]: 2026-02-28 10:34:23.463 243456 DEBUG nova.compute.manager [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-unplugged-37a6ff99-c79f-4d1f-8384-b2117545bacf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:34:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:24.287 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:24.289 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.374 243456 DEBUG nova.network.neutron [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.375 243456 DEBUG nova.network.neutron [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.413 243456 DEBUG nova.network.neutron [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.431 243456 DEBUG oslo_concurrency.lockutils [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.454 243456 INFO nova.compute.manager [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 1.35 seconds to deallocate network for instance.#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.500 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.501 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.567 243456 DEBUG oslo_concurrency.processutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:24 np0005634017 nova_compute[243452]: 2026-02-28 10:34:24.620 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 203 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 7.8 KiB/s wr, 11 op/s
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.007589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865007680, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1241, "num_deletes": 251, "total_data_size": 1868878, "memory_usage": 1896928, "flush_reason": "Manual Compaction"}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865023003, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1839316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43833, "largest_seqno": 45073, "table_properties": {"data_size": 1833430, "index_size": 3217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12599, "raw_average_key_size": 19, "raw_value_size": 1821626, "raw_average_value_size": 2873, "num_data_blocks": 144, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274744, "oldest_key_time": 1772274744, "file_creation_time": 1772274865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15601 microseconds, and 4834 cpu microseconds.
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.023202) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1839316 bytes OK
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.023248) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.024903) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.024920) EVENT_LOG_v1 {"time_micros": 1772274865024915, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.024944) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1863252, prev total WAL file size 1863252, number of live WAL files 2.
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.025703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1796KB)], [101(8295KB)]
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865025767, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10333466, "oldest_snapshot_seqno": -1}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6640 keys, 8687052 bytes, temperature: kUnknown
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865081105, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8687052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8643385, "index_size": 25955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172449, "raw_average_key_size": 25, "raw_value_size": 8525256, "raw_average_value_size": 1283, "num_data_blocks": 1013, "num_entries": 6640, "num_filter_entries": 6640, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.081430) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8687052 bytes
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.082976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.4 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(10.3) write-amplify(4.7) OK, records in: 7154, records dropped: 514 output_compression: NoCompression
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.082999) EVENT_LOG_v1 {"time_micros": 1772274865082987, "job": 60, "event": "compaction_finished", "compaction_time_micros": 55425, "compaction_time_cpu_micros": 30484, "output_level": 6, "num_output_files": 1, "total_output_size": 8687052, "num_input_records": 7154, "num_output_records": 6640, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865083321, "job": 60, "event": "table_file_deletion", "file_number": 103}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865084622, "job": 60, "event": "table_file_deletion", "file_number": 101}
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.025595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:34:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392256275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.271 243456 DEBUG oslo_concurrency.processutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.279 243456 DEBUG nova.compute.provider_tree [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.300 243456 DEBUG nova.scheduler.client.report [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.333 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.365 243456 INFO nova.scheduler.client.report [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.431 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.586 243456 DEBUG nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.587 243456 DEBUG oslo_concurrency.lockutils [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.588 243456 DEBUG oslo_concurrency.lockutils [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.588 243456 DEBUG oslo_concurrency.lockutils [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.589 243456 DEBUG nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.589 243456 WARNING nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf for instance with vm_state deleted and task_state None.
Feb 28 05:34:25 np0005634017 nova_compute[243452]: 2026-02-28 10:34:25.590 243456 DEBUG nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-deleted-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:34:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 178 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 23 op/s
Feb 28 05:34:27 np0005634017 nova_compute[243452]: 2026-02-28 10:34:27.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:34:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.2 KiB/s wr, 28 op/s
Feb 28 05:34:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:34:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 33K writes, 127K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s#012Cumulative WAL: 33K writes, 12K syncs, 2.73 writes per sync, written: 0.12 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4166 writes, 15K keys, 4166 commit groups, 1.0 writes per commit group, ingest: 17.10 MB, 0.03 MB/s#012Interval WAL: 4166 writes, 1712 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:34:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:34:29
Feb 28 05:34:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:34:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:34:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'default.rgw.control']
Feb 28 05:34:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:34:29 np0005634017 nova_compute[243452]: 2026-02-28 10:34:29.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:34:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:29.291 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:34:29 np0005634017 nova_compute[243452]: 2026-02-28 10:34:29.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:34:29 np0005634017 nova_compute[243452]: 2026-02-28 10:34:29.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:34:29 np0005634017 nova_compute[243452]: 2026-02-28 10:34:29.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:34:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:34:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:34:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:34:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:34:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:34:31 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.353937683 +0000 UTC m=+0.070973812 container create 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:34:31 np0005634017 systemd[1]: Started libpod-conmon-75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c.scope.
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.326157646 +0000 UTC m=+0.043193825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:34:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.448731048 +0000 UTC m=+0.165767257 container init 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.458554346 +0000 UTC m=+0.175590515 container start 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.464463734 +0000 UTC m=+0.181499893 container attach 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:34:31 np0005634017 unruffled_hopper[356388]: 167 167
Feb 28 05:34:31 np0005634017 systemd[1]: libpod-75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c.scope: Deactivated successfully.
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.468253911 +0000 UTC m=+0.185290080 container died 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:34:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7e3c0693479d1dd5409223268e044fc11d30379c36f83bd69dc2718733b7ca44-merged.mount: Deactivated successfully.
Feb 28 05:34:31 np0005634017 podman[356372]: 2026-02-28 10:34:31.513347599 +0000 UTC m=+0.230383718 container remove 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:34:31 np0005634017 systemd[1]: libpod-conmon-75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c.scope: Deactivated successfully.
Feb 28 05:34:31 np0005634017 podman[356412]: 2026-02-28 10:34:31.701730985 +0000 UTC m=+0.054025861 container create b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:34:31 np0005634017 systemd[1]: Started libpod-conmon-b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32.scope.
Feb 28 05:34:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:31 np0005634017 podman[356412]: 2026-02-28 10:34:31.683692604 +0000 UTC m=+0.035987510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:31 np0005634017 podman[356412]: 2026-02-28 10:34:31.792161197 +0000 UTC m=+0.144456113 container init b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:34:31 np0005634017 podman[356412]: 2026-02-28 10:34:31.803456107 +0000 UTC m=+0.155751003 container start b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:34:31 np0005634017 podman[356412]: 2026-02-28 10:34:31.807701977 +0000 UTC m=+0.159996863 container attach b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:34:32 np0005634017 pedantic_jang[356428]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:34:32 np0005634017 pedantic_jang[356428]: --> All data devices are unavailable
Feb 28 05:34:32 np0005634017 systemd[1]: libpod-b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32.scope: Deactivated successfully.
Feb 28 05:34:32 np0005634017 podman[356412]: 2026-02-28 10:34:32.284168753 +0000 UTC m=+0.636463649 container died b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:34:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd-merged.mount: Deactivated successfully.
Feb 28 05:34:32 np0005634017 podman[356412]: 2026-02-28 10:34:32.347688242 +0000 UTC m=+0.699983118 container remove b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:34:32 np0005634017 systemd[1]: libpod-conmon-b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32.scope: Deactivated successfully.
Feb 28 05:34:32 np0005634017 nova_compute[243452]: 2026-02-28 10:34:32.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 27 op/s
Feb 28 05:34:32 np0005634017 podman[356521]: 2026-02-28 10:34:32.897019293 +0000 UTC m=+0.038922283 container create ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:34:32 np0005634017 systemd[1]: Started libpod-conmon-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope.
Feb 28 05:34:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:32 np0005634017 podman[356521]: 2026-02-28 10:34:32.97315948 +0000 UTC m=+0.115062480 container init ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:34:32 np0005634017 podman[356521]: 2026-02-28 10:34:32.878050816 +0000 UTC m=+0.019953866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:34:32 np0005634017 podman[356521]: 2026-02-28 10:34:32.978561743 +0000 UTC m=+0.120464753 container start ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:34:32 np0005634017 podman[356521]: 2026-02-28 10:34:32.981893688 +0000 UTC m=+0.123796718 container attach ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:34:32 np0005634017 priceless_nightingale[356537]: 167 167
Feb 28 05:34:32 np0005634017 systemd[1]: libpod-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope: Deactivated successfully.
Feb 28 05:34:32 np0005634017 conmon[356537]: conmon ef6b598d272863769fd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope/container/memory.events
Feb 28 05:34:32 np0005634017 podman[356521]: 2026-02-28 10:34:32.984275785 +0000 UTC m=+0.126178785 container died ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:34:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-806b9e05dca1acdff5ca5558ada994500022ebfe7bf2651fff195cc146fbe00f-merged.mount: Deactivated successfully.
Feb 28 05:34:33 np0005634017 podman[356521]: 2026-02-28 10:34:33.018164615 +0000 UTC m=+0.160067625 container remove ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:34:33 np0005634017 systemd[1]: libpod-conmon-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope: Deactivated successfully.
Feb 28 05:34:33 np0005634017 nova_compute[243452]: 2026-02-28 10:34:33.143 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.17754275 +0000 UTC m=+0.052960421 container create 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 28 05:34:33 np0005634017 systemd[1]: Started libpod-conmon-8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd.scope.
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.152154571 +0000 UTC m=+0.027572322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:34:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.290293664 +0000 UTC m=+0.165711425 container init 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.30251298 +0000 UTC m=+0.177930681 container start 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.306822282 +0000 UTC m=+0.182239983 container attach 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]: {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:    "0": [
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:        {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "devices": [
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "/dev/loop3"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            ],
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_name": "ceph_lv0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_size": "21470642176",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "name": "ceph_lv0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "tags": {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cluster_name": "ceph",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.crush_device_class": "",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.encrypted": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.objectstore": "bluestore",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osd_id": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.type": "block",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.vdo": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.with_tpm": "0"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            },
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "type": "block",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "vg_name": "ceph_vg0"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:        }
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:    ],
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:    "1": [
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:        {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "devices": [
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "/dev/loop4"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            ],
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_name": "ceph_lv1",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_size": "21470642176",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "name": "ceph_lv1",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "tags": {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cluster_name": "ceph",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.crush_device_class": "",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.encrypted": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.objectstore": "bluestore",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osd_id": "1",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.type": "block",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.vdo": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.with_tpm": "0"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            },
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "type": "block",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "vg_name": "ceph_vg1"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:        }
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:    ],
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:    "2": [
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:        {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "devices": [
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "/dev/loop5"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            ],
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_name": "ceph_lv2",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_size": "21470642176",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "name": "ceph_lv2",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "tags": {
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.cluster_name": "ceph",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.crush_device_class": "",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.encrypted": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.objectstore": "bluestore",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osd_id": "2",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.type": "block",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.vdo": "0",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:                "ceph.with_tpm": "0"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            },
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "type": "block",
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:            "vg_name": "ceph_vg2"
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:        }
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]:    ]
Feb 28 05:34:33 np0005634017 peaceful_kare[356577]: }
Feb 28 05:34:33 np0005634017 systemd[1]: libpod-8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd.scope: Deactivated successfully.
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.626757505 +0000 UTC m=+0.502175216 container died 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:34:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525-merged.mount: Deactivated successfully.
Feb 28 05:34:33 np0005634017 podman[356561]: 2026-02-28 10:34:33.682945257 +0000 UTC m=+0.558362928 container remove 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:34:33 np0005634017 systemd[1]: libpod-conmon-8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd.scope: Deactivated successfully.
Feb 28 05:34:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.124866105 +0000 UTC m=+0.045914362 container create 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:34:34 np0005634017 systemd[1]: Started libpod-conmon-373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f.scope.
Feb 28 05:34:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.197364649 +0000 UTC m=+0.118412906 container init 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.106703071 +0000 UTC m=+0.027751368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.20482136 +0000 UTC m=+0.125869607 container start 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.208214436 +0000 UTC m=+0.129262743 container attach 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:34:34 np0005634017 modest_beaver[356678]: 167 167
Feb 28 05:34:34 np0005634017 systemd[1]: libpod-373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f.scope: Deactivated successfully.
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.209796721 +0000 UTC m=+0.130844958 container died 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:34:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f0fecf89f690cec9420bbbe6cd3622820c8754910623dcb3d411729be17a62d5-merged.mount: Deactivated successfully.
Feb 28 05:34:34 np0005634017 podman[356660]: 2026-02-28 10:34:34.259218651 +0000 UTC m=+0.180266938 container remove 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:34:34 np0005634017 systemd[1]: libpod-conmon-373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f.scope: Deactivated successfully.
Feb 28 05:34:34 np0005634017 nova_compute[243452]: 2026-02-28 10:34:34.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:34 np0005634017 nova_compute[243452]: 2026-02-28 10:34:34.348 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:34 np0005634017 podman[356702]: 2026-02-28 10:34:34.445819117 +0000 UTC m=+0.053855147 container create 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:34:34 np0005634017 systemd[1]: Started libpod-conmon-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope.
Feb 28 05:34:34 np0005634017 podman[356702]: 2026-02-28 10:34:34.416117235 +0000 UTC m=+0.024153245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:34:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:34 np0005634017 podman[356702]: 2026-02-28 10:34:34.546024905 +0000 UTC m=+0.154060985 container init 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:34:34 np0005634017 podman[356702]: 2026-02-28 10:34:34.551350386 +0000 UTC m=+0.159386396 container start 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:34:34 np0005634017 podman[356702]: 2026-02-28 10:34:34.554524096 +0000 UTC m=+0.162560196 container attach 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:34:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Feb 28 05:34:35 np0005634017 lvm[356796]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:34:35 np0005634017 lvm[356796]: VG ceph_vg0 finished
Feb 28 05:34:35 np0005634017 lvm[356797]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:34:35 np0005634017 lvm[356797]: VG ceph_vg1 finished
Feb 28 05:34:35 np0005634017 lvm[356799]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:34:35 np0005634017 lvm[356799]: VG ceph_vg2 finished
Feb 28 05:34:35 np0005634017 festive_panini[356718]: {}
Feb 28 05:34:35 np0005634017 systemd[1]: libpod-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope: Deactivated successfully.
Feb 28 05:34:35 np0005634017 systemd[1]: libpod-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope: Consumed 1.128s CPU time.
Feb 28 05:34:35 np0005634017 podman[356702]: 2026-02-28 10:34:35.330039184 +0000 UTC m=+0.938075184 container died 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:34:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2-merged.mount: Deactivated successfully.
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.367 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.369 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:35 np0005634017 podman[356702]: 2026-02-28 10:34:35.37578923 +0000 UTC m=+0.983825230 container remove 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 05:34:35 np0005634017 systemd[1]: libpod-conmon-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope: Deactivated successfully.
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.398 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:34:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:34:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:34:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:34:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.481 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.482 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.491 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.491 243456 INFO nova.compute.claims [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:34:35 np0005634017 nova_compute[243452]: 2026-02-28 10:34:35.608 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:34:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285151819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.146 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.153 243456 DEBUG nova.compute.provider_tree [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.168 243456 DEBUG nova.scheduler.client.report [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.203 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.204 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.254 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.255 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.285 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.307 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.419 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.420 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.421 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Creating image(s)#033[00m
Feb 28 05:34:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:34:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.451 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.481 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.510 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.515 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.547 243456 DEBUG nova.policy [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.586 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.587 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.588 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.589 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.620 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.624 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 29ebb761-c674-4ed1-aae0-554adf945402_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.893 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 29ebb761-c674-4ed1-aae0-554adf945402_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:36 np0005634017 nova_compute[243452]: 2026-02-28 10:34:36.949 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.020 243456 DEBUG nova.objects.instance [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.046 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.046 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Ensure instance console log exists: /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.047 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.047 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.047 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:37 np0005634017 podman[357028]: 2026-02-28 10:34:37.113650318 +0000 UTC m=+0.051109328 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:34:37 np0005634017 podman[357027]: 2026-02-28 10:34:37.143677129 +0000 UTC m=+0.080951144 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.488 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully created port: 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.688 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274862.6872473, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.689 243456 INFO nova.compute.manager [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.709 243456 DEBUG nova.compute.manager [None req-3cfaae37-2fbb-4c5f-8de7-2f949caf50ae - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:37 np0005634017 nova_compute[243452]: 2026-02-28 10:34:37.928 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully created port: 8f25c48f-b281-4784-a6b0-a2662d928d28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 157 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 256 KiB/s wr, 16 op/s
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.843 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully updated port: 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:34:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.969 243456 DEBUG nova.compute.manager [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.970 243456 DEBUG nova.compute.manager [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.970 243456 DEBUG oslo_concurrency.lockutils [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.970 243456 DEBUG oslo_concurrency.lockutils [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:38 np0005634017 nova_compute[243452]: 2026-02-28 10:34:38.971 243456 DEBUG nova.network.neutron [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.245 243456 DEBUG nova.network.neutron [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.899 243456 DEBUG nova.network.neutron [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.916 243456 DEBUG oslo_concurrency.lockutils [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:39 np0005634017 nova_compute[243452]: 2026-02-28 10:34:39.987 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully updated port: 8f25c48f-b281-4784-a6b0-a2662d928d28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:34:40 np0005634017 nova_compute[243452]: 2026-02-28 10:34:40.007 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:40 np0005634017 nova_compute[243452]: 2026-02-28 10:34:40.007 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:40 np0005634017 nova_compute[243452]: 2026-02-28 10:34:40.008 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:34:40 np0005634017 nova_compute[243452]: 2026-02-28 10:34:40.178 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:34:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 200 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:34:41 np0005634017 nova_compute[243452]: 2026-02-28 10:34:41.055 243456 DEBUG nova.compute.manager [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:41 np0005634017 nova_compute[243452]: 2026-02-28 10:34:41.056 243456 DEBUG nova.compute.manager [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8f25c48f-b281-4784-a6b0-a2662d928d28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:34:41 np0005634017 nova_compute[243452]: 2026-02-28 10:34:41.056 243456 DEBUG oslo_concurrency.lockutils [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003597216889628009 of space, bias 1.0, pg target 0.10791650668884027 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938484013542466 of space, bias 1.0, pg target 0.748154520406274 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.349881078222421e-07 of space, bias 4.0, pg target 0.0008819857293866905 quantized to 16 (current 16)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:34:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:34:41 np0005634017 nova_compute[243452]: 2026-02-28 10:34:41.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.277 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.316 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.317 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance network_info: |[{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.318 243456 DEBUG oslo_concurrency.lockutils [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.318 243456 DEBUG nova.network.neutron [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8f25c48f-b281-4784-a6b0-a2662d928d28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.323 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start _get_guest_xml network_info=[{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.328 243456 WARNING nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.334 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.335 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.338 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.338 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.339 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.339 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.340 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.340 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.342 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.342 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.342 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.343 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.347 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:34:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:34:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186612537' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.961 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:42 np0005634017 nova_compute[243452]: 2026-02-28 10:34:42.994 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.001 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:34:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627211567' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.563 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.566 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.567 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.568 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.570 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.570 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.571 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.573 243456 DEBUG nova.objects.instance [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.593 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <uuid>29ebb761-c674-4ed1-aae0-554adf945402</uuid>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <name>instance-00000080</name>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1840268040</nova:name>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:34:42</nova:creationTime>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:port uuid="8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <nova:port uuid="8f25c48f-b281-4784-a6b0-a2662d928d28">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9f:88f4" ipVersion="6"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <entry name="serial">29ebb761-c674-4ed1-aae0-554adf945402</entry>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <entry name="uuid">29ebb761-c674-4ed1-aae0-554adf945402</entry>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/29ebb761-c674-4ed1-aae0-554adf945402_disk">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/29ebb761-c674-4ed1-aae0-554adf945402_disk.config">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:12:72:96"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <target dev="tap8b2cb81f-77"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:9f:88:f4"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <target dev="tap8f25c48f-b2"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/console.log" append="off"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:34:43 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:34:43 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:34:43 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:34:43 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.595 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Preparing to wait for external event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.595 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.596 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.596 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.597 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Preparing to wait for external event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.597 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.598 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.598 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.599 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.600 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.601 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.602 243456 DEBUG os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.609 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b2cb81f-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.610 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b2cb81f-77, col_values=(('external_ids', {'iface-id': '8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:72:96', 'vm-uuid': '29ebb761-c674-4ed1-aae0-554adf945402'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:43 np0005634017 NetworkManager[49805]: <info>  [1772274883.6137] manager: (tap8b2cb81f-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.624 243456 INFO os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77')#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.625 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.625 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.626 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.626 243456 DEBUG os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.627 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.627 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.630 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f25c48f-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.630 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f25c48f-b2, col_values=(('external_ids', {'iface-id': '8f25c48f-b281-4784-a6b0-a2662d928d28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:88:f4', 'vm-uuid': '29ebb761-c674-4ed1-aae0-554adf945402'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 NetworkManager[49805]: <info>  [1772274883.6337] manager: (tap8f25c48f-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.640 243456 INFO os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2')#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.710 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:12:72:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:9f:88:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.712 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Using config drive#033[00m
Feb 28 05:34:43 np0005634017 nova_compute[243452]: 2026-02-28 10:34:43.736 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.100 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Creating config drive at /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.104 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpebrxls1q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.145 243456 DEBUG nova.network.neutron [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated VIF entry in instance network info cache for port 8f25c48f-b281-4784-a6b0-a2662d928d28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.146 243456 DEBUG nova.network.neutron [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.167 243456 DEBUG oslo_concurrency.lockutils [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.249 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpebrxls1q" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.278 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.282 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config 29ebb761-c674-4ed1-aae0-554adf945402_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.461 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config 29ebb761-c674-4ed1-aae0-554adf945402_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.461 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deleting local config drive /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config because it was imported into RBD.#033[00m
Feb 28 05:34:44 np0005634017 kernel: tap8b2cb81f-77: entered promiscuous mode
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.5209] manager: (tap8b2cb81f-77): new Tun device (/org/freedesktop/NetworkManager/Devices/535)
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01293|binding|INFO|Claiming lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for this chassis.
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01294|binding|INFO|8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b: Claiming fa:16:3e:12:72:96 10.100.0.3
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.5336] manager: (tap8f25c48f-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/536)
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 kernel: tap8f25c48f-b2: entered promiscuous mode
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01295|if_status|INFO|Not updating pb chassis for 8f25c48f-b281-4784-a6b0-a2662d928d28 now as sb is readonly
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01296|binding|INFO|Claiming lport 8f25c48f-b281-4784-a6b0-a2662d928d28 for this chassis.
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01297|binding|INFO|8f25c48f-b281-4784-a6b0-a2662d928d28: Claiming fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.551 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:72:96 10.100.0.3'], port_security=['fa:16:3e:12:72:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.553 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b bound to our chassis#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.555 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.562 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], port_security=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9f:88f4/64', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f25c48f-b281-4784-a6b0-a2662d928d28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:44 np0005634017 systemd-udevd[357215]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:34:44 np0005634017 systemd-udevd[357214]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.568 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec1964a-7483-4c01-8ef8-55b9b17e2bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.569 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d7aad4f-11 in ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:34:44 np0005634017 systemd-machined[209480]: New machine qemu-161-instance-00000080.
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.571 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d7aad4f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.571 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e178e995-99cc-4773-a496-4afdb13e7ff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.573 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0850e2-e806-4ee9-b22a-9c556f214c25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01298|binding|INFO|Setting lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b ovn-installed in OVS
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01299|binding|INFO|Setting lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b up in Southbound
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01300|binding|INFO|Setting lport 8f25c48f-b281-4784-a6b0-a2662d928d28 ovn-installed in OVS
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01301|binding|INFO|Setting lport 8f25c48f-b281-4784-a6b0-a2662d928d28 up in Southbound
Feb 28 05:34:44 np0005634017 systemd[1]: Started Virtual Machine qemu-161-instance-00000080.
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.584 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[15de9745-728e-4131-9db6-b22c8bd1401a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.5889] device (tap8f25c48f-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.5897] device (tap8f25c48f-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.5924] device (tap8b2cb81f-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.5947] device (tap8b2cb81f-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c91d5d-bd7b-40f0-b27c-ba52119d369c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.627 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cab5b0e4-c18c-4a26-a26f-de57ae46cf02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.6344] manager: (tap6d7aad4f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/537)
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.633 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23426419-043d-4e39-808b-ea266af7cf4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.664 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60956e49-9cb1-47a4-b98d-09ec0d994188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6184eb-a36c-406d-a5c6-acd082e167a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.6898] device (tap6d7aad4f-10): carrier: link connected
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.694 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2cff55-378d-41df-83fa-f3fca8a1b596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[13f19c25-243d-43fd-a602-53b93ff4b6e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357247, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9a2912-4689-42c1-8c8f-9501def70088]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:af4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639197, 'tstamp': 639197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357248, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.741 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b109234c-ce63-48b4-9577-e355723a7e69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357249, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.767 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a77dc7ca-99f3-4139-b12b-c052cebad6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7131a0-e327-4a28-8e56-e45ce0e47583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7aad4f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 NetworkManager[49805]: <info>  [1772274884.8214] manager: (tap6d7aad4f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Feb 28 05:34:44 np0005634017 kernel: tap6d7aad4f-10: entered promiscuous mode
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.824 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d7aad4f-10, col_values=(('external_ids', {'iface-id': '99dd359f-3ab9-477c-a58c-1c56298be9c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:44Z|01302|binding|INFO|Releasing lport 99dd359f-3ab9-477c-a58c-1c56298be9c7 from this chassis (sb_readonly=0)
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.826 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d7aad4f-1a53-4b74-a216-4cac4be4283b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d7aad4f-1a53-4b74-a216-4cac4be4283b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62d28e61-4b4c-4960-8b18-9c6a49f458a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.828 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/6d7aad4f-1a53-4b74-a216-4cac4be4283b.pid.haproxy
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:34:44 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.829 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'env', 'PROCESS_TAG=haproxy-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d7aad4f-1a53-4b74-a216-4cac4be4283b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:34:44 np0005634017 nova_compute[243452]: 2026-02-28 10:34:44.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.071 243456 DEBUG nova.compute.manager [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.071 243456 DEBUG oslo_concurrency.lockutils [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.072 243456 DEBUG oslo_concurrency.lockutils [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.072 243456 DEBUG oslo_concurrency.lockutils [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.072 243456 DEBUG nova.compute.manager [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Processing event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.199 243456 DEBUG nova.compute.manager [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.200 243456 DEBUG oslo_concurrency.lockutils [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.200 243456 DEBUG oslo_concurrency.lockutils [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.201 243456 DEBUG oslo_concurrency.lockutils [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.201 243456 DEBUG nova.compute.manager [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Processing event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:34:45 np0005634017 podman[357317]: 2026-02-28 10:34:45.213002758 +0000 UTC m=+0.068885782 container create 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 05:34:45 np0005634017 systemd[1]: Started libpod-conmon-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb.scope.
Feb 28 05:34:45 np0005634017 podman[357317]: 2026-02-28 10:34:45.167679285 +0000 UTC m=+0.023562349 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:34:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.269 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274885.2683806, 29ebb761-c674-4ed1-aae0-554adf945402 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.270 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Started (Lifecycle Event)#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.274 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:34:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56367b1f032c1af58ec46ce8c0be39bfb51d1c49bf83d9e0c00051135fcf4571/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.287 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:45 np0005634017 podman[357317]: 2026-02-28 10:34:45.29319829 +0000 UTC m=+0.149081324 container init 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.295 243456 INFO nova.virt.libvirt.driver [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance spawned successfully.#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.296 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.299 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:34:45 np0005634017 podman[357317]: 2026-02-28 10:34:45.301774863 +0000 UTC m=+0.157657847 container start 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.320 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.321 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.326 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.327 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274885.268748, 29ebb761-c674-4ed1-aae0-554adf945402 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.327 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.336 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.337 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.338 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.338 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:45 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : New worker (357345) forked
Feb 28 05:34:45 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : Loading success.
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.339 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.340 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.348 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.348 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.353 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274885.285433, 29ebb761-c674-4ed1-aae0-554adf945402 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.354 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.375 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.398 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f25c48f-b281-4784-a6b0-a2662d928d28 in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.399 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.402 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.411 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5c4824-763b-45a1-b9ec-fb17845bfd00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.412 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49ec66b0-81 in ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.414 243456 INFO nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 8.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.414 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49ec66b0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.415 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.415 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4270933e-7004-4db5-8850-0c7540e0d4c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[484be7ba-b2cd-402d-ba31-88fd3c51c3d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.431 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ec21a504-b3b9-4db8-b922-82323f1c8969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.444 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e892f67e-6678-4ee3-bc9e-d0e4b9486e60]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.479 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eca19d-a2ca-432a-95b0-46428add556d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bab0812b-533b-4d20-a0ed-c1d4996aec40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.487 243456 INFO nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 10.04 seconds to build instance.#033[00m
Feb 28 05:34:45 np0005634017 systemd-udevd[357238]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:34:45 np0005634017 NetworkManager[49805]: <info>  [1772274885.4901] manager: (tap49ec66b0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/539)
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.520 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.541 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[269b3f34-6041-49ea-a295-b9ae01662858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.546 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1671cbb4-32df-4f50-9c59-24af5444899d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:34:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096092848' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:34:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:34:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096092848' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:34:45 np0005634017 NetworkManager[49805]: <info>  [1772274885.5734] device (tap49ec66b0-80): carrier: link connected
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.578 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb39be6-b7af-425c-8951-c3797affa60b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69cc6280-617e-43ab-b04c-c60403273631]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357364, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.614 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcd01ca-620c-4b26-aaeb-dcfb03e3379b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:61be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639285, 'tstamp': 639285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357365, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf18d8a2-36ba-45c0-91f0-2f42b3a871ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357366, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.680 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b75b5dbd-ac73-4180-865e-15156e293801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a52cc83-a23a-4d73-bd0f-f5b49e3b9481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.719 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.719 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.720 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49ec66b0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:45 np0005634017 NetworkManager[49805]: <info>  [1772274885.7249] manager: (tap49ec66b0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Feb 28 05:34:45 np0005634017 kernel: tap49ec66b0-80: entered promiscuous mode
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49ec66b0-80, col_values=(('external_ids', {'iface-id': '0d93ffc1-1158-4b54-b2c1-6b7d48d62d16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:45Z|01303|binding|INFO|Releasing lport 0d93ffc1-1158-4b54-b2c1-6b7d48d62d16 from this chassis (sb_readonly=0)
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.738 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:34:45 np0005634017 nova_compute[243452]: 2026-02-28 10:34:45.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b00722e5-16d8-4d52-af71-dbad671f6fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.742 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.pid.haproxy
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:34:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.746 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'env', 'PROCESS_TAG=haproxy-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.054 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.055 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.091 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:34:46 np0005634017 podman[357396]: 2026-02-28 10:34:46.116316697 +0000 UTC m=+0.049362739 container create 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:34:46 np0005634017 systemd[1]: Started libpod-conmon-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817.scope.
Feb 28 05:34:46 np0005634017 podman[357396]: 2026-02-28 10:34:46.090831275 +0000 UTC m=+0.023877297 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:34:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17053d1edc9fada6f68225e884d047e5575813d4682169f8b77e31a5f244466/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.210 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.210 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.220 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.220 243456 INFO nova.compute.claims [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:34:46 np0005634017 podman[357396]: 2026-02-28 10:34:46.234386922 +0000 UTC m=+0.167433034 container init 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:34:46 np0005634017 podman[357396]: 2026-02-28 10:34:46.242712877 +0000 UTC m=+0.175758919 container start 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 05:34:46 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : New worker (357418) forked
Feb 28 05:34:46 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : Loading success.
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.294 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.310 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.311 243456 DEBUG nova.compute.provider_tree [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.331 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.360 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:34:46 np0005634017 nova_compute[243452]: 2026-02-28 10:34:46.429 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 05:34:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:34:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956785917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.006 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.015 243456 DEBUG nova.compute.provider_tree [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.034 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.060 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.061 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.063 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.063 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.064 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.064 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.153 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.153 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.157 243456 DEBUG nova.compute.manager [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.157 243456 DEBUG oslo_concurrency.lockutils [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.157 243456 DEBUG oslo_concurrency.lockutils [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.158 243456 DEBUG oslo_concurrency.lockutils [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.158 243456 DEBUG nova.compute.manager [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.158 243456 WARNING nova.compute.manager [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.170 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.188 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.269 243456 DEBUG nova.compute.manager [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.269 243456 DEBUG oslo_concurrency.lockutils [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.270 243456 DEBUG oslo_concurrency.lockutils [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.270 243456 DEBUG oslo_concurrency.lockutils [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.270 243456 DEBUG nova.compute.manager [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.271 243456 WARNING nova.compute.manager [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.276 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.277 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.278 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Creating image(s)#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.302 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.328 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.358 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.363 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.439 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.441 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.442 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.442 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.478 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.483 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 367042aa-0043-4283-a399-ea4a6a1545f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.708 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 367042aa-0043-4283-a399-ea4a6a1545f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:34:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710081428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.793 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.828 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.764s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.886 243456 DEBUG nova.objects.instance [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 367042aa-0043-4283-a399-ea4a6a1545f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.907 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.908 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Ensure instance console log exists: /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.908 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.908 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.909 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.918 243456 DEBUG nova.policy [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.926 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:34:47 np0005634017 nova_compute[243452]: 2026-02-28 10:34:47.926 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.078 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.079 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3474MB free_disk=59.966557927429676GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.079 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.080 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.141 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 29ebb761-c674-4ed1-aae0-554adf945402 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 367042aa-0043-4283-a399-ea4a6a1545f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.190 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:34:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841280857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.785 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.793 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.811 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:34:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.839 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:34:48 np0005634017 nova_compute[243452]: 2026-02-28 10:34:48.840 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:49 np0005634017 nova_compute[243452]: 2026-02-28 10:34:49.134 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Successfully created port: 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:34:49 np0005634017 nova_compute[243452]: 2026-02-28 10:34:49.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:49 np0005634017 nova_compute[243452]: 2026-02-28 10:34:49.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:49 np0005634017 NetworkManager[49805]: <info>  [1772274889.6076] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Feb 28 05:34:49 np0005634017 NetworkManager[49805]: <info>  [1772274889.6085] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Feb 28 05:34:49 np0005634017 nova_compute[243452]: 2026-02-28 10:34:49.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:49Z|01304|binding|INFO|Releasing lport 0d93ffc1-1158-4b54-b2c1-6b7d48d62d16 from this chassis (sb_readonly=0)
Feb 28 05:34:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:49Z|01305|binding|INFO|Releasing lport 99dd359f-3ab9-477c-a58c-1c56298be9c7 from this chassis (sb_readonly=0)
Feb 28 05:34:49 np0005634017 nova_compute[243452]: 2026-02-28 10:34:49.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.156 243456 DEBUG nova.compute.manager [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.157 243456 DEBUG nova.compute.manager [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.157 243456 DEBUG oslo_concurrency.lockutils [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.158 243456 DEBUG oslo_concurrency.lockutils [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.158 243456 DEBUG nova.network.neutron [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.562 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Successfully updated port: 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.577 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.578 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.578 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:34:50 np0005634017 nova_compute[243452]: 2026-02-28 10:34:50.778 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:34:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 224 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 103 op/s
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.241 243456 DEBUG nova.network.neutron [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated VIF entry in instance network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.242 243456 DEBUG nova.network.neutron [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.311 243456 DEBUG nova.compute.manager [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.312 243456 DEBUG nova.compute.manager [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing instance network info cache due to event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.312 243456 DEBUG oslo_concurrency.lockutils [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.403 243456 DEBUG oslo_concurrency.lockutils [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.867 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.890 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.890 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance network_info: |[{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.892 243456 DEBUG oslo_concurrency.lockutils [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.892 243456 DEBUG nova.network.neutron [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.895 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start _get_guest_xml network_info=[{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.901 243456 WARNING nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.907 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.907 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.914 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.916 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.916 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.917 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.918 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.918 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.918 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.919 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.919 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.919 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.920 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.920 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.920 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.921 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:34:52 np0005634017 nova_compute[243452]: 2026-02-28 10:34:52.925 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:34:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934714578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:34:53 np0005634017 nova_compute[243452]: 2026-02-28 10:34:53.542 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:53 np0005634017 nova_compute[243452]: 2026-02-28 10:34:53.571 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:53 np0005634017 nova_compute[243452]: 2026-02-28 10:34:53.577 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:53 np0005634017 nova_compute[243452]: 2026-02-28 10:34:53.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:34:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636135008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.087 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.091 243456 DEBUG nova.virt.libvirt.vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1965662134',display_name='tempest-TestNetworkBasicOps-server-1965662134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1965662134',id=129,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAQEbRWdJEdDDH9ryFVMZgxJMx06zvko9WQ/yItISiQGZQFgF2ldPicyjXhFLx3IsCNHqxs8LEYCBDvAtjLxsqEXUJAPPXqcb32CUcOFzuHymtVJP4PyLuUKki41H129Mg==',key_name='tempest-TestNetworkBasicOps-1815275820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-9q1u5ht7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:47Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=367042aa-0043-4283-a399-ea4a6a1545f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.092 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.094 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.097 243456 DEBUG nova.objects.instance [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 367042aa-0043-4283-a399-ea4a6a1545f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.113 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <uuid>367042aa-0043-4283-a399-ea4a6a1545f7</uuid>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <name>instance-00000081</name>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1965662134</nova:name>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:34:52</nova:creationTime>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <nova:port uuid="92f5c154-2fa7-43e9-a6fd-da26d3ad985b">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <entry name="serial">367042aa-0043-4283-a399-ea4a6a1545f7</entry>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <entry name="uuid">367042aa-0043-4283-a399-ea4a6a1545f7</entry>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/367042aa-0043-4283-a399-ea4a6a1545f7_disk">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/367042aa-0043-4283-a399-ea4a6a1545f7_disk.config">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:56:eb:46"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <target dev="tap92f5c154-2f"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/console.log" append="off"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:34:54 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:34:54 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:34:54 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:34:54 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.126 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Preparing to wait for external event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.127 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.128 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.128 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.134 243456 DEBUG nova.virt.libvirt.vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1965662134',display_name='tempest-TestNetworkBasicOps-server-1965662134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1965662134',id=129,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAQEbRWdJEdDDH9ryFVMZgxJMx06zvko9WQ/yItISiQGZQFgF2ldPicyjXhFLx3IsCNHqxs8LEYCBDvAtjLxsqEXUJAPPXqcb32CUcOFzuHymtVJP4PyLuUKki41H129Mg==',key_name='tempest-TestNetworkBasicOps-1815275820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-9q1u5ht7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:47Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=367042aa-0043-4283-a399-ea4a6a1545f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.135 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.136 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.137 243456 DEBUG os_vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.139 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.144 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92f5c154-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.144 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92f5c154-2f, col_values=(('external_ids', {'iface-id': '92f5c154-2fa7-43e9-a6fd-da26d3ad985b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:eb:46', 'vm-uuid': '367042aa-0043-4283-a399-ea4a6a1545f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:54 np0005634017 NetworkManager[49805]: <info>  [1772274894.1481] manager: (tap92f5c154-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.160 243456 INFO os_vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f')#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.214 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.214 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.215 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:56:eb:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.216 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Using config drive#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.248 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.464 243456 DEBUG nova.network.neutron [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updated VIF entry in instance network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.465 243456 DEBUG nova.network.neutron [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.480 243456 DEBUG oslo_concurrency.lockutils [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:34:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.867 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Creating config drive at /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config#033[00m
Feb 28 05:34:54 np0005634017 nova_compute[243452]: 2026-02-28 10:34:54.874 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8ehqoe4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.024 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8ehqoe4" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.057 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.061 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.212 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.214 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deleting local config drive /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config because it was imported into RBD.#033[00m
Feb 28 05:34:55 np0005634017 NetworkManager[49805]: <info>  [1772274895.2917] manager: (tap92f5c154-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Feb 28 05:34:55 np0005634017 kernel: tap92f5c154-2f: entered promiscuous mode
Feb 28 05:34:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:55Z|01306|binding|INFO|Claiming lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b for this chassis.
Feb 28 05:34:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:55Z|01307|binding|INFO|92f5c154-2fa7-43e9-a6fd-da26d3ad985b: Claiming fa:16:3e:56:eb:46 10.100.0.11
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.309 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:eb:46 10.100.0.11'], port_security=['fa:16:3e:56:eb:46 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '367042aa-0043-4283-a399-ea4a6a1545f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18ab5e58-5378-41c9-af44-86d27866eb7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92f5c154-2fa7-43e9-a6fd-da26d3ad985b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.312 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 bound to our chassis#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.315 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c48ff26a-49d0-4144-b27f-14431e751ba2#033[00m
Feb 28 05:34:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:55Z|01308|binding|INFO|Setting lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b ovn-installed in OVS
Feb 28 05:34:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:55Z|01309|binding|INFO|Setting lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b up in Southbound
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.324 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:55 np0005634017 systemd-udevd[357798]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ace1e54e-61a6-4a9d-a5ea-15770741e2eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.333 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc48ff26a-41 in ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.338 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc48ff26a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.338 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbf029b-a787-4fdf-a86f-93f83d22c50a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa77982e-9d0c-4df7-b14c-8eb98816765e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 systemd-machined[209480]: New machine qemu-162-instance-00000081.
Feb 28 05:34:55 np0005634017 NetworkManager[49805]: <info>  [1772274895.3447] device (tap92f5c154-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:34:55 np0005634017 NetworkManager[49805]: <info>  [1772274895.3458] device (tap92f5c154-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:34:55 np0005634017 systemd[1]: Started Virtual Machine qemu-162-instance-00000081.
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.357 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f67701cc-b8b5-4a06-893f-2531b84fb599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.370 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe309e7-fadd-484a-8002-ba2f820d2bde]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.399 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4811d5ad-ca40-4d53-b40a-160a1919eebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d07e639-091a-4039-8dfb-e6233861cc9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 NetworkManager[49805]: <info>  [1772274895.4056] manager: (tapc48ff26a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/545)
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.442 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[810a140e-6857-4701-a0d5-c49ba20a9b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.445 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a2e438-21de-48a6-9962-d4b3c80ace14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 NetworkManager[49805]: <info>  [1772274895.4657] device (tapc48ff26a-40): carrier: link connected
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.469 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[515efe5a-513a-40e9-ba73-c65f0bccf114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.488 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3aef9563-66af-41ff-928b-6df6968401ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357831, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.500 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5811a0e0-6565-4925-9851-819e24f8065a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:5ffd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640275, 'tstamp': 640275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357832, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[17bb3afd-de26-40ac-b702-78c2b08e3681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357833, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b81654d-7d2d-431e-a9a5-bce7994c898c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG nova.compute.manager [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG oslo_concurrency.lockutils [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG oslo_concurrency.lockutils [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG oslo_concurrency.lockutils [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.576 243456 DEBUG nova.compute.manager [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Processing event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.585 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d536e7-9a7f-4f92-b706-d947a99202c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.587 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.587 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.588 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc48ff26a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:55 np0005634017 NetworkManager[49805]: <info>  [1772274895.5905] manager: (tapc48ff26a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Feb 28 05:34:55 np0005634017 kernel: tapc48ff26a-40: entered promiscuous mode
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.593 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc48ff26a-40, col_values=(('external_ids', {'iface-id': 'cb4199b0-c270-4ea9-a607-db9799f6f157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:34:55 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:55Z|01310|binding|INFO|Releasing lport cb4199b0-c270-4ea9-a607-db9799f6f157 from this chassis (sb_readonly=0)
Feb 28 05:34:55 np0005634017 nova_compute[243452]: 2026-02-28 10:34:55.602 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.604 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c48ff26a-49d0-4144-b27f-14431e751ba2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c48ff26a-49d0-4144-b27f-14431e751ba2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.605 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[165f15a1-e9e1-4378-aff6-e02756d65c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.606 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/c48ff26a-49d0-4144-b27f-14431e751ba2.pid.haproxy
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:34:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.608 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'env', 'PROCESS_TAG=haproxy-c48ff26a-49d0-4144-b27f-14431e751ba2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c48ff26a-49d0-4144-b27f-14431e751ba2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:34:56 np0005634017 podman[357863]: 2026-02-28 10:34:56.03026725 +0000 UTC m=+0.072521836 container create c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:34:56 np0005634017 systemd[1]: Started libpod-conmon-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3.scope.
Feb 28 05:34:56 np0005634017 podman[357863]: 2026-02-28 10:34:55.992570652 +0000 UTC m=+0.034825258 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:34:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:34:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cbf1e9a1127d383c385bf63b55eb566697bacb083039d81a658a47a2f6ccb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:34:56 np0005634017 podman[357863]: 2026-02-28 10:34:56.109367681 +0000 UTC m=+0.151622317 container init c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 05:34:56 np0005634017 podman[357863]: 2026-02-28 10:34:56.116015889 +0000 UTC m=+0.158270475 container start c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:34:56 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : New worker (357925) forked
Feb 28 05:34:56 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : Loading success.
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.183 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274896.1834483, 367042aa-0043-4283-a399-ea4a6a1545f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.185 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Started (Lifecycle Event)#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.190 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.195 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.199 243456 INFO nova.virt.libvirt.driver [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance spawned successfully.#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.199 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.223 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.229 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.234 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.235 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.236 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.236 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.236 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.237 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.248 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.249 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274896.183598, 367042aa-0043-4283-a399-ea4a6a1545f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.249 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.271 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.275 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274896.1924148, 367042aa-0043-4283-a399-ea4a6a1545f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.275 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.292 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.295 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.299 243456 INFO nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 9.02 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.300 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.311 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.359 243456 INFO nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 10.19 seconds to build instance.#033[00m
Feb 28 05:34:56 np0005634017 nova_compute[243452]: 2026-02-28 10:34:56.374 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Feb 28 05:34:57 np0005634017 nova_compute[243452]: 2026-02-28 10:34:57.670 243456 DEBUG nova.compute.manager [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:34:57 np0005634017 nova_compute[243452]: 2026-02-28 10:34:57.671 243456 DEBUG oslo_concurrency.lockutils [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:57 np0005634017 nova_compute[243452]: 2026-02-28 10:34:57.672 243456 DEBUG oslo_concurrency.lockutils [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:57 np0005634017 nova_compute[243452]: 2026-02-28 10:34:57.673 243456 DEBUG oslo_concurrency.lockutils [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:57 np0005634017 nova_compute[243452]: 2026-02-28 10:34:57.673 243456 DEBUG nova.compute.manager [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] No waiting events found dispatching network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:34:57 np0005634017 nova_compute[243452]: 2026-02-28 10:34:57.674 243456 WARNING nova.compute.manager [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received unexpected event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b for instance with vm_state active and task_state None.#033[00m
Feb 28 05:34:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:57.874 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:34:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:57.875 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:34:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:34:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:34:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:58Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:72:96 10.100.0.3
Feb 28 05:34:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:34:58Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:72:96 10.100.0.3
Feb 28 05:34:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 251 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 123 op/s
Feb 28 05:34:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:34:59 np0005634017 nova_compute[243452]: 2026-02-28 10:34:59.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:34:59 np0005634017 nova_compute[243452]: 2026-02-28 10:34:59.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:00 np0005634017 nova_compute[243452]: 2026-02-28 10:35:00.114 243456 DEBUG nova.compute.manager [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:00 np0005634017 nova_compute[243452]: 2026-02-28 10:35:00.115 243456 DEBUG nova.compute.manager [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing instance network info cache due to event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:00 np0005634017 nova_compute[243452]: 2026-02-28 10:35:00.116 243456 DEBUG oslo_concurrency.lockutils [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:00 np0005634017 nova_compute[243452]: 2026-02-28 10:35:00.116 243456 DEBUG oslo_concurrency.lockutils [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:00 np0005634017 nova_compute[243452]: 2026-02-28 10:35:00.116 243456 DEBUG nova.network.neutron [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:35:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 277 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 223 op/s
Feb 28 05:35:01 np0005634017 nova_compute[243452]: 2026-02-28 10:35:01.375 243456 DEBUG nova.network.neutron [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updated VIF entry in instance network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:01 np0005634017 nova_compute[243452]: 2026-02-28 10:35:01.377 243456 DEBUG nova.network.neutron [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:01 np0005634017 nova_compute[243452]: 2026-02-28 10:35:01.401 243456 DEBUG oslo_concurrency.lockutils [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Feb 28 05:35:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:04 np0005634017 nova_compute[243452]: 2026-02-28 10:35:04.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:04 np0005634017 nova_compute[243452]: 2026-02-28 10:35:04.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 05:35:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 28 05:35:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:07Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:eb:46 10.100.0.11
Feb 28 05:35:07 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:07Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:eb:46 10.100.0.11
Feb 28 05:35:08 np0005634017 podman[357935]: 2026-02-28 10:35:08.133624982 +0000 UTC m=+0.071375013 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:35:08 np0005634017 podman[357936]: 2026-02-28 10:35:08.137739869 +0000 UTC m=+0.070622282 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223)
Feb 28 05:35:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 288 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Feb 28 05:35:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:08 np0005634017 nova_compute[243452]: 2026-02-28 10:35:08.941 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:08 np0005634017 nova_compute[243452]: 2026-02-28 10:35:08.942 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:08 np0005634017 nova_compute[243452]: 2026-02-28 10:35:08.956 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.028 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.029 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.038 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.039 243456 INFO nova.compute.claims [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.162 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328452197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.740 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.745 243456 DEBUG nova.compute.provider_tree [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.762 243456 DEBUG nova.scheduler.client.report [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.784 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.826 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.827 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.846 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.864 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.948 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.950 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.950 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Creating image(s)#033[00m
Feb 28 05:35:09 np0005634017 nova_compute[243452]: 2026-02-28 10:35:09.977 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.005 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.032 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.035 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.070 243456 DEBUG nova.policy [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.122 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.123 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.124 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.125 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.156 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.160 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f13d8adc-1a08-412b-a9fa-c8a601cda923_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.423 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f13d8adc-1a08-412b-a9fa-c8a601cda923_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.516 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.642 243456 DEBUG nova.objects.instance [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid f13d8adc-1a08-412b-a9fa-c8a601cda923 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.662 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.662 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Ensure instance console log exists: /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.663 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.664 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.664 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:10 np0005634017 nova_compute[243452]: 2026-02-28 10:35:10.677 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully created port: a81e3b75-649b-4321-b436-ab01ab0a9e05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:35:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 165 op/s
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.206 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully created port: d681366d-e6b5-4dad-847e-d091bc7b112d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.833 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully updated port: a81e3b75-649b-4321-b436-ab01ab0a9e05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.909 243456 DEBUG nova.compute.manager [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG nova.compute.manager [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG oslo_concurrency.lockutils [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG oslo_concurrency.lockutils [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:11 np0005634017 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG nova.network.neutron [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.059 243456 DEBUG nova.network.neutron [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.711 243456 DEBUG nova.network.neutron [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.729 243456 DEBUG oslo_concurrency.lockutils [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.739 243456 INFO nova.compute.manager [None req-2c9fc8d1-def2-4a7a-ab0f-844ace4f96fd ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Get console output#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.748 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:35:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 344 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 3.7 MiB/s wr, 75 op/s
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.907 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully updated port: d681366d-e6b5-4dad-847e-d091bc7b112d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.924 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.925 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:12 np0005634017 nova_compute[243452]: 2026-02-28 10:35:12.925 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:35:13 np0005634017 nova_compute[243452]: 2026-02-28 10:35:13.066 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:35:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.052 243456 DEBUG nova.compute.manager [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.053 243456 DEBUG nova.compute.manager [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-d681366d-e6b5-4dad-847e-d091bc7b112d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.054 243456 DEBUG oslo_concurrency.lockutils [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.193 243456 DEBUG nova.compute.manager [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.193 243456 DEBUG nova.compute.manager [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing instance network info cache due to event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.194 243456 DEBUG oslo_concurrency.lockutils [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.194 243456 DEBUG oslo_concurrency.lockutils [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.194 243456 DEBUG nova.network.neutron [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.861 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.892 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.893 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance network_info: |[{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.894 243456 DEBUG oslo_concurrency.lockutils [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.894 243456 DEBUG nova.network.neutron [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port d681366d-e6b5-4dad-847e-d091bc7b112d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.901 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start _get_guest_xml network_info=[{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.908 243456 WARNING nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.914 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.915 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.927 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.928 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.928 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.929 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.929 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.930 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.930 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.930 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.931 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.931 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.932 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.932 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.932 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.933 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:35:14 np0005634017 nova_compute[243452]: 2026-02-28 10:35:14.938 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:35:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037745291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:35:15 np0005634017 nova_compute[243452]: 2026-02-28 10:35:15.510 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:15 np0005634017 nova_compute[243452]: 2026-02-28 10:35:15.536 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:15 np0005634017 nova_compute[243452]: 2026-02-28 10:35:15.541 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:15.859 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:15 np0005634017 nova_compute[243452]: 2026-02-28 10:35:15.860 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:15.862 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:35:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:35:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408567993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.102 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.105 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.106 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.107 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.109 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.109 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.110 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.112 243456 DEBUG nova.objects.instance [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid f13d8adc-1a08-412b-a9fa-c8a601cda923 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.134 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <uuid>f13d8adc-1a08-412b-a9fa-c8a601cda923</uuid>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <name>instance-00000082</name>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1382104323</nova:name>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:35:14</nova:creationTime>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:port uuid="a81e3b75-649b-4321-b436-ab01ab0a9e05">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <nova:port uuid="d681366d-e6b5-4dad-847e-d091bc7b112d">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:febb:a7ba" ipVersion="6"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <entry name="serial">f13d8adc-1a08-412b-a9fa-c8a601cda923</entry>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <entry name="uuid">f13d8adc-1a08-412b-a9fa-c8a601cda923</entry>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f13d8adc-1a08-412b-a9fa-c8a601cda923_disk">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:2e:7a:c9"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <target dev="tapa81e3b75-64"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:bb:a7:ba"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <target dev="tapd681366d-e6"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/console.log" append="off"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:35:16 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:35:16 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:35:16 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:35:16 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.135 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Preparing to wait for external event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.135 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.136 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.136 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.137 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Preparing to wait for external event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.137 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.137 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.138 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.139 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.140 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.141 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.142 243456 DEBUG os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.143 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.144 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.149 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa81e3b75-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.150 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa81e3b75-64, col_values=(('external_ids', {'iface-id': 'a81e3b75-649b-4321-b436-ab01ab0a9e05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:7a:c9', 'vm-uuid': 'f13d8adc-1a08-412b-a9fa-c8a601cda923'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:16 np0005634017 NetworkManager[49805]: <info>  [1772274916.1545] manager: (tapa81e3b75-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.161 243456 INFO os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64')#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.162 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.162 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.163 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.163 243456 DEBUG os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.164 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.164 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.167 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd681366d-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.168 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd681366d-e6, col_values=(('external_ids', {'iface-id': 'd681366d-e6b5-4dad-847e-d091bc7b112d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:a7:ba', 'vm-uuid': 'f13d8adc-1a08-412b-a9fa-c8a601cda923'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 NetworkManager[49805]: <info>  [1772274916.1704] manager: (tapd681366d-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.171 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.178 243456 INFO os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6')#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.242 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.243 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.243 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:2e:7a:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.243 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:bb:a7:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.244 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Using config drive#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.274 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.406 243456 DEBUG nova.network.neutron [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updated VIF entry in instance network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.407 243456 DEBUG nova.network.neutron [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.432 243456 DEBUG oslo_concurrency.lockutils [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.590 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Creating config drive at /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.596 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3h85tpwm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.750 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3h85tpwm" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.788 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.792 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.866 243456 DEBUG nova.network.neutron [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updated VIF entry in instance network info cache for port d681366d-e6b5-4dad-847e-d091bc7b112d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.867 243456 DEBUG nova.network.neutron [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.886 243456 DEBUG oslo_concurrency.lockutils [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.962 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:16 np0005634017 nova_compute[243452]: 2026-02-28 10:35:16.963 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deleting local config drive /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config because it was imported into RBD.#033[00m
Feb 28 05:35:17 np0005634017 kernel: tapa81e3b75-64: entered promiscuous mode
Feb 28 05:35:17 np0005634017 NetworkManager[49805]: <info>  [1772274917.0125] manager: (tapa81e3b75-64): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01311|binding|INFO|Claiming lport a81e3b75-649b-4321-b436-ab01ab0a9e05 for this chassis.
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01312|binding|INFO|a81e3b75-649b-4321-b436-ab01ab0a9e05: Claiming fa:16:3e:2e:7a:c9 10.100.0.13
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.026 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:7a:c9 10.100.0.13'], port_security=['fa:16:3e:2e:7a:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a81e3b75-649b-4321-b436-ab01ab0a9e05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.029 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a81e3b75-649b-4321-b436-ab01ab0a9e05 in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b bound to our chassis#033[00m
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01313|binding|INFO|Setting lport a81e3b75-649b-4321-b436-ab01ab0a9e05 ovn-installed in OVS
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01314|binding|INFO|Setting lport a81e3b75-649b-4321-b436-ab01ab0a9e05 up in Southbound
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.032 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.033 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b#033[00m
Feb 28 05:35:17 np0005634017 kernel: tapd681366d-e6: entered promiscuous mode
Feb 28 05:35:17 np0005634017 NetworkManager[49805]: <info>  [1772274917.0362] manager: (tapd681366d-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01315|if_status|INFO|Not updating pb chassis for d681366d-e6b5-4dad-847e-d091bc7b112d now as sb is readonly
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01316|binding|INFO|Claiming lport d681366d-e6b5-4dad-847e-d091bc7b112d for this chassis.
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01317|binding|INFO|d681366d-e6b5-4dad-847e-d091bc7b112d: Claiming fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01318|binding|INFO|Setting lport d681366d-e6b5-4dad-847e-d091bc7b112d ovn-installed in OVS
Feb 28 05:35:17 np0005634017 systemd-udevd[358308]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:35:17 np0005634017 systemd-udevd[358309]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.051 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.054 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], port_security=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febb:a7ba/64', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d681366d-e6b5-4dad-847e-d091bc7b112d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:17Z|01319|binding|INFO|Setting lport d681366d-e6b5-4dad-847e-d091bc7b112d up in Southbound
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.054 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae57246e-daed-4f66-b5a6-22d279fd03d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 NetworkManager[49805]: <info>  [1772274917.0646] device (tapa81e3b75-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:35:17 np0005634017 NetworkManager[49805]: <info>  [1772274917.0655] device (tapa81e3b75-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:35:17 np0005634017 NetworkManager[49805]: <info>  [1772274917.0674] device (tapd681366d-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:35:17 np0005634017 NetworkManager[49805]: <info>  [1772274917.0682] device (tapd681366d-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:35:17 np0005634017 systemd-machined[209480]: New machine qemu-163-instance-00000082.
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.091 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[234acf0a-6926-4353-9cfa-5adff19d3900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.094 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[483b6df9-5365-4a3f-a9de-2abbcd384c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 systemd[1]: Started Virtual Machine qemu-163-instance-00000082.
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.123 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2847b9c9-b7ad-4875-803d-f281f4f3706a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.142 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d2f12-1e55-4824-a2fd-5e5f7f5bb776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358325, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.158 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0126c2-67e3-4825-8851-e1f80fe9361e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639207, 'tstamp': 639207}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358327, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639210, 'tstamp': 639210}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358327, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.161 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.164 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7aad4f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.164 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.165 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d7aad4f-10, col_values=(('external_ids', {'iface-id': '99dd359f-3ab9-477c-a58c-1c56298be9c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.165 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.167 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d681366d-e6b5-4dad-847e-d091bc7b112d in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.168 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.185 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[feb9ac71-9a39-4eda-ba7e-34c7c863e491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.207 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3546a2-ce02-41f6-8ae8-cef92046b870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.210 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[808df8af-4cc8-4a4d-8d22-d137882b5efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.231 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb1a0c1-af39-4651-8657-381790bbe548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76d901c1-7e8b-4fd6-935f-ff430078e2b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 5, 'rx_bytes': 1572, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 5, 'rx_bytes': 1572, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358333, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.259 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ea741ae6-231c-43d2-9faf-37bb0a5843ba]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49ec66b0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639299, 'tstamp': 639299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358334, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.261 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49ec66b0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49ec66b0-80, col_values=(('external_ids', {'iface-id': '0d93ffc1-1158-4b54-b2c1-6b7d48d62d16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.265 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.548 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274917.5477853, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.549 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Started (Lifecycle Event)#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.578 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.583 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274917.5481324, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.583 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.601 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.605 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:35:17 np0005634017 nova_compute[243452]: 2026-02-28 10:35:17.625 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:35:18 np0005634017 nova_compute[243452]: 2026-02-28 10:35:18.686 243456 DEBUG nova.compute.manager [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:18 np0005634017 nova_compute[243452]: 2026-02-28 10:35:18.686 243456 DEBUG oslo_concurrency.lockutils [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:18 np0005634017 nova_compute[243452]: 2026-02-28 10:35:18.687 243456 DEBUG oslo_concurrency.lockutils [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:18 np0005634017 nova_compute[243452]: 2026-02-28 10:35:18.687 243456 DEBUG oslo_concurrency.lockutils [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:18 np0005634017 nova_compute[243452]: 2026-02-28 10:35:18.688 243456 DEBUG nova.compute.manager [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Processing event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:35:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Feb 28 05:35:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:19 np0005634017 nova_compute[243452]: 2026-02-28 10:35:19.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.772 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.774 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No event matching network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 in dict_keys([('network-vif-plugged', 'd681366d-e6b5-4dad-847e-d091bc7b112d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 WARNING nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Processing event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.777 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.777 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.777 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.778 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.778 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.778 243456 WARNING nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.779 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.784 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274920.7842233, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.785 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.787 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.792 243456 INFO nova.virt.libvirt.driver [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance spawned successfully.#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.792 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.820 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.826 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.827 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.828 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.828 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.829 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 2.9 MiB/s wr, 147 op/s
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.830 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.841 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.891 243456 INFO nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 10.94 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.892 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.971 243456 INFO nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 11.97 seconds to build instance.#033[00m
Feb 28 05:35:20 np0005634017 nova_compute[243452]: 2026-02-28 10:35:20.997 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:21 np0005634017 nova_compute[243452]: 2026-02-28 10:35:21.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 1.0 MiB/s wr, 159 op/s
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.327 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.328 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.350 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.436 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.437 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.444 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:35:23 np0005634017 nova_compute[243452]: 2026-02-28 10:35:23.444 243456 INFO nova.compute.claims [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:35:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:23.864 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.041 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.405 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654053878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.640 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.647 243456 DEBUG nova.compute.provider_tree [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.665 243456 DEBUG nova.scheduler.client.report [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.694 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.695 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.762 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.763 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.783 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.800 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:35:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 248 KiB/s wr, 148 op/s
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.898 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.900 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.900 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Creating image(s)#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.927 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.953 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.980 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:24 np0005634017 nova_compute[243452]: 2026-02-28 10:35:24.985 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.023 243456 DEBUG nova.policy [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.063 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.064 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.065 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.065 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.089 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.095 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b7d98834-924e-4fbd-a701-d22949f44f77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.307 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b7d98834-924e-4fbd-a701-d22949f44f77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.378 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.471 243456 DEBUG nova.objects.instance [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid b7d98834-924e-4fbd-a701-d22949f44f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.488 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Ensure instance console log exists: /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:25 np0005634017 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:26 np0005634017 nova_compute[243452]: 2026-02-28 10:35:26.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:26 np0005634017 nova_compute[243452]: 2026-02-28 10:35:26.764 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Successfully created port: f6a52694-af4a-4ecc-926c-b1867c375983 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:35:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 380 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 609 KiB/s wr, 154 op/s
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.578 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Successfully updated port: f6a52694-af4a-4ecc-926c-b1867c375983 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.600 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.676 243456 DEBUG nova.compute.manager [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.677 243456 DEBUG nova.compute.manager [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing instance network info cache due to event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.677 243456 DEBUG oslo_concurrency.lockutils [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.755 243456 DEBUG nova.compute.manager [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.756 243456 DEBUG nova.compute.manager [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.756 243456 DEBUG oslo_concurrency.lockutils [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.757 243456 DEBUG oslo_concurrency.lockutils [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.757 243456 DEBUG nova.network.neutron [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:27 np0005634017 nova_compute[243452]: 2026-02-28 10:35:27.786 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:35:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 385 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 656 KiB/s wr, 155 op/s
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.851 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.871 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.872 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance network_info: |[{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.873 243456 DEBUG oslo_concurrency.lockutils [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.874 243456 DEBUG nova.network.neutron [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.880 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start _get_guest_xml network_info=[{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.891 243456 WARNING nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.907 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.908 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:35:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.912 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.913 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.914 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.915 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.916 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.916 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.917 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.918 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.918 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.919 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.919 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.920 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.921 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.921 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:35:28 np0005634017 nova_compute[243452]: 2026-02-28 10:35:28.928 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.134 243456 DEBUG nova.network.neutron [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updated VIF entry in instance network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.135 243456 DEBUG nova.network.neutron [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:35:29
Feb 28 05:35:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:35:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:35:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'images', 'vms', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb 28 05:35:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.153 243456 DEBUG oslo_concurrency.lockutils [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:35:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483494627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.559 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.597 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:29 np0005634017 nova_compute[243452]: 2026-02-28 10:35:29.605 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:35:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644296980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.140 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.143 243456 DEBUG nova.virt.libvirt.vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1920589440',display_name='tempest-TestNetworkBasicOps-server-1920589440',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1920589440',id=131,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx2yXJrlxl/hjn9MMHE5KhSD01UI6o5tdb/M6CuQZyKKSCBz23zQdd3YZK9kMbqEHNUQ9eYVQehuDxFu0Ax80TEtUbLmqV5yzH0bXR2RG7KLxh93Ak1z3EjsfV7s46ZjA==',key_name='tempest-TestNetworkBasicOps-1493336509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-tu2phxkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:24Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b7d98834-924e-4fbd-a701-d22949f44f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.143 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.145 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.147 243456 DEBUG nova.objects.instance [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7d98834-924e-4fbd-a701-d22949f44f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.176 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <uuid>b7d98834-924e-4fbd-a701-d22949f44f77</uuid>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <name>instance-00000083</name>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1920589440</nova:name>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:35:28</nova:creationTime>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <nova:port uuid="f6a52694-af4a-4ecc-926c-b1867c375983">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <entry name="serial">b7d98834-924e-4fbd-a701-d22949f44f77</entry>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <entry name="uuid">b7d98834-924e-4fbd-a701-d22949f44f77</entry>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b7d98834-924e-4fbd-a701-d22949f44f77_disk">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b7d98834-924e-4fbd-a701-d22949f44f77_disk.config">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:21:5e:ba"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <target dev="tapf6a52694-af"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/console.log" append="off"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:35:30 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:35:30 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:35:30 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:35:30 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.183 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Preparing to wait for external event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.183 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.183 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.184 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.185 243456 DEBUG nova.virt.libvirt.vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1920589440',display_name='tempest-TestNetworkBasicOps-server-1920589440',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1920589440',id=131,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx2yXJrlxl/hjn9MMHE5KhSD01UI6o5tdb/M6CuQZyKKSCBz23zQdd3YZK9kMbqEHNUQ9eYVQehuDxFu0Ax80TEtUbLmqV5yzH0bXR2RG7KLxh93Ak1z3EjsfV7s46ZjA==',key_name='tempest-TestNetworkBasicOps-1493336509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-tu2phxkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:24Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b7d98834-924e-4fbd-a701-d22949f44f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.185 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.186 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.187 243456 DEBUG os_vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.188 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.189 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.189 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.193 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6a52694-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.193 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6a52694-af, col_values=(('external_ids', {'iface-id': 'f6a52694-af4a-4ecc-926c-b1867c375983', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:5e:ba', 'vm-uuid': 'b7d98834-924e-4fbd-a701-d22949f44f77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:30 np0005634017 NetworkManager[49805]: <info>  [1772274930.1963] manager: (tapf6a52694-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.203 243456 INFO os_vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af')#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.218 243456 DEBUG nova.network.neutron [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updated VIF entry in instance network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.219 243456 DEBUG nova.network.neutron [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.252 243456 DEBUG oslo_concurrency.lockutils [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.271 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.272 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.272 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:21:5e:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.273 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Using config drive#033[00m
Feb 28 05:35:30 np0005634017 nova_compute[243452]: 2026-02-28 10:35:30.301 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:35:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.039 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Creating config drive at /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.047 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0b7vozgt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.188 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0b7vozgt" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.232 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.238 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config b7d98834-924e-4fbd-a701-d22949f44f77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.397 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config b7d98834-924e-4fbd-a701-d22949f44f77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.398 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deleting local config drive /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config because it was imported into RBD.#033[00m
Feb 28 05:35:31 np0005634017 NetworkManager[49805]: <info>  [1772274931.4390] manager: (tapf6a52694-af): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Feb 28 05:35:31 np0005634017 kernel: tapf6a52694-af: entered promiscuous mode
Feb 28 05:35:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:31Z|01320|binding|INFO|Claiming lport f6a52694-af4a-4ecc-926c-b1867c375983 for this chassis.
Feb 28 05:35:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:31Z|01321|binding|INFO|f6a52694-af4a-4ecc-926c-b1867c375983: Claiming fa:16:3e:21:5e:ba 10.100.0.12
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.452 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:5e:ba 10.100.0.12'], port_security=['fa:16:3e:21:5e:ba 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b7d98834-924e-4fbd-a701-d22949f44f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7156ff74-6e4d-4300-84e4-6890f3b16e55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6a52694-af4a-4ecc-926c-b1867c375983) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6a52694-af4a-4ecc-926c-b1867c375983 in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 bound to our chassis#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.455 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c48ff26a-49d0-4144-b27f-14431e751ba2#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:31Z|01322|binding|INFO|Setting lport f6a52694-af4a-4ecc-926c-b1867c375983 ovn-installed in OVS
Feb 28 05:35:31 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:31Z|01323|binding|INFO|Setting lport f6a52694-af4a-4ecc-926c-b1867c375983 up in Southbound
Feb 28 05:35:31 np0005634017 systemd-udevd[358701]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[108f6beb-28cf-4a5e-92c9-49ea81bab93f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:31 np0005634017 NetworkManager[49805]: <info>  [1772274931.4866] device (tapf6a52694-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:35:31 np0005634017 NetworkManager[49805]: <info>  [1772274931.4872] device (tapf6a52694-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:35:31 np0005634017 systemd-machined[209480]: New machine qemu-164-instance-00000083.
Feb 28 05:35:31 np0005634017 systemd[1]: Started Virtual Machine qemu-164-instance-00000083.
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.511 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cbb520-ef39-48ef-8f02-50b9a833de2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.516 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[41e244bd-37e7-4374-9049-87d1c83e3c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.545 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b1b112-ae1d-47ba-bfda-72bf5a32ba04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c520d69-4340-4283-a88f-a55cf4cca3c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358715, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.590 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[132d835d-ee8f-416c-ae99-c3904704cab0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640284, 'tstamp': 640284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358717, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640286, 'tstamp': 640286}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358717, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.592 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.595 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.595 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc48ff26a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.596 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.596 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc48ff26a-40, col_values=(('external_ids', {'iface-id': 'cb4199b0-c270-4ea9-a607-db9799f6f157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.597 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.793 243456 DEBUG nova.compute.manager [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG oslo_concurrency.lockutils [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG oslo_concurrency.lockutils [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG oslo_concurrency.lockutils [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:31 np0005634017 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG nova.compute.manager [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Processing event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.266 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.268 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274932.2652206, b7d98834-924e-4fbd-a701-d22949f44f77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.269 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Started (Lifecycle Event)#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.273 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.277 243456 INFO nova.virt.libvirt.driver [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance spawned successfully.#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.278 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.295 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.302 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.307 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.308 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.308 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.309 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.309 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.310 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.336 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.337 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274932.272231, b7d98834-924e-4fbd-a701-d22949f44f77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.337 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Paused (Lifecycle Event)
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.358 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.362 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274932.2727046, b7d98834-924e-4fbd-a701-d22949f44f77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.362 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Resumed (Lifecycle Event)
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.374 243456 INFO nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 7.48 seconds to spawn the instance on the hypervisor.
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.375 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.418 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.446 243456 INFO nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 9.04 seconds to build instance.
Feb 28 05:35:32 np0005634017 nova_compute[243452]: 2026-02-28 10:35:32.462 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:35:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.841 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.887 243456 DEBUG nova.compute.manager [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG oslo_concurrency.lockutils [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG oslo_concurrency.lockutils [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG oslo_concurrency.lockutils [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG nova.compute.manager [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] No waiting events found dispatching network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:35:33 np0005634017 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 WARNING nova.compute.manager [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received unexpected event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 for instance with vm_state active and task_state None.
Feb 28 05:35:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:34 np0005634017 nova_compute[243452]: 2026-02-28 10:35:34.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:35:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 414 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 05:35:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:34Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:7a:c9 10.100.0.13
Feb 28 05:35:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:34Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:7a:c9 10.100.0.13
Feb 28 05:35:35 np0005634017 nova_compute[243452]: 2026-02-28 10:35:35.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:35:35 np0005634017 nova_compute[243452]: 2026-02-28 10:35:35.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:35:36 np0005634017 nova_compute[243452]: 2026-02-28 10:35:36.016 243456 DEBUG nova.compute.manager [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:35:36 np0005634017 nova_compute[243452]: 2026-02-28 10:35:36.018 243456 DEBUG nova.compute.manager [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing instance network info cache due to event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:35:36 np0005634017 nova_compute[243452]: 2026-02-28 10:35:36.018 243456 DEBUG oslo_concurrency.lockutils [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:35:36 np0005634017 nova_compute[243452]: 2026-02-28 10:35:36.020 243456 DEBUG oslo_concurrency.lockutils [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:35:36 np0005634017 nova_compute[243452]: 2026-02-28 10:35:36.020 243456 DEBUG nova.network.neutron [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:35:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:35:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 430 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.214524887 +0000 UTC m=+0.055247987 container create a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:35:37 np0005634017 nova_compute[243452]: 2026-02-28 10:35:37.223 243456 DEBUG nova.network.neutron [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updated VIF entry in instance network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:35:37 np0005634017 nova_compute[243452]: 2026-02-28 10:35:37.224 243456 DEBUG nova.network.neutron [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:35:37 np0005634017 nova_compute[243452]: 2026-02-28 10:35:37.252 243456 DEBUG oslo_concurrency.lockutils [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:35:37 np0005634017 systemd[1]: Started libpod-conmon-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope.
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.194704765 +0000 UTC m=+0.035427885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.307807039 +0000 UTC m=+0.148530159 container init a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.316508595 +0000 UTC m=+0.157231705 container start a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:35:37 np0005634017 systemd[1]: libpod-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope: Deactivated successfully.
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.324632296 +0000 UTC m=+0.165355436 container attach a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:35:37 np0005634017 tender_mahavira[358991]: 167 167
Feb 28 05:35:37 np0005634017 conmon[358991]: conmon a3a772c3db91bd5050bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope/container/memory.events
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.328647589 +0000 UTC m=+0.169370699 container died a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:35:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-16b8dfa4ad5c05acdcfc61890be661bbf6614484bfa9ca86a9ab461fa8553dc2-merged.mount: Deactivated successfully.
Feb 28 05:35:37 np0005634017 podman[358975]: 2026-02-28 10:35:37.37349386 +0000 UTC m=+0.214216950 container remove a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 05:35:37 np0005634017 systemd[1]: libpod-conmon-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope: Deactivated successfully.
Feb 28 05:35:37 np0005634017 podman[359015]: 2026-02-28 10:35:37.529861479 +0000 UTC m=+0.048293189 container create 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:35:37 np0005634017 systemd[1]: Started libpod-conmon-9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313.scope.
Feb 28 05:35:37 np0005634017 podman[359015]: 2026-02-28 10:35:37.5048237 +0000 UTC m=+0.023255400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:37 np0005634017 podman[359015]: 2026-02-28 10:35:37.646640067 +0000 UTC m=+0.165071787 container init 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:35:37 np0005634017 podman[359015]: 2026-02-28 10:35:37.652589216 +0000 UTC m=+0.171020916 container start 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 05:35:37 np0005634017 podman[359015]: 2026-02-28 10:35:37.656545738 +0000 UTC m=+0.174977428 container attach 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]: [
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:    {
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "available": false,
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "being_replaced": false,
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "ceph_device_lvm": false,
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "lsm_data": {},
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "lvs": [],
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "path": "/dev/sr0",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "rejected_reasons": [
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "Insufficient space (<5GB)",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "Has a FileSystem"
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        ],
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        "sys_api": {
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "actuators": null,
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "device_nodes": [
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:                "sr0"
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            ],
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "devname": "sr0",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "human_readable_size": "482.00 KB",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "id_bus": "ata",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "model": "QEMU DVD-ROM",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "nr_requests": "2",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "parent": "/dev/sr0",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "partitions": {},
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "path": "/dev/sr0",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "removable": "1",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "rev": "2.5+",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "ro": "0",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "rotational": "1",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "sas_address": "",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "sas_device_handle": "",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "scheduler_mode": "mq-deadline",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "sectors": 0,
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "sectorsize": "2048",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "size": 493568.0,
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "support_discard": "2048",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "type": "disk",
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:            "vendor": "QEMU"
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:        }
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]:    }
Feb 28 05:35:38 np0005634017 condescending_bhabha[359032]: ]
Feb 28 05:35:38 np0005634017 systemd[1]: libpod-9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313.scope: Deactivated successfully.
Feb 28 05:35:38 np0005634017 podman[359015]: 2026-02-28 10:35:38.268813562 +0000 UTC m=+0.787245242 container died 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:35:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc-merged.mount: Deactivated successfully.
Feb 28 05:35:38 np0005634017 nova_compute[243452]: 2026-02-28 10:35:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:35:38 np0005634017 podman[359015]: 2026-02-28 10:35:38.326805624 +0000 UTC m=+0.845237304 container remove 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:35:38 np0005634017 systemd[1]: libpod-conmon-9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313.scope: Deactivated successfully.
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:35:38 np0005634017 podman[359855]: 2026-02-28 10:35:38.416710641 +0000 UTC m=+0.099991043 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:35:38 np0005634017 podman[359846]: 2026-02-28 10:35:38.43360931 +0000 UTC m=+0.126775412 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.833153348 +0000 UTC m=+0.054300179 container create c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:35:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 163 op/s
Feb 28 05:35:38 np0005634017 systemd[1]: Started libpod-conmon-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope.
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.805306419 +0000 UTC m=+0.026453300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.935750924 +0000 UTC m=+0.156897815 container init c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.944524043 +0000 UTC m=+0.165670854 container start c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.947714053 +0000 UTC m=+0.168860964 container attach c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:35:38 np0005634017 serene_bose[359982]: 167 167
Feb 28 05:35:38 np0005634017 systemd[1]: libpod-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope: Deactivated successfully.
Feb 28 05:35:38 np0005634017 conmon[359982]: conmon c5054d18fbcf2e83c261 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope/container/memory.events
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.952463368 +0000 UTC m=+0.173610209 container died c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:35:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-530bd5c9f50dbb13b9ff04eae7a6c92ad79fad44736ad559be153dc05f5e7204-merged.mount: Deactivated successfully.
Feb 28 05:35:38 np0005634017 podman[359965]: 2026-02-28 10:35:38.993284694 +0000 UTC m=+0.214431525 container remove c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:35:39 np0005634017 systemd[1]: libpod-conmon-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope: Deactivated successfully.
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.209816988 +0000 UTC m=+0.056015658 container create 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:35:39 np0005634017 systemd[1]: Started libpod-conmon-905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be.scope.
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.186695563 +0000 UTC m=+0.032894263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.311391945 +0000 UTC m=+0.157590655 container init 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 05:35:39 np0005634017 nova_compute[243452]: 2026-02-28 10:35:39.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.320567305 +0000 UTC m=+0.166765965 container start 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.325551456 +0000 UTC m=+0.171750196 container attach 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:39 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:35:39 np0005634017 nova_compute[243452]: 2026-02-28 10:35:39.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:39 np0005634017 mystifying_murdock[360024]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:35:39 np0005634017 mystifying_murdock[360024]: --> All data devices are unavailable
Feb 28 05:35:39 np0005634017 systemd[1]: libpod-905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be.scope: Deactivated successfully.
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.802852037 +0000 UTC m=+0.649050697 container died 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:35:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac-merged.mount: Deactivated successfully.
Feb 28 05:35:39 np0005634017 podman[360007]: 2026-02-28 10:35:39.845414863 +0000 UTC m=+0.691613523 container remove 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:35:39 np0005634017 systemd[1]: libpod-conmon-905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be.scope: Deactivated successfully.
Feb 28 05:35:40 np0005634017 nova_compute[243452]: 2026-02-28 10:35:40.202 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:40 np0005634017 nova_compute[243452]: 2026-02-28 10:35:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:40 np0005634017 nova_compute[243452]: 2026-02-28 10:35:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:40 np0005634017 nova_compute[243452]: 2026-02-28 10:35:40.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.365945237 +0000 UTC m=+0.058444327 container create 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:35:40 np0005634017 systemd[1]: Started libpod-conmon-01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672.scope.
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.344269283 +0000 UTC m=+0.036768363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.471656351 +0000 UTC m=+0.164155431 container init 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.480960455 +0000 UTC m=+0.173459565 container start 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.485271707 +0000 UTC m=+0.177770797 container attach 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:35:40 np0005634017 nifty_swanson[360134]: 167 167
Feb 28 05:35:40 np0005634017 systemd[1]: libpod-01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672.scope: Deactivated successfully.
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.488941201 +0000 UTC m=+0.181440341 container died 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:35:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3e99647b06ae26ff0596cb2763ab89a66ee1933f41bc524e00abee55de202676-merged.mount: Deactivated successfully.
Feb 28 05:35:40 np0005634017 podman[360118]: 2026-02-28 10:35:40.535822929 +0000 UTC m=+0.228321999 container remove 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:35:40 np0005634017 systemd[1]: libpod-conmon-01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672.scope: Deactivated successfully.
Feb 28 05:35:40 np0005634017 podman[360158]: 2026-02-28 10:35:40.722106166 +0000 UTC m=+0.050782470 container create d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 28 05:35:40 np0005634017 systemd[1]: Started libpod-conmon-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope.
Feb 28 05:35:40 np0005634017 podman[360158]: 2026-02-28 10:35:40.701824931 +0000 UTC m=+0.030501245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:40 np0005634017 podman[360158]: 2026-02-28 10:35:40.819860905 +0000 UTC m=+0.148537199 container init d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 05:35:40 np0005634017 podman[360158]: 2026-02-28 10:35:40.825168405 +0000 UTC m=+0.153844699 container start d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:35:40 np0005634017 podman[360158]: 2026-02-28 10:35:40.828546521 +0000 UTC m=+0.157222815 container attach d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:35:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 150 op/s
Feb 28 05:35:41 np0005634017 great_mestorf[360175]: {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:    "0": [
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:        {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "devices": [
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "/dev/loop3"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            ],
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_name": "ceph_lv0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_size": "21470642176",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "name": "ceph_lv0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "tags": {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cluster_name": "ceph",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.crush_device_class": "",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.encrypted": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.objectstore": "bluestore",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osd_id": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.type": "block",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.vdo": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.with_tpm": "0"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            },
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "type": "block",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "vg_name": "ceph_vg0"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:        }
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:    ],
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:    "1": [
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:        {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "devices": [
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "/dev/loop4"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            ],
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_name": "ceph_lv1",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_size": "21470642176",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "name": "ceph_lv1",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "tags": {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cluster_name": "ceph",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.crush_device_class": "",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.encrypted": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.objectstore": "bluestore",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osd_id": "1",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.type": "block",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.vdo": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.with_tpm": "0"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            },
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "type": "block",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "vg_name": "ceph_vg1"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:        }
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:    ],
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:    "2": [
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:        {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "devices": [
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "/dev/loop5"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            ],
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_name": "ceph_lv2",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_size": "21470642176",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "name": "ceph_lv2",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "tags": {
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.cluster_name": "ceph",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.crush_device_class": "",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.encrypted": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.objectstore": "bluestore",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osd_id": "2",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.type": "block",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.vdo": "0",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:                "ceph.with_tpm": "0"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            },
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "type": "block",
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:            "vg_name": "ceph_vg2"
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:        }
Feb 28 05:35:41 np0005634017 great_mestorf[360175]:    ]
Feb 28 05:35:41 np0005634017 great_mestorf[360175]: }
Feb 28 05:35:41 np0005634017 systemd[1]: libpod-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope: Deactivated successfully.
Feb 28 05:35:41 np0005634017 conmon[360175]: conmon d8f3d3efe154f3d36a39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope/container/memory.events
Feb 28 05:35:41 np0005634017 podman[360158]: 2026-02-28 10:35:41.152622061 +0000 UTC m=+0.481298365 container died d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:35:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb-merged.mount: Deactivated successfully.
Feb 28 05:35:41 np0005634017 podman[360158]: 2026-02-28 10:35:41.2199963 +0000 UTC m=+0.548672634 container remove d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026378167702551347 of space, bias 1.0, pg target 0.7913450310765404 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938661775652953 of space, bias 1.0, pg target 0.7481598532695886 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.339634528622432e-07 of space, bias 4.0, pg target 0.0008807561434346919 quantized to 16 (current 16)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:35:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:35:41 np0005634017 systemd[1]: libpod-conmon-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope: Deactivated successfully.
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.758967368 +0000 UTC m=+0.050255135 container create eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:35:41 np0005634017 systemd[1]: Started libpod-conmon-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope.
Feb 28 05:35:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.742912093 +0000 UTC m=+0.034199900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.842825253 +0000 UTC m=+0.134113040 container init eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.850630184 +0000 UTC m=+0.141917961 container start eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.854818703 +0000 UTC m=+0.146106500 container attach eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:35:41 np0005634017 recursing_greider[360275]: 167 167
Feb 28 05:35:41 np0005634017 systemd[1]: libpod-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope: Deactivated successfully.
Feb 28 05:35:41 np0005634017 conmon[360275]: conmon eff3e5f6a458e743b082 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope/container/memory.events
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.857215001 +0000 UTC m=+0.148502798 container died eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:35:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-324704176df6e23636e319956af41dad779dc0ae7fd7c3dcc247d2ac2fb9788e-merged.mount: Deactivated successfully.
Feb 28 05:35:41 np0005634017 podman[360258]: 2026-02-28 10:35:41.906460126 +0000 UTC m=+0.197747903 container remove eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:35:41 np0005634017 systemd[1]: libpod-conmon-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope: Deactivated successfully.
Feb 28 05:35:42 np0005634017 podman[360299]: 2026-02-28 10:35:42.061655112 +0000 UTC m=+0.041728263 container create 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:35:42 np0005634017 systemd[1]: Started libpod-conmon-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope.
Feb 28 05:35:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:35:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:35:42 np0005634017 podman[360299]: 2026-02-28 10:35:42.131695156 +0000 UTC m=+0.111768307 container init 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:35:42 np0005634017 podman[360299]: 2026-02-28 10:35:42.139198539 +0000 UTC m=+0.119271730 container start 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:35:42 np0005634017 podman[360299]: 2026-02-28 10:35:42.04462971 +0000 UTC m=+0.024702891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:35:42 np0005634017 podman[360299]: 2026-02-28 10:35:42.143196552 +0000 UTC m=+0.123269703 container attach 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:35:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 28 05:35:42 np0005634017 lvm[360394]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:35:42 np0005634017 lvm[360395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:35:42 np0005634017 lvm[360395]: VG ceph_vg0 finished
Feb 28 05:35:42 np0005634017 lvm[360394]: VG ceph_vg1 finished
Feb 28 05:35:42 np0005634017 lvm[360397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:35:42 np0005634017 lvm[360397]: VG ceph_vg2 finished
Feb 28 05:35:43 np0005634017 zen_shockley[360316]: {}
Feb 28 05:35:43 np0005634017 systemd[1]: libpod-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope: Deactivated successfully.
Feb 28 05:35:43 np0005634017 systemd[1]: libpod-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope: Consumed 1.310s CPU time.
Feb 28 05:35:43 np0005634017 podman[360299]: 2026-02-28 10:35:43.061504915 +0000 UTC m=+1.041578086 container died 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:35:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd-merged.mount: Deactivated successfully.
Feb 28 05:35:43 np0005634017 podman[360299]: 2026-02-28 10:35:43.121218897 +0000 UTC m=+1.101292078 container remove 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:35:43 np0005634017 systemd[1]: libpod-conmon-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope: Deactivated successfully.
Feb 28 05:35:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:35:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:35:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:43 np0005634017 nova_compute[243452]: 2026-02-28 10:35:43.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:43 np0005634017 nova_compute[243452]: 2026-02-28 10:35:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:35:44 np0005634017 nova_compute[243452]: 2026-02-28 10:35:44.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 28 05:35:45 np0005634017 nova_compute[243452]: 2026-02-28 10:35:45.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:35:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958862523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:35:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:35:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958862523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.351 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Feb 28 05:35:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267820184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:46 np0005634017 nova_compute[243452]: 2026-02-28 10:35:46.970 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.087 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.088 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.095 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.096 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.104 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.106 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.112 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.113 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.247 243456 DEBUG nova.compute.manager [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.248 243456 DEBUG nova.compute.manager [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.248 243456 DEBUG oslo_concurrency.lockutils [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.249 243456 DEBUG oslo_concurrency.lockutils [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.249 243456 DEBUG nova.network.neutron [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.356 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.356 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.357 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.357 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.358 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.359 243456 INFO nova.compute.manager [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Terminating instance#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.360 243456 DEBUG nova.compute.manager [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.378 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.379 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2903MB free_disk=59.8299172706902GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:5e:ba 10.100.0.12
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:5e:ba 10.100.0.12
Feb 28 05:35:47 np0005634017 kernel: tapa81e3b75-64 (unregistering): left promiscuous mode
Feb 28 05:35:47 np0005634017 NetworkManager[49805]: <info>  [1772274947.4181] device (tapa81e3b75-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|01324|binding|INFO|Releasing lport a81e3b75-649b-4321-b436-ab01ab0a9e05 from this chassis (sb_readonly=0)
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|01325|binding|INFO|Setting lport a81e3b75-649b-4321-b436-ab01ab0a9e05 down in Southbound
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|01326|binding|INFO|Removing iface tapa81e3b75-64 ovn-installed in OVS
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.430 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 kernel: tapd681366d-e6 (unregistering): left promiscuous mode
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.439 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:7a:c9 10.100.0.13'], port_security=['fa:16:3e:2e:7a:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a81e3b75-649b-4321-b436-ab01ab0a9e05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.441 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a81e3b75-649b-4321-b436-ab01ab0a9e05 in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b unbound from our chassis#033[00m
Feb 28 05:35:47 np0005634017 NetworkManager[49805]: <info>  [1772274947.4420] device (tapd681366d-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.444 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|01327|binding|INFO|Releasing lport d681366d-e6b5-4dad-847e-d091bc7b112d from this chassis (sb_readonly=0)
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|01328|binding|INFO|Setting lport d681366d-e6b5-4dad-847e-d091bc7b112d down in Southbound
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:47Z|01329|binding|INFO|Removing iface tapd681366d-e6 ovn-installed in OVS
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.471 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], port_security=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febb:a7ba/64', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d681366d-e6b5-4dad-847e-d091bc7b112d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.471 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2974f0d-9e9c-4676-a65e-ec7de6b7bb6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.485 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 29ebb761-c674-4ed1-aae0-554adf945402 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.485 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 367042aa-0043-4283-a399-ea4a6a1545f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.485 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f13d8adc-1a08-412b-a9fa-c8a601cda923 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.486 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b7d98834-924e-4fbd-a701-d22949f44f77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.486 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.486 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:35:47 np0005634017 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Deactivated successfully.
Feb 28 05:35:47 np0005634017 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Consumed 14.526s CPU time.
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.511 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5e169f25-7790-4455-bece-a618d731adcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 systemd-machined[209480]: Machine qemu-163-instance-00000082 terminated.
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.516 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dfde8eec-8043-4a4c-8c19-61024c2147f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.542 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f3778-9906-478b-af0b-26f26533d28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.557 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1827305b-afa8-4aff-a585-3751334b02d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360475, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.572 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92327584-c0ef-469a-ba75-802bca269e8e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639207, 'tstamp': 639207}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360476, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639210, 'tstamp': 639210}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360476, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.579 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 NetworkManager[49805]: <info>  [1772274947.5830] manager: (tapa81e3b75-64): new Tun device (/org/freedesktop/NetworkManager/Devices/553)
Feb 28 05:35:47 np0005634017 NetworkManager[49805]: <info>  [1772274947.5937] manager: (tapd681366d-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.598 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7aad4f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.599 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.599 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d7aad4f-10, col_values=(('external_ids', {'iface-id': '99dd359f-3ab9-477c-a58c-1c56298be9c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.600 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.602 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d681366d-e6b5-4dad-847e-d091bc7b112d in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.604 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.616 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.622 243456 INFO nova.virt.libvirt.driver [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance destroyed successfully.#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.623 243456 DEBUG nova.objects.instance [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid f13d8adc-1a08-412b-a9fa-c8a601cda923 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[432b32a1-4dba-40f7-84bc-dece40d82063]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.640 243456 DEBUG nova.virt.libvirt.vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:35:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:35:20Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.641 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.642 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.643 243456 DEBUG os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.646 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa81e3b75-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.651 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.655 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a925e197-ea54-4d81-b77f-33e8784daf5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.656 243456 INFO os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64')#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.657 243456 DEBUG nova.virt.libvirt.vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:35:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:35:20Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.658 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.658 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc020c2-a2a5-4799-a01a-0e8ee0d9bc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.659 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.659 243456 DEBUG os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.660 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd681366d-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.667 243456 INFO os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6')#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.687 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2404f36f-2570-419a-8179-1f5698c751d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.704 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4643291e-3abc-4fc0-be53-66000054f70f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360517, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7aa397-d06e-42b4-a3e6-2c3b7cee3a3e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49ec66b0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639299, 'tstamp': 639299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360523, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.728 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.732 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49ec66b0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.733 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.733 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49ec66b0-80, col_values=(('external_ids', {'iface-id': '0d93ffc1-1158-4b54-b2c1-6b7d48d62d16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.734 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.969 243456 INFO nova.virt.libvirt.driver [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deleting instance files /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923_del#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.970 243456 INFO nova.virt.libvirt.driver [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deletion of /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923_del complete#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.977 243456 DEBUG nova.compute.manager [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-unplugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG oslo_concurrency.lockutils [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG oslo_concurrency.lockutils [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG oslo_concurrency.lockutils [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG nova.compute.manager [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-unplugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:47 np0005634017 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG nova.compute.manager [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-unplugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.024 243456 INFO nova.compute.manager [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.026 243456 DEBUG oslo.service.loopingcall [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.027 243456 DEBUG nova.compute.manager [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.028 243456 DEBUG nova.network.neutron [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:35:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789579066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.162 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.169 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.191 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.228 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.229 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 458 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 28 05:35:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.964 243456 DEBUG nova.network.neutron [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updated VIF entry in instance network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.965 243456 DEBUG nova.network.neutron [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:48 np0005634017 nova_compute[243452]: 2026-02-28 10:35:48.990 243456 DEBUG oslo_concurrency.lockutils [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.230 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.231 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.231 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.265 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.343 243456 DEBUG nova.compute.manager [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-deleted-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.344 243456 INFO nova.compute.manager [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Neutron deleted interface a81e3b75-649b-4321-b436-ab01ab0a9e05; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.344 243456 DEBUG nova.network.neutron [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.348 243456 DEBUG nova.network.neutron [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.364 243456 DEBUG nova.compute.manager [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Detach interface failed, port_id=a81e3b75-649b-4321-b436-ab01ab0a9e05, reason: Instance f13d8adc-1a08-412b-a9fa-c8a601cda923 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.367 243456 INFO nova.compute.manager [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 1.34 seconds to deallocate network for instance.#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.406 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.407 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.430 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.430 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.431 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.431 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:49 np0005634017 nova_compute[243452]: 2026-02-28 10:35:49.496 243456 DEBUG oslo_concurrency.processutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398810676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.086 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.086 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 WARNING nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-unplugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-unplugged-d681366d-e6b5-4dad-847e-d091bc7b112d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 WARNING nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-unplugged-d681366d-e6b5-4dad-847e-d091bc7b112d for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 WARNING nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.100 243456 DEBUG oslo_concurrency.processutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.107 243456 DEBUG nova.compute.provider_tree [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.127 243456 DEBUG nova.scheduler.client.report [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.169 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.196 243456 INFO nova.scheduler.client.report [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance f13d8adc-1a08-412b-a9fa-c8a601cda923#033[00m
Feb 28 05:35:50 np0005634017 nova_compute[243452]: 2026-02-28 10:35:50.270 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 419 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.476 243456 DEBUG nova.compute.manager [req-482fcf4d-a8f3-4373-a81e-334275295f23 req-bae0e608-0d2d-4c32-a194-09c99b101614 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-deleted-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.511 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.512 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.512 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.513 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.513 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.515 243456 INFO nova.compute.manager [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Terminating instance#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.518 243456 DEBUG nova.compute.manager [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:35:51 np0005634017 kernel: tap8b2cb81f-77 (unregistering): left promiscuous mode
Feb 28 05:35:51 np0005634017 NetworkManager[49805]: <info>  [1772274951.8318] device (tap8b2cb81f-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:51Z|01330|binding|INFO|Releasing lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b from this chassis (sb_readonly=0)
Feb 28 05:35:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:51Z|01331|binding|INFO|Setting lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b down in Southbound
Feb 28 05:35:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:51Z|01332|binding|INFO|Removing iface tap8b2cb81f-77 ovn-installed in OVS
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.851 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:72:96 10.100.0.3'], port_security=['fa:16:3e:12:72:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.853 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b unbound from our chassis#033[00m
Feb 28 05:35:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.855 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:35:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0aa2ba-57d5-4897-b3f8-b1fbd3b8c16a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.857 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b namespace which is not needed anymore#033[00m
Feb 28 05:35:51 np0005634017 kernel: tap8f25c48f-b2 (unregistering): left promiscuous mode
Feb 28 05:35:51 np0005634017 NetworkManager[49805]: <info>  [1772274951.8645] device (tap8f25c48f-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:35:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:51Z|01333|binding|INFO|Releasing lport 8f25c48f-b281-4784-a6b0-a2662d928d28 from this chassis (sb_readonly=0)
Feb 28 05:35:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:51Z|01334|binding|INFO|Setting lport 8f25c48f-b281-4784-a6b0-a2662d928d28 down in Southbound
Feb 28 05:35:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:51Z|01335|binding|INFO|Removing iface tap8f25c48f-b2 ovn-installed in OVS
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.876 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:51 np0005634017 nova_compute[243452]: 2026-02-28 10:35:51.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.886 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], port_security=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9f:88f4/64', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f25c48f-b281-4784-a6b0-a2662d928d28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:51 np0005634017 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Deactivated successfully.
Feb 28 05:35:51 np0005634017 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Consumed 15.259s CPU time.
Feb 28 05:35:51 np0005634017 systemd-machined[209480]: Machine qemu-161-instance-00000080 terminated.
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.011 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : haproxy version is 2.8.14-c23fe91
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : path to executable is /usr/sbin/haproxy
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [WARNING]  (357343) : Exiting Master process...
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [WARNING]  (357343) : Exiting Master process...
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [ALERT]    (357343) : Current worker (357345) exited with code 143 (Terminated)
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [WARNING]  (357343) : All workers exited. Exiting... (0)
Feb 28 05:35:52 np0005634017 systemd[1]: libpod-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb.scope: Deactivated successfully.
Feb 28 05:35:52 np0005634017 podman[360595]: 2026-02-28 10:35:52.041613643 +0000 UTC m=+0.061150863 container died 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.040 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.042 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.047 243456 INFO nova.virt.libvirt.driver [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance destroyed successfully.#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.047 243456 DEBUG nova.objects.instance [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.060 243456 DEBUG nova.virt.libvirt.vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:34:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:34:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.060 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.061 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.061 243456 DEBUG os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.063 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b2cb81f-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.073 243456 INFO os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77')#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.075 243456 DEBUG nova.virt.libvirt.vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:34:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:34:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:35:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb-userdata-shm.mount: Deactivated successfully.
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.076 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.077 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.078 243456 DEBUG os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:35:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-56367b1f032c1af58ec46ce8c0be39bfb51d1c49bf83d9e0c00051135fcf4571-merged.mount: Deactivated successfully.
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.081 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f25c48f-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:52 np0005634017 podman[360595]: 2026-02-28 10:35:52.084301773 +0000 UTC m=+0.103838993 container cleanup 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.087 243456 INFO os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2')#033[00m
Feb 28 05:35:52 np0005634017 systemd[1]: libpod-conmon-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb.scope: Deactivated successfully.
Feb 28 05:35:52 np0005634017 podman[360658]: 2026-02-28 10:35:52.155310884 +0000 UTC m=+0.045691435 container remove 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed6fba9-98c5-44b0-bb88-747df577ab7b]: (4, ('Sat Feb 28 10:35:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b (53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb)\n53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb\nSat Feb 28 10:35:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b (53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb)\n53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.168 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74e0035b-c938-463f-acaf-f0a52ee0ba32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.168 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.169 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG nova.network.neutron [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:35:52 np0005634017 kernel: tap6d7aad4f-10: left promiscuous mode
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.176 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edfaa418-63c9-4c07-8638-130bcf21cba0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.193 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0872d0-9902-4a4f-a1f7-e3bde3dec828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4ba571-86bc-4709-a31f-229d0f48188e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bac4c47f-0ddc-449f-91d1-8a7c9b728786]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639190, 'reachable_time': 15837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360685, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 systemd[1]: run-netns-ovnmeta\x2d6d7aad4f\x2d1a53\x2d4b74\x2da216\x2d4cac4be4283b.mount: Deactivated successfully.
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.218 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.219 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[03cddc68-f2d4-4809-9592-1e74f066c752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.220 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f25c48f-b281-4784-a6b0-a2662d928d28 in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.222 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54fa5f03-9d88-4d8e-80c0-6301f0815345]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.223 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a namespace which is not needed anymore#033[00m
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : haproxy version is 2.8.14-c23fe91
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : path to executable is /usr/sbin/haproxy
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [WARNING]  (357416) : Exiting Master process...
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [WARNING]  (357416) : Exiting Master process...
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [ALERT]    (357416) : Current worker (357418) exited with code 143 (Terminated)
Feb 28 05:35:52 np0005634017 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [WARNING]  (357416) : All workers exited. Exiting... (0)
Feb 28 05:35:52 np0005634017 systemd[1]: libpod-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817.scope: Deactivated successfully.
Feb 28 05:35:52 np0005634017 podman[360705]: 2026-02-28 10:35:52.34685431 +0000 UTC m=+0.041334392 container died 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.354 243456 INFO nova.virt.libvirt.driver [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deleting instance files /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402_del#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.355 243456 INFO nova.virt.libvirt.driver [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deletion of /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402_del complete#033[00m
Feb 28 05:35:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817-userdata-shm.mount: Deactivated successfully.
Feb 28 05:35:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d17053d1edc9fada6f68225e884d047e5575813d4682169f8b77e31a5f244466-merged.mount: Deactivated successfully.
Feb 28 05:35:52 np0005634017 podman[360705]: 2026-02-28 10:35:52.379479774 +0000 UTC m=+0.073959856 container cleanup 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:35:52 np0005634017 systemd[1]: libpod-conmon-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817.scope: Deactivated successfully.
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.426 243456 INFO nova.compute.manager [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.427 243456 DEBUG oslo.service.loopingcall [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.427 243456 DEBUG nova.compute.manager [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.427 243456 DEBUG nova.network.neutron [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:35:52 np0005634017 podman[360734]: 2026-02-28 10:35:52.433788452 +0000 UTC m=+0.038874292 container remove 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.439 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0d820c-4365-42a4-9aba-b57caecfe7f4]: (4, ('Sat Feb 28 10:35:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a (892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817)\n892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817\nSat Feb 28 10:35:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a (892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817)\n892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.440 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81c45e48-bdbc-46d8-b0a4-4eff10aad77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.441 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 kernel: tap49ec66b0-80: left promiscuous mode
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.448 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc752214-6892-4d3b-a025-775a6c42e289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 nova_compute[243452]: 2026-02-28 10:35:52.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b42315d-e188-44d6-b7d5-0211441d36fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.466 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c4e121-91bf-4dcf-8a8e-fb0b734405cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.484 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[345e1f78-c0e1-45db-b045-92f6f5d0b104]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639276, 'reachable_time': 23403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360749, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.486 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:35:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.486 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[980b6d2b-7de3-438d-852b-cfa3986b3983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 380 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Feb 28 05:35:53 np0005634017 systemd[1]: run-netns-ovnmeta\x2d49ec66b0\x2d8f5d\x2d445b\x2da7e6\x2d7fd41e785d9a.mount: Deactivated successfully.
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.548 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.548 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-unplugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 WARNING nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-deleted-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 INFO nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Neutron deleted interface 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 DEBUG nova.network.neutron [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:53 np0005634017 nova_compute[243452]: 2026-02-28 10:35:53.573 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Detach interface failed, port_id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b, reason: Instance 29ebb761-c674-4ed1-aae0-554adf945402 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:35:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.203 243456 DEBUG nova.network.neutron [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated VIF entry in instance network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.203 243456 DEBUG nova.network.neutron [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.219 243456 DEBUG nova.network.neutron [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.228 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.229 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.230 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.231 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.231 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.232 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-unplugged-8f25c48f-b281-4784-a6b0-a2662d928d28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.232 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8f25c48f-b281-4784-a6b0-a2662d928d28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.237 243456 INFO nova.compute.manager [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 1.81 seconds to deallocate network for instance.#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.269 243456 DEBUG nova.compute.manager [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.270 243456 DEBUG oslo_concurrency.lockutils [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.270 243456 DEBUG oslo_concurrency.lockutils [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.270 243456 DEBUG oslo_concurrency.lockutils [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.271 243456 DEBUG nova.compute.manager [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.271 243456 WARNING nova.compute.manager [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.284 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.284 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.356 243456 DEBUG oslo_concurrency.processutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.517 243456 INFO nova.compute.manager [None req-c12e0a07-2f36-4543-b469-66593776abce ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Get console output#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.528 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:35:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 334 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Feb 28 05:35:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242944676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.896 243456 DEBUG oslo_concurrency.processutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.898 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.899 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.899 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.899 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.900 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.901 243456 INFO nova.compute.manager [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Terminating instance#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.903 243456 DEBUG nova.compute.manager [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.909 243456 DEBUG nova.compute.provider_tree [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.926 243456 DEBUG nova.scheduler.client.report [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:35:54 np0005634017 kernel: tapf6a52694-af (unregistering): left promiscuous mode
Feb 28 05:35:54 np0005634017 NetworkManager[49805]: <info>  [1772274954.9513] device (tapf6a52694-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.955 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:54Z|01336|binding|INFO|Releasing lport f6a52694-af4a-4ecc-926c-b1867c375983 from this chassis (sb_readonly=0)
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:54Z|01337|binding|INFO|Setting lport f6a52694-af4a-4ecc-926c-b1867c375983 down in Southbound
Feb 28 05:35:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:54Z|01338|binding|INFO|Removing iface tapf6a52694-af ovn-installed in OVS
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:54.972 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:5e:ba 10.100.0.12'], port_security=['fa:16:3e:21:5e:ba 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b7d98834-924e-4fbd-a701-d22949f44f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7156ff74-6e4d-4300-84e4-6890f3b16e55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6a52694-af4a-4ecc-926c-b1867c375983) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:54.974 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6a52694-af4a-4ecc-926c-b1867c375983 in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 unbound from our chassis#033[00m
Feb 28 05:35:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:54.979 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c48ff26a-49d0-4144-b27f-14431e751ba2#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:54 np0005634017 nova_compute[243452]: 2026-02-28 10:35:54.992 243456 INFO nova.scheduler.client.report [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 29ebb761-c674-4ed1-aae0-554adf945402#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.002 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b30c8202-1f30-4b84-8061-06839151a1aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.035 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1d99bb1e-20a7-4e03-acfe-43f8feb1c402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.040 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdf1fdd-0633-4052-9650-c5338b44b040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:55 np0005634017 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Deactivated successfully.
Feb 28 05:35:55 np0005634017 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Consumed 15.426s CPU time.
Feb 28 05:35:55 np0005634017 systemd-machined[209480]: Machine qemu-164-instance-00000083 terminated.
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.077 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[29aafdfd-d508-49ba-b3fe-3c39d94567a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.080 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2467b65-7cbb-4c85-bb1f-26ac3532b3dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360783, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[afe4cc98-8f33-41d5-9355-d53f7430f253]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640284, 'tstamp': 640284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360784, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640286, 'tstamp': 640286}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360784, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.127 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.139 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc48ff26a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.141 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc48ff26a-40, col_values=(('external_ids', {'iface-id': 'cb4199b0-c270-4ea9-a607-db9799f6f157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.141 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.150 243456 INFO nova.virt.libvirt.driver [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance destroyed successfully.#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.150 243456 DEBUG nova.objects.instance [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid b7d98834-924e-4fbd-a701-d22949f44f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.163 243456 DEBUG nova.virt.libvirt.vif [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1920589440',display_name='tempest-TestNetworkBasicOps-server-1920589440',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1920589440',id=131,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx2yXJrlxl/hjn9MMHE5KhSD01UI6o5tdb/M6CuQZyKKSCBz23zQdd3YZK9kMbqEHNUQ9eYVQehuDxFu0Ax80TEtUbLmqV5yzH0bXR2RG7KLxh93Ak1z3EjsfV7s46ZjA==',key_name='tempest-TestNetworkBasicOps-1493336509',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:35:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-tu2phxkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:35:32Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b7d98834-924e-4fbd-a701-d22949f44f77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.163 243456 DEBUG nova.network.os_vif_util [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.164 243456 DEBUG nova.network.os_vif_util [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.164 243456 DEBUG os_vif [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.165 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.166 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6a52694-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.176 243456 INFO os_vif [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af')#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.485 243456 INFO nova.virt.libvirt.driver [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deleting instance files /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77_del#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.486 243456 INFO nova.virt.libvirt.driver [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deletion of /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77_del complete#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.540 243456 INFO nova.compute.manager [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.541 243456 DEBUG oslo.service.loopingcall [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.542 243456 DEBUG nova.compute.manager [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.542 243456 DEBUG nova.network.neutron [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:35:55 np0005634017 nova_compute[243452]: 2026-02-28 10:35:55.651 243456 DEBUG nova.compute.manager [req-318199de-e63c-4481-bf4e-2e2a3128c625 req-90324762-20dd-4ddd-a0e1-95f12d42f74b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-deleted-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.601 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-unplugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.601 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.602 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.602 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.602 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] No waiting events found dispatching network-vif-unplugged-f6a52694-af4a-4ecc-926c-b1867c375983 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.603 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-unplugged-f6a52694-af4a-4ecc-926c-b1867c375983 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.603 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.603 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] No waiting events found dispatching network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 WARNING nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received unexpected event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.700 243456 DEBUG nova.network.neutron [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.722 243456 INFO nova.compute.manager [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 1.18 seconds to deallocate network for instance.#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.780 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.780 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 277 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Feb 28 05:35:56 np0005634017 nova_compute[243452]: 2026-02-28 10:35:56.865 243456 DEBUG oslo_concurrency.processutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:35:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:35:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063512626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:35:57 np0005634017 nova_compute[243452]: 2026-02-28 10:35:57.405 243456 DEBUG oslo_concurrency.processutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:35:57 np0005634017 nova_compute[243452]: 2026-02-28 10:35:57.411 243456 DEBUG nova.compute.provider_tree [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:35:57 np0005634017 nova_compute[243452]: 2026-02-28 10:35:57.431 243456 DEBUG nova.scheduler.client.report [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:35:57 np0005634017 nova_compute[243452]: 2026-02-28 10:35:57.458 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:57 np0005634017 nova_compute[243452]: 2026-02-28 10:35:57.487 243456 INFO nova.scheduler.client.report [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance b7d98834-924e-4fbd-a701-d22949f44f77#033[00m
Feb 28 05:35:57 np0005634017 nova_compute[243452]: 2026-02-28 10:35:57.560 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:57.875 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:57.875 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:58Z|01339|binding|INFO|Releasing lport cb4199b0-c270-4ea9-a607-db9799f6f157 from this chassis (sb_readonly=0)
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:58Z|01340|binding|INFO|Releasing lport cb4199b0-c270-4ea9-a607-db9799f6f157 from this chassis (sb_readonly=0)
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.708 243456 DEBUG nova.compute.manager [req-fb659427-0428-48ca-b568-469f2f1ec7d9 req-58ed66c2-5984-49e1-b618-ed245d93d08e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-deleted-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:35:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 1.1 MiB/s wr, 119 op/s
Feb 28 05:35:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.944 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.945 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.945 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.945 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.946 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.948 243456 INFO nova.compute.manager [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Terminating instance#033[00m
Feb 28 05:35:58 np0005634017 nova_compute[243452]: 2026-02-28 10:35:58.949 243456 DEBUG nova.compute.manager [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:35:59 np0005634017 kernel: tap92f5c154-2f (unregistering): left promiscuous mode
Feb 28 05:35:59 np0005634017 NetworkManager[49805]: <info>  [1772274959.0087] device (tap92f5c154-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:35:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:59Z|01341|binding|INFO|Releasing lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b from this chassis (sb_readonly=0)
Feb 28 05:35:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:59Z|01342|binding|INFO|Setting lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b down in Southbound
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:35:59Z|01343|binding|INFO|Removing iface tap92f5c154-2f ovn-installed in OVS
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.021 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:eb:46 10.100.0.11'], port_security=['fa:16:3e:56:eb:46 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '367042aa-0043-4283-a399-ea4a6a1545f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18ab5e58-5378-41c9-af44-86d27866eb7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92f5c154-2fa7-43e9-a6fd-da26d3ad985b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.024 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 unbound from our chassis#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.026 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c48ff26a-49d0-4144-b27f-14431e751ba2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[708379da-f18a-476f-85c7-03c4cbcea8ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.028 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 namespace which is not needed anymore#033[00m
Feb 28 05:35:59 np0005634017 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Deactivated successfully.
Feb 28 05:35:59 np0005634017 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Consumed 15.124s CPU time.
Feb 28 05:35:59 np0005634017 systemd-machined[209480]: Machine qemu-162-instance-00000081 terminated.
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.171 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : haproxy version is 2.8.14-c23fe91
Feb 28 05:35:59 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : path to executable is /usr/sbin/haproxy
Feb 28 05:35:59 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [WARNING]  (357923) : Exiting Master process...
Feb 28 05:35:59 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [WARNING]  (357923) : Exiting Master process...
Feb 28 05:35:59 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [ALERT]    (357923) : Current worker (357925) exited with code 143 (Terminated)
Feb 28 05:35:59 np0005634017 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [WARNING]  (357923) : All workers exited. Exiting... (0)
Feb 28 05:35:59 np0005634017 systemd[1]: libpod-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3.scope: Deactivated successfully.
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.235 243456 INFO nova.virt.libvirt.driver [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance destroyed successfully.#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.236 243456 DEBUG nova.objects.instance [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 367042aa-0043-4283-a399-ea4a6a1545f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:35:59 np0005634017 podman[360863]: 2026-02-28 10:35:59.242599625 +0000 UTC m=+0.107805885 container died c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.249 243456 DEBUG nova.virt.libvirt.vif [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1965662134',display_name='tempest-TestNetworkBasicOps-server-1965662134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1965662134',id=129,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAQEbRWdJEdDDH9ryFVMZgxJMx06zvko9WQ/yItISiQGZQFgF2ldPicyjXhFLx3IsCNHqxs8LEYCBDvAtjLxsqEXUJAPPXqcb32CUcOFzuHymtVJP4PyLuUKki41H129Mg==',key_name='tempest-TestNetworkBasicOps-1815275820',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-9q1u5ht7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:34:56Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=367042aa-0043-4283-a399-ea4a6a1545f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.250 243456 DEBUG nova.network.os_vif_util [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.251 243456 DEBUG nova.network.os_vif_util [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.251 243456 DEBUG os_vif [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.254 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92f5c154-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.264 243456 INFO os_vif [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f')#033[00m
Feb 28 05:35:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3-userdata-shm.mount: Deactivated successfully.
Feb 28 05:35:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f7cbf1e9a1127d383c385bf63b55eb566697bacb083039d81a658a47a2f6ccb6-merged.mount: Deactivated successfully.
Feb 28 05:35:59 np0005634017 podman[360863]: 2026-02-28 10:35:59.28124946 +0000 UTC m=+0.146455680 container cleanup c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:35:59 np0005634017 systemd[1]: libpod-conmon-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3.scope: Deactivated successfully.
Feb 28 05:35:59 np0005634017 podman[360916]: 2026-02-28 10:35:59.365667982 +0000 UTC m=+0.058306803 container remove c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc1703-b240-40a5-b3e7-c7ca36f6b1e5]: (4, ('Sat Feb 28 10:35:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 (c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3)\nc2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3\nSat Feb 28 10:35:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 (c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3)\nc2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a411ce32-a9b9-46eb-b342-9ffa2269ae3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.375 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 kernel: tapc48ff26a-40: left promiscuous mode
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[601791a7-0beb-4ea7-a6cb-49c561be44f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.401 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cce141d3-5d3c-4daa-8aeb-25ef652eb1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.404 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a7ea3a-1785-459e-87ff-68b5ae12fe67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[52db82cd-edc4-41a0-afc1-a1d3e263e7a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640267, 'reachable_time': 27917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360935, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:35:59 np0005634017 systemd[1]: run-netns-ovnmeta\x2dc48ff26a\x2d49d0\x2d4144\x2db27f\x2d14431e751ba2.mount: Deactivated successfully.
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.428 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:35:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.428 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f70334a2-6113-476e-b95a-1bc8394d1754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.581 243456 INFO nova.virt.libvirt.driver [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deleting instance files /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7_del#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.582 243456 INFO nova.virt.libvirt.driver [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deletion of /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7_del complete#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.658 243456 INFO nova.compute.manager [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.658 243456 DEBUG oslo.service.loopingcall [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.659 243456 DEBUG nova.compute.manager [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:35:59 np0005634017 nova_compute[243452]: 2026-02-28 10:35:59.659 243456 DEBUG nova.network.neutron [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.523 243456 DEBUG nova.network.neutron [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.544 243456 INFO nova.compute.manager [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 0.89 seconds to deallocate network for instance.#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.622 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.623 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.683 243456 DEBUG oslo_concurrency.processutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.785 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-unplugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.786 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.786 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.786 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] No waiting events found dispatching network-vif-unplugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 WARNING nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received unexpected event network-vif-unplugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] No waiting events found dispatching network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 WARNING nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received unexpected event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:36:00 np0005634017 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-deleted-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 187 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 532 KiB/s wr, 113 op/s
Feb 28 05:36:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:36:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4187903459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:36:01 np0005634017 nova_compute[243452]: 2026-02-28 10:36:01.217 243456 DEBUG oslo_concurrency.processutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:01 np0005634017 nova_compute[243452]: 2026-02-28 10:36:01.225 243456 DEBUG nova.compute.provider_tree [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:36:01 np0005634017 nova_compute[243452]: 2026-02-28 10:36:01.248 243456 DEBUG nova.scheduler.client.report [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:36:01 np0005634017 nova_compute[243452]: 2026-02-28 10:36:01.270 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:01 np0005634017 nova_compute[243452]: 2026-02-28 10:36:01.296 243456 INFO nova.scheduler.client.report [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 367042aa-0043-4283-a399-ea4a6a1545f7#033[00m
Feb 28 05:36:01 np0005634017 nova_compute[243452]: 2026-02-28 10:36:01.364 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:02 np0005634017 nova_compute[243452]: 2026-02-28 10:36:02.617 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274947.6078107, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:02 np0005634017 nova_compute[243452]: 2026-02-28 10:36:02.618 243456 INFO nova.compute.manager [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:36:02 np0005634017 nova_compute[243452]: 2026-02-28 10:36:02.640 243456 DEBUG nova.compute.manager [None req-0f59a399-c570-4a6f-84a4-759152fb0719 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 153 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 15 KiB/s wr, 90 op/s
Feb 28 05:36:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:04 np0005634017 nova_compute[243452]: 2026-02-28 10:36:04.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:04 np0005634017 nova_compute[243452]: 2026-02-28 10:36:04.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 74 op/s
Feb 28 05:36:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 57 op/s
Feb 28 05:36:07 np0005634017 nova_compute[243452]: 2026-02-28 10:36:07.045 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274952.043712, 29ebb761-c674-4ed1-aae0-554adf945402 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:07 np0005634017 nova_compute[243452]: 2026-02-28 10:36:07.045 243456 INFO nova.compute.manager [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:36:07 np0005634017 nova_compute[243452]: 2026-02-28 10:36:07.061 243456 DEBUG nova.compute.manager [None req-1267ded7-c238-44ae-bbb9-196781f08275 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:07 np0005634017 nova_compute[243452]: 2026-02-28 10:36:07.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 28 05:36:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:09 np0005634017 podman[360960]: 2026-02-28 10:36:09.154694206 +0000 UTC m=+0.080474941 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:36:09 np0005634017 podman[360959]: 2026-02-28 10:36:09.166286024 +0000 UTC m=+0.100972471 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:36:09 np0005634017 nova_compute[243452]: 2026-02-28 10:36:09.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:09 np0005634017 nova_compute[243452]: 2026-02-28 10:36:09.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:10 np0005634017 nova_compute[243452]: 2026-02-28 10:36:10.148 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274955.147958, b7d98834-924e-4fbd-a701-d22949f44f77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:10 np0005634017 nova_compute[243452]: 2026-02-28 10:36:10.149 243456 INFO nova.compute.manager [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:36:10 np0005634017 nova_compute[243452]: 2026-02-28 10:36:10.169 243456 DEBUG nova.compute.manager [None req-945f364a-9cef-47bd-a422-268a87865e9f - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Feb 28 05:36:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Feb 28 05:36:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:14 np0005634017 nova_compute[243452]: 2026-02-28 10:36:14.187 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274959.1860487, 367042aa-0043-4283-a399-ea4a6a1545f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:14 np0005634017 nova_compute[243452]: 2026-02-28 10:36:14.188 243456 INFO nova.compute.manager [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:36:14 np0005634017 nova_compute[243452]: 2026-02-28 10:36:14.217 243456 DEBUG nova.compute.manager [None req-473571e4-5217-4f47-b87e-cd46d383ad00 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:14 np0005634017 nova_compute[243452]: 2026-02-28 10:36:14.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:14 np0005634017 nova_compute[243452]: 2026-02-28 10:36:14.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 596 B/s wr, 3 op/s
Feb 28 05:36:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:16.693 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:36:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:16.694 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:36:16 np0005634017 nova_compute[243452]: 2026-02-28 10:36:16.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:36:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:36:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:19 np0005634017 nova_compute[243452]: 2026-02-28 10:36:19.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:19 np0005634017 nova_compute[243452]: 2026-02-28 10:36:19.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.508 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:9f:67 2001:db8:0:1:f816:3eff:fea2:9f67 2001:db8::f816:3eff:fea2:9f67'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea2:9f67/64 2001:db8::f816:3eff:fea2:9f67/64', 'neutron:device_id': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a03e0d2d-7933-48b4-9d0a-61369ba848c9) old=Port_Binding(mac=['fa:16:3e:a2:9f:67 2001:db8::f816:3eff:fea2:9f67'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea2:9f67/64', 'neutron:device_id': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:36:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.510 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a03e0d2d-7933-48b4-9d0a-61369ba848c9 in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c updated#033[00m
Feb 28 05:36:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.512 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:36:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[675155e6-e4f2-44ce-be1c-a89434d37d25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.696 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.171 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.172 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.189 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.276 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.277 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.286 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.286 243456 INFO nova.compute.claims [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.384 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:36:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:36:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540571358' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:36:22 np0005634017 nova_compute[243452]: 2026-02-28 10:36:22.991 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.001 243456 DEBUG nova.compute.provider_tree [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.024 243456 DEBUG nova.scheduler.client.report [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.062 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.063 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.116 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.117 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.138 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.164 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.258 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.261 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.262 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Creating image(s)#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.298 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.334 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.363 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.369 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.464 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.466 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.468 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.468 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.500 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.504 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.769 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.864 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.918 243456 DEBUG nova.policy [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:36:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:23 np0005634017 nova_compute[243452]: 2026-02-28 10:36:23.976 243456 DEBUG nova.objects.instance [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.015 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.015 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Ensure instance console log exists: /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.016 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.016 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.017 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:24 np0005634017 nova_compute[243452]: 2026-02-28 10:36:24.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 165 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 657 KiB/s wr, 0 op/s
Feb 28 05:36:26 np0005634017 nova_compute[243452]: 2026-02-28 10:36:26.514 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully created port: 4b48043a-8194-4cf4-bd7f-1c138d7960ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:36:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 181 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Feb 28 05:36:26 np0005634017 nova_compute[243452]: 2026-02-28 10:36:26.936 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:26 np0005634017 nova_compute[243452]: 2026-02-28 10:36:26.937 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:26 np0005634017 nova_compute[243452]: 2026-02-28 10:36:26.953 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.007 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.008 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.014 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.015 243456 INFO nova.compute.claims [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.123 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:27 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:36:27 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3799923328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.682 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.688 243456 DEBUG nova.compute.provider_tree [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.706 243456 DEBUG nova.scheduler.client.report [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.733 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.734 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.790 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.791 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.819 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.841 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.946 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully updated port: 4b48043a-8194-4cf4-bd7f-1c138d7960ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.950 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.952 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.952 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Creating image(s)#033[00m
Feb 28 05:36:27 np0005634017 nova_compute[243452]: 2026-02-28 10:36:27.976 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.004 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.031 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.035 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.067 243456 DEBUG nova.policy [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.073 243456 DEBUG nova.compute.manager [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.074 243456 DEBUG nova.compute.manager [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.074 243456 DEBUG oslo_concurrency.lockutils [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.074 243456 DEBUG oslo_concurrency.lockutils [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.075 243456 DEBUG nova.network.neutron [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.076 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.096 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.097 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.098 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.098 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.121 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.124 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.329 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.406 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.501 243456 DEBUG nova.objects.instance [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 555d381e-ed8a-4a73-9f43-f79c0b0a0afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.520 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.521 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Ensure instance console log exists: /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.521 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.522 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.522 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.534 243456 DEBUG nova.network.neutron [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.813 243456 DEBUG nova.network.neutron [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.829 243456 DEBUG oslo_concurrency.lockutils [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.830 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.830 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:36:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:36:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:28 np0005634017 nova_compute[243452]: 2026-02-28 10:36:28.932 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully created port: b8c427fe-78c5-4d60-9c44-68985f50b598 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.023 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:36:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:36:29
Feb 28 05:36:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:36:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:36:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Feb 28 05:36:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.508 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully created port: ffaef000-523c-4637-99e6-2cc96b907c15 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.844 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.936 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.937 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance network_info: |[{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.939 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start _get_guest_xml network_info=[{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.946 243456 WARNING nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.954 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.954 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.959 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.959 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.960 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.960 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.960 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.963 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:36:29 np0005634017 nova_compute[243452]: 2026-02-28 10:36:29.966 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.238 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully updated port: b8c427fe-78c5-4d60-9c44-68985f50b598 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.384 243456 DEBUG nova.compute.manager [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.384 243456 DEBUG nova.compute.manager [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.385 243456 DEBUG oslo_concurrency.lockutils [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.385 243456 DEBUG oslo_concurrency.lockutils [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.386 243456 DEBUG nova.network.neutron [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:36:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:36:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025732497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.494 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.534 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.542 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:36:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 235 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.4 MiB/s wr, 42 op/s
Feb 28 05:36:30 np0005634017 nova_compute[243452]: 2026-02-28 10:36:30.886 243456 DEBUG nova.network.neutron [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:36:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:36:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072729338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.133 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.135 243456 DEBUG nova.virt.libvirt.vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:23Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.135 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.136 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.137 243456 DEBUG nova.objects.instance [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.161 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <name>instance-00000084</name>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:36:29</nova:creationTime>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <entry name="serial">b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <entry name="uuid">b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e3:60:12"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <target dev="tap4b48043a-81"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log" append="off"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:36:31 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:36:31 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:36:31 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:36:31 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.162 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Preparing to wait for external event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.163 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.163 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.163 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.164 243456 DEBUG nova.virt.libvirt.vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:23Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.164 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.165 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.165 243456 DEBUG os_vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.167 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.168 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.173 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b48043a-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.173 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b48043a-81, col_values=(('external_ids', {'iface-id': '4b48043a-8194-4cf4-bd7f-1c138d7960ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:60:12', 'vm-uuid': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:31 np0005634017 NetworkManager[49805]: <info>  [1772274991.1769] manager: (tap4b48043a-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.184 243456 INFO os_vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81')#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.222 243456 DEBUG nova.network.neutron [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.268 243456 DEBUG oslo_concurrency.lockutils [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.293 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.294 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.294 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e3:60:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.294 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Using config drive#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.322 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.813 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Creating config drive at /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.819 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6s8jk2ol execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.866 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully updated port: ffaef000-523c-4637-99e6-2cc96b907c15 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.968 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.969 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.969 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:36:31 np0005634017 nova_compute[243452]: 2026-02-28 10:36:31.974 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6s8jk2ol" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.016 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.022 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.176 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.177 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deleting local config drive /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config because it was imported into RBD.#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.219 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:36:32 np0005634017 kernel: tap4b48043a-81: entered promiscuous mode
Feb 28 05:36:32 np0005634017 NetworkManager[49805]: <info>  [1772274992.2370] manager: (tap4b48043a-81): new Tun device (/org/freedesktop/NetworkManager/Devices/556)
Feb 28 05:36:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:32Z|01344|binding|INFO|Claiming lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac for this chassis.
Feb 28 05:36:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:32Z|01345|binding|INFO|4b48043a-8194-4cf4-bd7f-1c138d7960ac: Claiming fa:16:3e:e3:60:12 10.100.0.5
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:32 np0005634017 systemd-machined[209480]: New machine qemu-165-instance-00000084.
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.279 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:60:12 10.100.0.5'], port_security=['fa:16:3e:e3:60:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76f6832b-9f40-4eef-bddf-580a90432b21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f90d2e4f-4906-4cb9-bc1a-6b3a4bcb9d24, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=4b48043a-8194-4cf4-bd7f-1c138d7960ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.281 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 4b48043a-8194-4cf4-bd7f-1c138d7960ac in datapath f91ad996-44c8-45ac-a5d6-208982ca2ce1 bound to our chassis#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.283 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f91ad996-44c8-45ac-a5d6-208982ca2ce1#033[00m
Feb 28 05:36:32 np0005634017 systemd[1]: Started Virtual Machine qemu-165-instance-00000084.
Feb 28 05:36:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:32Z|01346|binding|INFO|Setting lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac ovn-installed in OVS
Feb 28 05:36:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:32Z|01347|binding|INFO|Setting lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac up in Southbound
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.294 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:32 np0005634017 systemd-udevd[361523]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d9bd6d-3eb4-49e4-bda4-f77aa3526738]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.301 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf91ad996-41 in ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.304 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf91ad996-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.304 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1079cc95-e727-4d7e-a23f-8465e74a3036]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.306 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6059286-ef59-4a08-86a4-69edb9794c09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 NetworkManager[49805]: <info>  [1772274992.3149] device (tap4b48043a-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:36:32 np0005634017 NetworkManager[49805]: <info>  [1772274992.3159] device (tap4b48043a-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.321 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e36624b2-4c49-4028-8ff7-27d57df90926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff396e1c-053e-4727-854c-17c8bd03aeab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.385 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a543177f-46f9-4a2d-ae51-97ef724ec540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.392 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff70b10e-c103-4a9d-9c98-acd9419ab5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 systemd-udevd[361526]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:36:32 np0005634017 NetworkManager[49805]: <info>  [1772274992.3937] manager: (tapf91ad996-40): new Veth device (/org/freedesktop/NetworkManager/Devices/557)
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.424 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd86172-a4aa-4949-bd94-f4d7ab44ae03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.427 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9c748978-de71-4c44-b96e-e2cb1ea7fb79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 NetworkManager[49805]: <info>  [1772274992.4536] device (tapf91ad996-40): carrier: link connected
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.458 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[192dbf3e-5d30-4541-8c41-1fd320b2ceb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.477 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c306817-2bda-4da2-a317-9c7a2ba54829]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf91ad996-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:58:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649973, 'reachable_time': 28350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361555, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.494 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c307c35e-66ec-418b-becb-8c2dfde929c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:585b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649973, 'tstamp': 649973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361556, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.512 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[75b8e8ce-3faa-4415-ba77-ee963b8a6bd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf91ad996-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:58:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649973, 'reachable_time': 28350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361557, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.547 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30c9e605-299c-4f6d-8954-24763ab256fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.616 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[128e0a71-7b38-41e6-ac13-de46b50e55ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.619 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf91ad996-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.619 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.620 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf91ad996-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:32 np0005634017 kernel: tapf91ad996-40: entered promiscuous mode
Feb 28 05:36:32 np0005634017 NetworkManager[49805]: <info>  [1772274992.6240] manager: (tapf91ad996-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.629 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf91ad996-40, col_values=(('external_ids', {'iface-id': '806ea448-5fbd-4b2a-a972-1602ffe39b97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:32Z|01348|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.632 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f91ad996-44c8-45ac-a5d6-208982ca2ce1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f91ad996-44c8-45ac-a5d6-208982ca2ce1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.640 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.639 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f539c9f5-3e61-4d92-b721-55ceb09796d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.643 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-f91ad996-44c8-45ac-a5d6-208982ca2ce1
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/f91ad996-44c8-45ac-a5d6-208982ca2ce1.pid.haproxy
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID f91ad996-44c8-45ac-a5d6-208982ca2ce1
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:36:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.644 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'env', 'PROCESS_TAG=haproxy-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f91ad996-44c8-45ac-a5d6-208982ca2ce1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.685 243456 DEBUG nova.compute.manager [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.686 243456 DEBUG nova.compute.manager [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-ffaef000-523c-4637-99e6-2cc96b907c15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.687 243456 DEBUG oslo_concurrency.lockutils [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.797 243456 DEBUG nova.compute.manager [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.798 243456 DEBUG oslo_concurrency.lockutils [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.798 243456 DEBUG oslo_concurrency.lockutils [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.799 243456 DEBUG oslo_concurrency.lockutils [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:32 np0005634017 nova_compute[243452]: 2026-02-28 10:36:32.799 243456 DEBUG nova.compute.manager [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Processing event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:36:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 28 05:36:33 np0005634017 podman[361589]: 2026-02-28 10:36:33.032035909 +0000 UTC m=+0.063901041 container create 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:36:33 np0005634017 podman[361589]: 2026-02-28 10:36:32.996993856 +0000 UTC m=+0.028859038 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:36:33 np0005634017 systemd[1]: Started libpod-conmon-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f.scope.
Feb 28 05:36:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16788202b082e39979c66011ee31c3dde40edd942458ccb06127574de32bd96f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:33 np0005634017 podman[361589]: 2026-02-28 10:36:33.153288394 +0000 UTC m=+0.185153546 container init 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:36:33 np0005634017 podman[361589]: 2026-02-28 10:36:33.160794086 +0000 UTC m=+0.192659198 container start 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 05:36:33 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : New worker (361611) forked
Feb 28 05:36:33 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : Loading success.
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.402 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.404 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274993.4018123, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.404 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Started (Lifecycle Event)#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.407 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.412 243456 INFO nova.virt.libvirt.driver [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance spawned successfully.#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.412 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.455 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.464 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.468 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.469 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.469 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.470 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.470 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.471 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.525 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.526 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274993.4022837, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.526 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.602 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.607 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274993.4070237, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.609 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.620 243456 INFO nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 10.36 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.621 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.658 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.663 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.708 243456 INFO nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 11.46 seconds to build instance.#033[00m
Feb 28 05:36:33 np0005634017 nova_compute[243452]: 2026-02-28 10:36:33.736 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.331 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.357 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.358 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance network_info: |[{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.358 243456 DEBUG oslo_concurrency.lockutils [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.359 243456 DEBUG nova.network.neutron [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port ffaef000-523c-4637-99e6-2cc96b907c15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.366 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start _get_guest_xml network_info=[{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.374 243456 WARNING nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.382 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.384 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.402 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.403 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.404 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.405 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.406 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.407 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.408 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.408 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.409 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.410 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.410 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.411 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.411 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.412 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.419 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.925 243456 DEBUG nova.compute.manager [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.926 243456 DEBUG oslo_concurrency.lockutils [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.927 243456 DEBUG oslo_concurrency.lockutils [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.927 243456 DEBUG oslo_concurrency.lockutils [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.928 243456 DEBUG nova.compute.manager [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.928 243456 WARNING nova.compute.manager [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac for instance with vm_state active and task_state None.#033[00m
Feb 28 05:36:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:36:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290399496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:36:34 np0005634017 nova_compute[243452]: 2026-02-28 10:36:34.972 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.009 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.014 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:36:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333573190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.632 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.635 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.636 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.637 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.639 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.640 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.641 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.643 243456 DEBUG nova.objects.instance [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 555d381e-ed8a-4a73-9f43-f79c0b0a0afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.694 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <uuid>555d381e-ed8a-4a73-9f43-f79c0b0a0afd</uuid>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <name>instance-00000085</name>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1102866438</nova:name>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:36:34</nova:creationTime>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:port uuid="b8c427fe-78c5-4d60-9c44-68985f50b598">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <nova:port uuid="ffaef000-523c-4637-99e6-2cc96b907c15">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:b75a" ipVersion="6"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:b75a" ipVersion="6"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <entry name="serial">555d381e-ed8a-4a73-9f43-f79c0b0a0afd</entry>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <entry name="uuid">555d381e-ed8a-4a73-9f43-f79c0b0a0afd</entry>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:d6:2a:81"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <target dev="tapb8c427fe-78"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:28:b7:5a"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <target dev="tapffaef000-52"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/console.log" append="off"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:36:35 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:36:35 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:36:35 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:36:35 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.696 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Preparing to wait for external event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.698 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.698 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Preparing to wait for external event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.699 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.699 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.699 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.701 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.701 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.702 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.703 243456 DEBUG os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.705 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.706 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.710 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c427fe-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.711 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8c427fe-78, col_values=(('external_ids', {'iface-id': 'b8c427fe-78c5-4d60-9c44-68985f50b598', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:2a:81', 'vm-uuid': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 NetworkManager[49805]: <info>  [1772274995.7146] manager: (tapb8c427fe-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.722 243456 INFO os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78')#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.723 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.723 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.725 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.726 243456 DEBUG os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.726 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.727 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.731 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffaef000-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.732 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffaef000-52, col_values=(('external_ids', {'iface-id': 'ffaef000-523c-4637-99e6-2cc96b907c15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:b7:5a', 'vm-uuid': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 NetworkManager[49805]: <info>  [1772274995.7352] manager: (tapffaef000-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.753 243456 INFO os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52')#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.912 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.913 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.913 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:d6:2a:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.914 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:28:b7:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.915 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Using config drive#033[00m
Feb 28 05:36:35 np0005634017 nova_compute[243452]: 2026-02-28 10:36:35.953 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 101 op/s
Feb 28 05:36:37 np0005634017 nova_compute[243452]: 2026-02-28 10:36:37.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.535 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Creating config drive at /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config#033[00m
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.540 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl2zmk9w_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.681 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl2zmk9w_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.723 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.729 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.898 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.899 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deleting local config drive /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config because it was imported into RBD.#033[00m
Feb 28 05:36:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:38 np0005634017 NetworkManager[49805]: <info>  [1772274998.9654] manager: (tapb8c427fe-78): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Feb 28 05:36:38 np0005634017 kernel: tapb8c427fe-78: entered promiscuous mode
Feb 28 05:36:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:38Z|01349|binding|INFO|Claiming lport b8c427fe-78c5-4d60-9c44-68985f50b598 for this chassis.
Feb 28 05:36:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:38Z|01350|binding|INFO|b8c427fe-78c5-4d60-9c44-68985f50b598: Claiming fa:16:3e:d6:2a:81 10.100.0.13
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:38 np0005634017 NetworkManager[49805]: <info>  [1772274998.9855] manager: (tapffaef000-52): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Feb 28 05:36:38 np0005634017 kernel: tapffaef000-52: entered promiscuous mode
Feb 28 05:36:38 np0005634017 nova_compute[243452]: 2026-02-28 10:36:38.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:38 np0005634017 systemd-udevd[361803]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:36:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:38Z|01351|if_status|INFO|Dropped 1 log messages in last 82 seconds (most recently, 82 seconds ago) due to excessive rate
Feb 28 05:36:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:38Z|01352|if_status|INFO|Not updating pb chassis for ffaef000-523c-4637-99e6-2cc96b907c15 now as sb is readonly
Feb 28 05:36:38 np0005634017 systemd-udevd[361804]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01353|binding|INFO|Claiming lport ffaef000-523c-4637-99e6-2cc96b907c15 for this chassis.
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01354|binding|INFO|ffaef000-523c-4637-99e6-2cc96b907c15: Claiming fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.0108] device (tapb8c427fe-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.0123] device (tapb8c427fe-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.006 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2a:81 10.100.0.13'], port_security=['fa:16:3e:d6:2a:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8c427fe-78c5-4d60-9c44-68985f50b598) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.009 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8c427fe-78c5-4d60-9c44-68985f50b598 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d bound to our chassis#033[00m
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.0158] device (tapffaef000-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.016 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d#033[00m
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.0203] device (tapffaef000-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:36:39 np0005634017 systemd-machined[209480]: New machine qemu-166-instance-00000085.
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.029 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], port_security=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:b75a/64 2001:db8::f816:3eff:fe28:b75a/64', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ffaef000-523c-4637-99e6-2cc96b907c15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9421fa-6d06-4ee8-89d0-b3d8967c47c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.032 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape68f9d98-c1 in ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.034 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape68f9d98-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.035 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3757e409-6bc2-4acf-81b7-fa06e5082f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.036 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce73544e-5534-4eae-8df1-133986e0a6e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 systemd[1]: Started Virtual Machine qemu-166-instance-00000085.
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.038 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01355|binding|INFO|Setting lport b8c427fe-78c5-4d60-9c44-68985f50b598 ovn-installed in OVS
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.053 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fd60d04a-720e-44c9-a2ae-4d92ceaf6dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01356|binding|INFO|Setting lport b8c427fe-78c5-4d60-9c44-68985f50b598 up in Southbound
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01357|binding|INFO|Setting lport ffaef000-523c-4637-99e6-2cc96b907c15 ovn-installed in OVS
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.060 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01358|binding|INFO|Setting lport ffaef000-523c-4637-99e6-2cc96b907c15 up in Southbound
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.071 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27e0ac0c-6d2a-46ec-852d-9e5ea7dbb88f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.135 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5deb49f6-c992-4e36-8536-eaa2c5467bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.146 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[547f1856-bcee-4f75-9142-8a8c01a86130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.1480] manager: (tape68f9d98-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/563)
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.180 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ac698580-cd3f-47a6-9ce0-e1be871efab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f0edab68-533a-45c0-a7d8-ba40d1eae41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.2082] device (tape68f9d98-c0): carrier: link connected
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.214 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b526eb24-94d3-4e83-83a0-f04183f1c07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68ddecb8-b273-4032-91dd-73d03bb85746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 23842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361840, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.252 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[054472a5-24be-44fb-a4d1-66cefc0f1f95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:f24a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650649, 'tstamp': 650649}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361841, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d577e852-3cbd-4d1c-a808-7b913f4acacd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 23842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361842, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.326 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[95bf6757-db84-47e6-a97e-66c99e5e7c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c77684ee-2377-48dc-b8b8-5ea63ae5afae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.396 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.397 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.397 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f9d98-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:39 np0005634017 kernel: tape68f9d98-c0: entered promiscuous mode
Feb 28 05:36:39 np0005634017 NetworkManager[49805]: <info>  [1772274999.4010] manager: (tape68f9d98-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68f9d98-c0, col_values=(('external_ids', {'iface-id': 'b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:39 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:39Z|01359|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.407 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.410 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf20539e-b333-49e6-9771-f1f9beb31d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.411 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.pid.haproxy
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.413 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'env', 'PROCESS_TAG=haproxy-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274999.660911, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Started (Lifecycle Event)#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.714 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.720 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274999.6635458, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.721 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.795 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.804 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:36:39 np0005634017 podman[361917]: 2026-02-28 10:36:39.83512343 +0000 UTC m=+0.072261018 container create 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:36:39 np0005634017 systemd[1]: Started libpod-conmon-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21.scope.
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.893 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:36:39 np0005634017 podman[361917]: 2026-02-28 10:36:39.80793918 +0000 UTC m=+0.045076748 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:36:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377aba4e407f3a2f65f2a4eb7e0e22e21a70346077c645cb2f77ea779d688910/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:39 np0005634017 podman[361917]: 2026-02-28 10:36:39.947532174 +0000 UTC m=+0.184669732 container init 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.952 243456 DEBUG nova.network.neutron [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updated VIF entry in instance network info cache for port ffaef000-523c-4637-99e6-2cc96b907c15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.953 243456 DEBUG nova.network.neutron [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:39 np0005634017 podman[361917]: 2026-02-28 10:36:39.957119186 +0000 UTC m=+0.194256744 container start 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:36:39 np0005634017 podman[361931]: 2026-02-28 10:36:39.955360066 +0000 UTC m=+0.066016371 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 28 05:36:39 np0005634017 nova_compute[243452]: 2026-02-28 10:36:39.986 243456 DEBUG oslo_concurrency.lockutils [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:39 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : New worker (361973) forked
Feb 28 05:36:39 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : Loading success.
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.044 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ffaef000-523c-4637-99e6-2cc96b907c15 in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.047 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e37940ca-b61e-4f0b-bc85-f9f397bb0275]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.060 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf5aa4f8-b1 in ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.062 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf5aa4f8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.062 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb053ed1-510d-4c84-940d-6acf7d924f02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.063 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51980c19-1190-4a69-a4e7-da8eb2409473]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.079 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cf1234-a7f4-43ef-98e9-644384609851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 podman[361930]: 2026-02-28 10:36:40.090035001 +0000 UTC m=+0.185738842 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.097 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c8459d1c-c3f1-4548-87ea-487ac8d2ef4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.103 243456 DEBUG nova.compute.manager [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.104 243456 DEBUG oslo_concurrency.lockutils [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.104 243456 DEBUG oslo_concurrency.lockutils [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.105 243456 DEBUG oslo_concurrency.lockutils [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.105 243456 DEBUG nova.compute.manager [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Processing event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.124 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[901dbb42-4884-4a84-ac68-ef6ee260c9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.131 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eefa1276-7799-4728-8437-201d418c6afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 NetworkManager[49805]: <info>  [1772275000.1326] manager: (tapbf5aa4f8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/565)
Feb 28 05:36:40 np0005634017 systemd-udevd[361837]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.165 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53fe636d-ae01-4a00-b429-f7e1760d142f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.169 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca5b4f5-af67-4ad3-8eda-3286167f67b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 NetworkManager[49805]: <info>  [1772275000.1923] device (tapbf5aa4f8-b0): carrier: link connected
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.196 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[56c44b6d-b8a3-4291-a781-f545999d195f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.218 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e16f2c94-e210-4793-b446-88d9c2f8b9d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 24980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362003, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.236 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[faca2518-2dbb-4d21-b83b-ee5ab6aa39d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:9f67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650747, 'tstamp': 650747}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362004, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3345d26b-0181-4048-b940-0f2e01bf5aeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 24980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362005, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bffa9cd-a8b8-472f-8957-1ff05d92a962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd7abcc-7abd-4f7f-aedd-b76cbe71fc96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.343 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.344 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.345 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5aa4f8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:40 np0005634017 kernel: tapbf5aa4f8-b0: entered promiscuous mode
Feb 28 05:36:40 np0005634017 NetworkManager[49805]: <info>  [1772275000.3490] manager: (tapbf5aa4f8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.353 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5aa4f8-b0, col_values=(('external_ids', {'iface-id': 'a03e0d2d-7933-48b4-9d0a-61369ba848c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:40Z|01360|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.358 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae55112-6912-44ad-ab60-a83805d05303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.360 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.pid.haproxy
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:36:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.363 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'env', 'PROCESS_TAG=haproxy-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.365 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:40 np0005634017 nova_compute[243452]: 2026-02-28 10:36:40.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:40 np0005634017 podman[362035]: 2026-02-28 10:36:40.80237817 +0000 UTC m=+0.072754372 container create 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:36:40 np0005634017 systemd[1]: Started libpod-conmon-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e.scope.
Feb 28 05:36:40 np0005634017 podman[362035]: 2026-02-28 10:36:40.776119596 +0000 UTC m=+0.046495848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:36:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Feb 28 05:36:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8955cce884765a04034c460f9c74e35c1e8bf35ef5a4315568d401093bba957d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:40 np0005634017 podman[362035]: 2026-02-28 10:36:40.899408549 +0000 UTC m=+0.169784791 container init 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:36:40 np0005634017 podman[362035]: 2026-02-28 10:36:40.908104325 +0000 UTC m=+0.178480547 container start 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:36:40 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : New worker (362056) forked
Feb 28 05:36:40 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : Loading success.
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007083475290894501 of space, bias 1.0, pg target 0.21250425872683504 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938552168622384 of space, bias 1.0, pg target 0.7481565650586715 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.326593465495173e-07 of space, bias 4.0, pg target 0.0008791912158594207 quantized to 16 (current 16)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:36:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.191 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.192 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.193 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.194 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.195 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No event matching network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 in dict_keys([('network-vif-plugged', 'ffaef000-523c-4637-99e6-2cc96b907c15')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.195 243456 WARNING nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.196 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.197 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.197 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.198 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.199 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Processing event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.200 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.207 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275002.2058234, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.207 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.212 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.217 243456 INFO nova.virt.libvirt.driver [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance spawned successfully.#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.218 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.238 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.245 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.249 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.250 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.250 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.251 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.252 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.252 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.291 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.335 243456 INFO nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 14.38 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.336 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.418 243456 INFO nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 15.43 seconds to build instance.#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.444 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:42 np0005634017 NetworkManager[49805]: <info>  [1772275002.5983] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Feb 28 05:36:42 np0005634017 NetworkManager[49805]: <info>  [1772275002.5997] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Feb 28 05:36:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:42Z|01361|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 05:36:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:42Z|01362|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 05:36:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:42Z|01363|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:42Z|01364|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 05:36:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:42Z|01365|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 05:36:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:42Z|01366|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:42 np0005634017 nova_compute[243452]: 2026-02-28 10:36:42.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 166 KiB/s wr, 95 op/s
Feb 28 05:36:43 np0005634017 nova_compute[243452]: 2026-02-28 10:36:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 WARNING nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.network.neutron [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:36:44 np0005634017 nova_compute[243452]: 2026-02-28 10:36:44.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:36:44 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.578847355 +0000 UTC m=+0.041253198 container create c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:36:44 np0005634017 systemd[1]: Started libpod-conmon-c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365.scope.
Feb 28 05:36:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.563303556 +0000 UTC m=+0.025709419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.66161807 +0000 UTC m=+0.124023943 container init c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.67042658 +0000 UTC m=+0.132832413 container start c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:36:44 np0005634017 thirsty_bassi[362227]: 167 167
Feb 28 05:36:44 np0005634017 systemd[1]: libpod-c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365.scope: Deactivated successfully.
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.676397819 +0000 UTC m=+0.138803682 container attach c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.676771809 +0000 UTC m=+0.139177652 container died c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:36:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8812063b7af6f60afa17b57ffffec7ae260b30ec26466bd033bf4064fc37bb4c-merged.mount: Deactivated successfully.
Feb 28 05:36:44 np0005634017 podman[362210]: 2026-02-28 10:36:44.714753975 +0000 UTC m=+0.177159818 container remove c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:36:44 np0005634017 systemd[1]: libpod-conmon-c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365.scope: Deactivated successfully.
Feb 28 05:36:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 257 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 812 KiB/s wr, 118 op/s
Feb 28 05:36:44 np0005634017 podman[362250]: 2026-02-28 10:36:44.887084377 +0000 UTC m=+0.053569159 container create 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:36:44 np0005634017 systemd[1]: Started libpod-conmon-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope.
Feb 28 05:36:44 np0005634017 podman[362250]: 2026-02-28 10:36:44.859905027 +0000 UTC m=+0.026389889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:36:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:45Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:60:12 10.100.0.5
Feb 28 05:36:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:45Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:60:12 10.100.0.5
Feb 28 05:36:45 np0005634017 podman[362250]: 2026-02-28 10:36:45.032096035 +0000 UTC m=+0.198580857 container init 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 05:36:45 np0005634017 podman[362250]: 2026-02-28 10:36:45.043841477 +0000 UTC m=+0.210326259 container start 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:36:45 np0005634017 podman[362250]: 2026-02-28 10:36:45.047641355 +0000 UTC m=+0.214126147 container attach 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:36:45 np0005634017 beautiful_proskuriakova[362266]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:36:45 np0005634017 beautiful_proskuriakova[362266]: --> All data devices are unavailable
Feb 28 05:36:45 np0005634017 systemd[1]: libpod-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope: Deactivated successfully.
Feb 28 05:36:45 np0005634017 conmon[362266]: conmon 82b516bb37eb8b57ab60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope/container/memory.events
Feb 28 05:36:45 np0005634017 podman[362250]: 2026-02-28 10:36:45.552892638 +0000 UTC m=+0.719377440 container died 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:36:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:36:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/660430431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:36:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:36:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/660430431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:36:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75-merged.mount: Deactivated successfully.
Feb 28 05:36:45 np0005634017 podman[362250]: 2026-02-28 10:36:45.61158377 +0000 UTC m=+0.778068582 container remove 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:36:45 np0005634017 systemd[1]: libpod-conmon-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope: Deactivated successfully.
Feb 28 05:36:45 np0005634017 nova_compute[243452]: 2026-02-28 10:36:45.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.225922749 +0000 UTC m=+0.036572958 container create fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:36:46 np0005634017 systemd[1]: Started libpod-conmon-fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b.scope.
Feb 28 05:36:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.209762761 +0000 UTC m=+0.020412970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.322294873 +0000 UTC m=+0.132945122 container init fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.32928113 +0000 UTC m=+0.139931329 container start fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.332546203 +0000 UTC m=+0.143196472 container attach fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:36:46 np0005634017 suspicious_ellis[362374]: 167 167
Feb 28 05:36:46 np0005634017 systemd[1]: libpod-fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b.scope: Deactivated successfully.
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.335329182 +0000 UTC m=+0.145979461 container died fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:36:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-62f24a719683a8b277e9ec87d58837ed56194773a29600cb6e57b178e70908fa-merged.mount: Deactivated successfully.
Feb 28 05:36:46 np0005634017 podman[362358]: 2026-02-28 10:36:46.368769119 +0000 UTC m=+0.179419328 container remove fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:36:46 np0005634017 systemd[1]: libpod-conmon-fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b.scope: Deactivated successfully.
Feb 28 05:36:46 np0005634017 podman[362399]: 2026-02-28 10:36:46.530604433 +0000 UTC m=+0.048800453 container create 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:36:46 np0005634017 systemd[1]: Started libpod-conmon-2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08.scope.
Feb 28 05:36:46 np0005634017 podman[362399]: 2026-02-28 10:36:46.505550844 +0000 UTC m=+0.023746874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:36:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:46 np0005634017 podman[362399]: 2026-02-28 10:36:46.645510428 +0000 UTC m=+0.163706518 container init 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:36:46 np0005634017 podman[362399]: 2026-02-28 10:36:46.65969752 +0000 UTC m=+0.177893540 container start 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:36:46 np0005634017 podman[362399]: 2026-02-28 10:36:46.663407285 +0000 UTC m=+0.181603305 container attach 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:36:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 257 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 995 KiB/s wr, 144 op/s
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]: {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:    "0": [
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:        {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "devices": [
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "/dev/loop3"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            ],
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_name": "ceph_lv0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_size": "21470642176",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "name": "ceph_lv0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "tags": {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cluster_name": "ceph",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.crush_device_class": "",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.encrypted": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.objectstore": "bluestore",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osd_id": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.type": "block",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.vdo": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.with_tpm": "0"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            },
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "type": "block",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "vg_name": "ceph_vg0"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:        }
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:    ],
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:    "1": [
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:        {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "devices": [
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "/dev/loop4"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            ],
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_name": "ceph_lv1",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_size": "21470642176",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "name": "ceph_lv1",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "tags": {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cluster_name": "ceph",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.crush_device_class": "",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.encrypted": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.objectstore": "bluestore",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osd_id": "1",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.type": "block",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.vdo": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.with_tpm": "0"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            },
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "type": "block",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "vg_name": "ceph_vg1"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:        }
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:    ],
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:    "2": [
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:        {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "devices": [
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "/dev/loop5"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            ],
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_name": "ceph_lv2",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_size": "21470642176",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "name": "ceph_lv2",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "tags": {
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.cluster_name": "ceph",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.crush_device_class": "",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.encrypted": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.objectstore": "bluestore",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osd_id": "2",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.type": "block",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.vdo": "0",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:                "ceph.with_tpm": "0"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            },
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "type": "block",
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:            "vg_name": "ceph_vg2"
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:        }
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]:    ]
Feb 28 05:36:46 np0005634017 frosty_hugle[362415]: }
Feb 28 05:36:46 np0005634017 systemd[1]: libpod-2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08.scope: Deactivated successfully.
Feb 28 05:36:47 np0005634017 podman[362399]: 2026-02-28 10:36:47.000593837 +0000 UTC m=+0.518789867 container died 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:36:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002-merged.mount: Deactivated successfully.
Feb 28 05:36:47 np0005634017 podman[362399]: 2026-02-28 10:36:47.043706888 +0000 UTC m=+0.561902908 container remove 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:36:47 np0005634017 systemd[1]: libpod-conmon-2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08.scope: Deactivated successfully.
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.351 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.516 243456 DEBUG nova.network.neutron [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.518 243456 DEBUG nova.network.neutron [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.545 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.559190731 +0000 UTC m=+0.060474885 container create fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:36:47 np0005634017 systemd[1]: Started libpod-conmon-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope.
Feb 28 05:36:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.540900583 +0000 UTC m=+0.042184747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.647565904 +0000 UTC m=+0.148850078 container init fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.655640553 +0000 UTC m=+0.156924707 container start fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.660024427 +0000 UTC m=+0.161308601 container attach fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:36:47 np0005634017 epic_banach[362532]: 167 167
Feb 28 05:36:47 np0005634017 systemd[1]: libpod-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope: Deactivated successfully.
Feb 28 05:36:47 np0005634017 conmon[362532]: conmon fed6c2ba7a05a4944188 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope/container/memory.events
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.664273917 +0000 UTC m=+0.165558071 container died fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 05:36:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5f460d5b8aa3b861a4d7793eed5340ef4aeabc52441a424dfa1b881c16313889-merged.mount: Deactivated successfully.
Feb 28 05:36:47 np0005634017 podman[362499]: 2026-02-28 10:36:47.703711034 +0000 UTC m=+0.204995188 container remove fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:36:47 np0005634017 systemd[1]: libpod-conmon-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope: Deactivated successfully.
Feb 28 05:36:47 np0005634017 podman[362559]: 2026-02-28 10:36:47.887513541 +0000 UTC m=+0.057230512 container create 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:36:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:36:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454418771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:36:47 np0005634017 systemd[1]: Started libpod-conmon-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope.
Feb 28 05:36:47 np0005634017 podman[362559]: 2026-02-28 10:36:47.862660657 +0000 UTC m=+0.032377668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:36:47 np0005634017 nova_compute[243452]: 2026-02-28 10:36:47.963 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:36:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:36:47 np0005634017 podman[362559]: 2026-02-28 10:36:47.998388452 +0000 UTC m=+0.168105453 container init 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:36:48 np0005634017 podman[362559]: 2026-02-28 10:36:48.005082752 +0000 UTC m=+0.174799703 container start 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:36:48 np0005634017 podman[362559]: 2026-02-28 10:36:48.008362734 +0000 UTC m=+0.178079695 container attach 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.078 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.079 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.082 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.082 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.088 243456 DEBUG nova.compute.manager [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.088 243456 DEBUG nova.compute.manager [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.089 243456 DEBUG oslo_concurrency.lockutils [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.089 243456 DEBUG oslo_concurrency.lockutils [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.089 243456 DEBUG nova.network.neutron [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.306 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.307 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.934458611533046GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.307 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.307 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.480 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.483 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:36:48 np0005634017 nova_compute[243452]: 2026-02-28 10:36:48.633 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:36:48 np0005634017 lvm[362661]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:36:48 np0005634017 lvm[362661]: VG ceph_vg1 finished
Feb 28 05:36:48 np0005634017 lvm[362659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:36:48 np0005634017 lvm[362659]: VG ceph_vg0 finished
Feb 28 05:36:48 np0005634017 lvm[362672]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:36:48 np0005634017 lvm[362672]: VG ceph_vg2 finished
Feb 28 05:36:48 np0005634017 nostalgic_joliot[362578]: {}
Feb 28 05:36:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 271 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 05:36:48 np0005634017 systemd[1]: libpod-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope: Deactivated successfully.
Feb 28 05:36:48 np0005634017 systemd[1]: libpod-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope: Consumed 1.291s CPU time.
Feb 28 05:36:48 np0005634017 podman[362559]: 2026-02-28 10:36:48.886135388 +0000 UTC m=+1.055852379 container died 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:36:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1-merged.mount: Deactivated successfully.
Feb 28 05:36:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:48 np0005634017 podman[362559]: 2026-02-28 10:36:48.948272979 +0000 UTC m=+1.117989940 container remove 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:36:48 np0005634017 systemd[1]: libpod-conmon-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope: Deactivated successfully.
Feb 28 05:36:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:36:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:36:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:36:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:36:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:36:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807439814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.200 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.207 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.225 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.255 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.442 243456 DEBUG nova.network.neutron [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updated VIF entry in instance network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.443 243456 DEBUG nova.network.neutron [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:49 np0005634017 nova_compute[243452]: 2026-02-28 10:36:49.473 243456 DEBUG oslo_concurrency.lockutils [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:36:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:36:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:36:50 np0005634017 nova_compute[243452]: 2026-02-28 10:36:50.260 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:50 np0005634017 nova_compute[243452]: 2026-02-28 10:36:50.261 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:36:50 np0005634017 nova_compute[243452]: 2026-02-28 10:36:50.326 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:36:50 np0005634017 nova_compute[243452]: 2026-02-28 10:36:50.424 243456 INFO nova.compute.manager [None req-f9a6f329-827d-4ee7-9613-8890c5cac900 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Get console output#033[00m
Feb 28 05:36:50 np0005634017 nova_compute[243452]: 2026-02-28 10:36:50.434 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:36:50 np0005634017 nova_compute[243452]: 2026-02-28 10:36:50.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 05:36:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Feb 28 05:36:53 np0005634017 nova_compute[243452]: 2026-02-28 10:36:53.653 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:53 np0005634017 nova_compute[243452]: 2026-02-28 10:36:53.654 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:53 np0005634017 nova_compute[243452]: 2026-02-28 10:36:53.654 243456 DEBUG nova.objects.instance [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:36:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:53Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:2a:81 10.100.0.13
Feb 28 05:36:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:36:53Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:2a:81 10.100.0.13
Feb 28 05:36:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:54 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 05:36:54 np0005634017 nova_compute[243452]: 2026-02-28 10:36:54.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:36:54 np0005634017 nova_compute[243452]: 2026-02-28 10:36:54.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 288 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Feb 28 05:36:54 np0005634017 nova_compute[243452]: 2026-02-28 10:36:54.926 243456 DEBUG nova.objects.instance [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_requests' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:36:54 np0005634017 nova_compute[243452]: 2026-02-28 10:36:54.945 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:36:55 np0005634017 nova_compute[243452]: 2026-02-28 10:36:55.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:36:56 np0005634017 nova_compute[243452]: 2026-02-28 10:36:56.000 243456 DEBUG nova.policy [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:36:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 304 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.2 MiB/s wr, 122 op/s
Feb 28 05:36:56 np0005634017 nova_compute[243452]: 2026-02-28 10:36:56.925 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully created port: 784aca10-13b2-42d5-9828-68914533af46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:36:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:36:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:36:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:36:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.230 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully updated port: 784aca10-13b2-42d5-9828-68914533af46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.244 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.245 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.245 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.331 243456 DEBUG nova.compute.manager [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.331 243456 DEBUG nova.compute.manager [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-784aca10-13b2-42d5-9828-68914533af46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:36:58 np0005634017 nova_compute[243452]: 2026-02-28 10:36:58.332 243456 DEBUG oslo_concurrency.lockutils [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:36:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 3.3 MiB/s wr, 105 op/s
Feb 28 05:36:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:36:59 np0005634017 nova_compute[243452]: 2026-02-28 10:36:59.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.155 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.182 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.184 243456 DEBUG oslo_concurrency.lockutils [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.184 243456 DEBUG nova.network.neutron [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 784aca10-13b2-42d5-9828-68914533af46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.189 243456 DEBUG nova.virt.libvirt.vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.189 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.191 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.191 243456 DEBUG os_vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.193 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.194 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.197 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap784aca10-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap784aca10-13, col_values=(('external_ids', {'iface-id': '784aca10-13b2-42d5-9828-68914533af46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:24:67', 'vm-uuid': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.2017] manager: (tap784aca10-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.213 243456 INFO os_vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.215 243456 DEBUG nova.virt.libvirt.vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.215 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.217 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.221 243456 DEBUG nova.virt.libvirt.guest [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] attach device xml: <interface type="ethernet">
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:6a:24:67"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <target dev="tap784aca10-13"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:37:00 np0005634017 nova_compute[243452]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.2372] manager: (tap784aca10-13): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Feb 28 05:37:00 np0005634017 kernel: tap784aca10-13: entered promiscuous mode
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.240 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:00Z|01367|binding|INFO|Claiming lport 784aca10-13b2-42d5-9828-68914533af46 for this chassis.
Feb 28 05:37:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:00Z|01368|binding|INFO|784aca10-13b2-42d5-9828-68914533af46: Claiming fa:16:3e:6a:24:67 10.100.0.20
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.254 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:24:67 10.100.0.20'], port_security=['fa:16:3e:6a:24:67 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=784aca10-13b2-42d5-9828-68914533af46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.256 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 784aca10-13b2-42d5-9828-68914533af46 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 bound to our chassis#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361#033[00m
Feb 28 05:37:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:00Z|01369|binding|INFO|Setting lport 784aca10-13b2-42d5-9828-68914533af46 ovn-installed in OVS
Feb 28 05:37:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:00Z|01370|binding|INFO|Setting lport 784aca10-13b2-42d5-9828-68914533af46 up in Southbound
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 systemd-udevd[362733]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.276 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc00c5e8-8599-444b-bd3b-86368c05dee3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.277 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9fbbf27e-01 in ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.279 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9fbbf27e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.279 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c42da96c-cf10-4b19-8436-41aa1c5fbb47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8c122c-653c-4e8b-8abf-6289da949d9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.2890] device (tap784aca10-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.2898] device (tap784aca10-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.300 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fd91fe58-cf17-4812-ba55-4489bb8f7fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6005828c-294c-43a8-b5b4-789cfe4ba3ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.333 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.334 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.334 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e3:60:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.335 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:6a:24:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6f75d9-e259-4a5e-85bb-df181ed38d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 systemd-udevd[362737]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.3602] manager: (tap9fbbf27e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48ae886a-63f7-4046-919c-671e6e2530fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.366 243456 DEBUG nova.virt.libvirt.guest [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:37:00</nova:creationTime>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:37:00 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    <nova:port uuid="784aca10-13b2-42d5-9828-68914533af46">
Feb 28 05:37:00 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:00 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:37:00 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:37:00 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.391 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5418d987-0901-4c19-9cbb-28b658d70244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.395 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a25b914d-86a6-4c46-82a8-df4733d851bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.4182] device (tap9fbbf27e-00): carrier: link connected
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.421 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c21e2053-c327-4fc8-9d8c-ffdfa49f0a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.423 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.439 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc85ea7a-10a3-41e8-8a40-7eba656e9d7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362760, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.458 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93488ba5-5622-45b7-8ea2-c42f9f7e0b6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:56da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652770, 'tstamp': 652770}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362761, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8ec5dc-cb07-43da-bdac-d9821871309f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362762, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.514 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e538215-274b-4333-ac14-e417dab81c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.577 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94f213e3-5025-42dc-905e-6decea640682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.580 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.581 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.581 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fbbf27e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 kernel: tap9fbbf27e-00: entered promiscuous mode
Feb 28 05:37:00 np0005634017 NetworkManager[49805]: <info>  [1772275020.5848] manager: (tap9fbbf27e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.590 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9fbbf27e-00, col_values=(('external_ids', {'iface-id': '5b7b2456-d9c3-4443-8d4c-51b3db80db5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:00Z|01371|binding|INFO|Releasing lport 5b7b2456-d9c3-4443-8d4c-51b3db80db5c from this chassis (sb_readonly=0)
Feb 28 05:37:00 np0005634017 nova_compute[243452]: 2026-02-28 10:37:00.602 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.604 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.605 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3541856-976f-47b0-a7df-acc1e60b4b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.606 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.pid.haproxy
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:37:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.606 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'env', 'PROCESS_TAG=haproxy-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:37:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Feb 28 05:37:01 np0005634017 podman[362794]: 2026-02-28 10:37:01.019365657 +0000 UTC m=+0.061173994 container create cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:37:01 np0005634017 systemd[1]: Started libpod-conmon-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0.scope.
Feb 28 05:37:01 np0005634017 podman[362794]: 2026-02-28 10:37:00.992119315 +0000 UTC m=+0.033927662 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.096 243456 DEBUG nova.compute.manager [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.097 243456 DEBUG oslo_concurrency.lockutils [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.098 243456 DEBUG oslo_concurrency.lockutils [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.098 243456 DEBUG oslo_concurrency.lockutils [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.099 243456 DEBUG nova.compute.manager [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.100 243456 WARNING nova.compute.manager [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:37:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de54db0465fd2bcf02f33b89700b7eb0c2eaa14d4e03df2211f39fd0f3421/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:01 np0005634017 podman[362794]: 2026-02-28 10:37:01.133427638 +0000 UTC m=+0.175236045 container init cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:37:01 np0005634017 podman[362794]: 2026-02-28 10:37:01.14162518 +0000 UTC m=+0.183433547 container start cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:37:01 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : New worker (362816) forked
Feb 28 05:37:01 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : Loading success.
Feb 28 05:37:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:01Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:24:67 10.100.0.20
Feb 28 05:37:01 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:01Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:24:67 10.100.0.20
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.951 243456 DEBUG nova.network.neutron [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 784aca10-13b2-42d5-9828-68914533af46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.952 243456 DEBUG nova.network.neutron [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:01 np0005634017 nova_compute[243452]: 2026-02-28 10:37:01.977 243456 DEBUG oslo_concurrency.lockutils [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.425 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.470 243456 DEBUG nova.compute.manager [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.470 243456 DEBUG oslo_concurrency.lockutils [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 DEBUG oslo_concurrency.lockutils [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 DEBUG oslo_concurrency.lockutils [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 DEBUG nova.compute.manager [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:03 np0005634017 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 WARNING nova.compute.manager [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:37:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:04 np0005634017 nova_compute[243452]: 2026-02-28 10:37:04.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 05:37:05 np0005634017 nova_compute[243452]: 2026-02-28 10:37:05.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:05 np0005634017 nova_compute[243452]: 2026-02-28 10:37:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:05 np0005634017 nova_compute[243452]: 2026-02-28 10:37:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:37:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.6 MiB/s wr, 45 op/s
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.553 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.554 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.628 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.848 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.850 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.860 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:37:07 np0005634017 nova_compute[243452]: 2026-02-28 10:37:07.861 243456 INFO nova.compute.claims [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.150 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069755206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.739 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.747 243456 DEBUG nova.compute.provider_tree [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.766 243456 DEBUG nova.scheduler.client.report [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.786 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.788 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.847 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.848 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.872 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:37:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 349 KiB/s wr, 28 op/s
Feb 28 05:37:08 np0005634017 nova_compute[243452]: 2026-02-28 10:37:08.893 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:37:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.003 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.006 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.008 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Creating image(s)#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.034 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.071 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.103 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.109 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.157 243456 DEBUG nova.policy [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.171 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.172 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.189 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.207 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.208 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.209 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.209 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.238 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.244 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.317 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.318 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.325 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.326 243456 INFO nova.compute.claims [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.510 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.549 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.642 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.762 243456 DEBUG nova.objects.instance [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.782 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.783 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Ensure instance console log exists: /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.784 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:09 np0005634017 nova_compute[243452]: 2026-02-28 10:37:09.846 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully created port: 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:37:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423731554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:10 np0005634017 podman[363033]: 2026-02-28 10:37:10.173838301 +0000 UTC m=+0.102560916 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.201 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.209 243456 DEBUG nova.compute.provider_tree [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.229 243456 DEBUG nova.scheduler.client.report [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.254 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.255 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:37:10 np0005634017 podman[363052]: 2026-02-28 10:37:10.269176552 +0000 UTC m=+0.091063421 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.320 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.321 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.344 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.368 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.471 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.474 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.474 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Creating image(s)#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.498 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.520 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.544 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.548 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.603 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.604 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.605 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.605 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.633 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.638 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cbf94077-46c2-457d-8486-25f3dd0517b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 459 KiB/s wr, 23 op/s
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.911 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cbf94077-46c2-457d-8486-25f3dd0517b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.986 243456 DEBUG nova.policy [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:37:10 np0005634017 nova_compute[243452]: 2026-02-28 10:37:10.995 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.072 243456 DEBUG nova.objects.instance [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid cbf94077-46c2-457d-8486-25f3dd0517b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.089 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.090 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Ensure instance console log exists: /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.090 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.090 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.091 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:11 np0005634017 nova_compute[243452]: 2026-02-28 10:37:11.393 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully created port: fbc01aec-00f6-4ce9-960c-352295a6c18e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.278 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Successfully created port: 282fa143-1175-40e2-9ab8-2d2b012d5b78 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.684 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully updated port: 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.807 243456 DEBUG nova.compute.manager [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.807 243456 DEBUG nova.compute.manager [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.807 243456 DEBUG oslo_concurrency.lockutils [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.808 243456 DEBUG oslo_concurrency.lockutils [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.808 243456 DEBUG nova.network.neutron [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 376 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 MiB/s wr, 28 op/s
Feb 28 05:37:12 np0005634017 nova_compute[243452]: 2026-02-28 10:37:12.964 243456 DEBUG nova.network.neutron [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.362 243456 DEBUG nova.network.neutron [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.376 243456 DEBUG oslo_concurrency.lockutils [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.636 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Successfully updated port: 282fa143-1175-40e2-9ab8-2d2b012d5b78 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.665 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.666 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.666 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.691 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully updated port: fbc01aec-00f6-4ce9-960c-352295a6c18e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.708 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.708 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.709 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.857 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:37:13 np0005634017 nova_compute[243452]: 2026-02-28 10:37:13.906 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:37:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:14 np0005634017 nova_compute[243452]: 2026-02-28 10:37:14.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:14 np0005634017 nova_compute[243452]: 2026-02-28 10:37:14.885 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-changed-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:14 np0005634017 nova_compute[243452]: 2026-02-28 10:37:14.886 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Refreshing instance network info cache due to event network-changed-282fa143-1175-40e2-9ab8-2d2b012d5b78. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:14 np0005634017 nova_compute[243452]: 2026-02-28 10:37:14.886 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 387 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 MiB/s wr, 41 op/s
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.382 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updating instance_info_cache with network_info: [{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.404 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.405 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance network_info: |[{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.406 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.407 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Refreshing network info cache for port 282fa143-1175-40e2-9ab8-2d2b012d5b78 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.413 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start _get_guest_xml network_info=[{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.421 243456 WARNING nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.426 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.427 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.436 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.437 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.438 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.438 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.441 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.441 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:37:15 np0005634017 nova_compute[243452]: 2026-02-28 10:37:15.444 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:37:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451173864' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.046 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.081 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.087 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:16 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:37:16 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430262589' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.669 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.672 243456 DEBUG nova.virt.libvirt.vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1336250417',display_name='tempest-TestNetworkBasicOps-server-1336250417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1336250417',id=135,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdV8K7ZSzWmQkbjGONrQPTCovb6WrbWPkGawJk1yo00887S+Z8HGahuNdlpRV8Rys7acQ23bSRlDv0kC4ADkUOxH7s/2SDS0CUgX9WEqv+CcAPyMpFhaDSJb5+3QqGJmA==',key_name='tempest-TestNetworkBasicOps-99259032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-6zhm2bv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:10Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=cbf94077-46c2-457d-8486-25f3dd0517b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.672 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.674 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.676 243456 DEBUG nova.objects.instance [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid cbf94077-46c2-457d-8486-25f3dd0517b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.699 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <uuid>cbf94077-46c2-457d-8486-25f3dd0517b9</uuid>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <name>instance-00000087</name>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1336250417</nova:name>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:37:15</nova:creationTime>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <nova:port uuid="282fa143-1175-40e2-9ab8-2d2b012d5b78">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <entry name="serial">cbf94077-46c2-457d-8486-25f3dd0517b9</entry>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <entry name="uuid">cbf94077-46c2-457d-8486-25f3dd0517b9</entry>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cbf94077-46c2-457d-8486-25f3dd0517b9_disk">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:65:25:11"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <target dev="tap282fa143-11"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/console.log" append="off"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:37:16 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:37:16 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:37:16 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:37:16 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.700 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Preparing to wait for external event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.701 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.701 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.702 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.703 243456 DEBUG nova.virt.libvirt.vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1336250417',display_name='tempest-TestNetworkBasicOps-server-1336250417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1336250417',id=135,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdV8K7ZSzWmQkbjGONrQPTCovb6WrbWPkGawJk1yo00887S+Z8HGahuNdlpRV8Rys7acQ23bSRlDv0kC4ADkUOxH7s/2SDS0CUgX9WEqv+CcAPyMpFhaDSJb5+3QqGJmA==',key_name='tempest-TestNetworkBasicOps-99259032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-6zhm2bv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:10Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=cbf94077-46c2-457d-8486-25f3dd0517b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.703 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.704 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.705 243456 DEBUG os_vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.706 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.707 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.713 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap282fa143-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap282fa143-11, col_values=(('external_ids', {'iface-id': '282fa143-1175-40e2-9ab8-2d2b012d5b78', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:25:11', 'vm-uuid': 'cbf94077-46c2-457d-8486-25f3dd0517b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:16 np0005634017 NetworkManager[49805]: <info>  [1772275036.7316] manager: (tap282fa143-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.741 243456 INFO os_vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11')#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.827 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.828 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.828 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:65:25:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.829 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Using config drive#033[00m
Feb 28 05:37:16 np0005634017 nova_compute[243452]: 2026-02-28 10:37:16.867 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.239 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Creating config drive at /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.246 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu7lreufj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.392 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu7lreufj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.424 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.429 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.616 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.617 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deleting local config drive /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config because it was imported into RBD.#033[00m
Feb 28 05:37:17 np0005634017 NetworkManager[49805]: <info>  [1772275037.6893] manager: (tap282fa143-11): new Tun device (/org/freedesktop/NetworkManager/Devices/574)
Feb 28 05:37:17 np0005634017 kernel: tap282fa143-11: entered promiscuous mode
Feb 28 05:37:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:17Z|01372|binding|INFO|Claiming lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 for this chassis.
Feb 28 05:37:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:17Z|01373|binding|INFO|282fa143-1175-40e2-9ab8-2d2b012d5b78: Claiming fa:16:3e:65:25:11 10.100.0.28
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:17Z|01374|binding|INFO|Setting lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 ovn-installed in OVS
Feb 28 05:37:17 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:17Z|01375|binding|INFO|Setting lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 up in Southbound
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.702 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:25:11 10.100.0.28'], port_security=['fa:16:3e:65:25:11 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'cbf94077-46c2-457d-8486-25f3dd0517b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47e8c685-91d8-4bae-bf96-b1284d3eef68', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=282fa143-1175-40e2-9ab8-2d2b012d5b78) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.705 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 282fa143-1175-40e2-9ab8-2d2b012d5b78 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 bound to our chassis#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361#033[00m
Feb 28 05:37:17 np0005634017 systemd-udevd[363380]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a51ff79-0952-4e75-b17a-a26d19be521b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:17 np0005634017 NetworkManager[49805]: <info>  [1772275037.7353] device (tap282fa143-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:37:17 np0005634017 NetworkManager[49805]: <info>  [1772275037.7367] device (tap282fa143-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:37:17 np0005634017 systemd-machined[209480]: New machine qemu-167-instance-00000087.
Feb 28 05:37:17 np0005634017 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.758 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[45c36006-36c6-4d73-bfe6-d53c78ee27f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.762 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0628c4-8570-4c5c-836d-36a98fa8fc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.790 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc4585f-d802-42b4-84e4-6884393f56a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.808 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfd0520-26e4-459c-b2c2-548b9191a168]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363392, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.831 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c67e13-c195-4a2c-a845-35b66bb4657d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652782, 'tstamp': 652782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363395, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652785, 'tstamp': 652785}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363395, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.834 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:17 np0005634017 nova_compute[243452]: 2026-02-28 10:37:17.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fbbf27e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9fbbf27e-00, col_values=(('external_ids', {'iface-id': '5b7b2456-d9c3-4443-8d4c-51b3db80db5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:17 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.839 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.025 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.033 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updated VIF entry in instance network info cache for port 282fa143-1175-40e2-9ab8-2d2b012d5b78. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.034 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updating instance_info_cache with network_info: [{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.046 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.046 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance network_info: |[{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.047 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.047 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-fbc01aec-00f6-4ce9-960c-352295a6c18e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port fbc01aec-00f6-4ce9-960c-352295a6c18e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.053 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start _get_guest_xml network_info=[{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.060 243456 WARNING nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.066 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.067 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.070 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.070 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.071 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.071 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.077 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.121 243456 DEBUG nova.compute.manager [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.122 243456 DEBUG oslo_concurrency.lockutils [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.122 243456 DEBUG oslo_concurrency.lockutils [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.123 243456 DEBUG oslo_concurrency.lockutils [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.123 243456 DEBUG nova.compute.manager [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Processing event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.124 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275038.1241934, cbf94077-46c2-457d-8486-25f3dd0517b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.124 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Started (Lifecycle Event)#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.128 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.133 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.141 243456 INFO nova.virt.libvirt.driver [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance spawned successfully.#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.150 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.155 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.160 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.187 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.187 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275038.126763, cbf94077-46c2-457d-8486-25f3dd0517b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.188 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.198 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.199 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.200 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.200 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.201 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.202 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.224 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275038.1318524, cbf94077-46c2-457d-8486-25f3dd0517b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.251 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.256 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.266 243456 INFO nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 7.79 seconds to spawn the instance on the hypervisor.
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.267 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.276 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.346 243456 INFO nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 9.06 seconds to build instance.
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.371 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:37:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:37:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/917389727' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.689 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.726 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:37:18 np0005634017 nova_compute[243452]: 2026-02-28 10:37:18.732 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:37:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Feb 28 05:37:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:37:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284198875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.434 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.436 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.436 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.437 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.438 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.438 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.439 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.440 243456 DEBUG nova.objects.instance [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.454 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <uuid>ae0acd9c-7c2d-4e8b-84a4-d577eff31d02</uuid>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <name>instance-00000086</name>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-25721937</nova:name>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:37:18</nova:creationTime>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:port uuid="0739db55-7f81-4ed8-a2c2-73ad1bf09084">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <nova:port uuid="fbc01aec-00f6-4ce9-960c-352295a6c18e">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:febb:cb24" ipVersion="6"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:febb:cb24" ipVersion="6"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <entry name="serial">ae0acd9c-7c2d-4e8b-84a4-d577eff31d02</entry>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <entry name="uuid">ae0acd9c-7c2d-4e8b-84a4-d577eff31d02</entry>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b8:ed:5e"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <target dev="tap0739db55-7f"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:bb:cb:24"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <target dev="tapfbc01aec-00"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/console.log" append="off"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:37:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:37:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:37:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:37:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.459 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Preparing to wait for external event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.459 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Preparing to wait for external event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.461 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.461 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.462 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.462 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.463 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.463 243456 DEBUG os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.469 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0739db55-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.469 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0739db55-7f, col_values=(('external_ids', {'iface-id': '0739db55-7f81-4ed8-a2c2-73ad1bf09084', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:ed:5e', 'vm-uuid': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:19 np0005634017 NetworkManager[49805]: <info>  [1772275039.4738] manager: (tap0739db55-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.480 243456 INFO os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f')#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.481 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.481 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.482 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.482 243456 DEBUG os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.483 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.484 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.486 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.486 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbc01aec-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.486 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbc01aec-00, col_values=(('external_ids', {'iface-id': 'fbc01aec-00f6-4ce9-960c-352295a6c18e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:cb:24', 'vm-uuid': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:19 np0005634017 NetworkManager[49805]: <info>  [1772275039.4902] manager: (tapfbc01aec-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.498 243456 INFO os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00')#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.550 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.551 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.551 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:b8:ed:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.551 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:bb:cb:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.552 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Using config drive#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.573 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.973 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updated VIF entry in instance network info cache for port fbc01aec-00f6-4ce9-960c-352295a6c18e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.975 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:19 np0005634017 nova_compute[243452]: 2026-02-28 10:37:19.997 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.078 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Creating config drive at /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.085 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8amf3fzn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.177 243456 DEBUG nova.compute.manager [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.178 243456 DEBUG oslo_concurrency.lockutils [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.179 243456 DEBUG oslo_concurrency.lockutils [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.179 243456 DEBUG oslo_concurrency.lockutils [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.180 243456 DEBUG nova.compute.manager [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] No waiting events found dispatching network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.181 243456 WARNING nova.compute.manager [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received unexpected event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.238 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8amf3fzn" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.259 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.265 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.557 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.560 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deleting local config drive /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config because it was imported into RBD.#033[00m
Feb 28 05:37:20 np0005634017 kernel: tap0739db55-7f: entered promiscuous mode
Feb 28 05:37:20 np0005634017 NetworkManager[49805]: <info>  [1772275040.6178] manager: (tap0739db55-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01376|binding|INFO|Claiming lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 for this chassis.
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01377|binding|INFO|0739db55-7f81-4ed8-a2c2-73ad1bf09084: Claiming fa:16:3e:b8:ed:5e 10.100.0.9
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.631 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:ed:5e 10.100.0.9'], port_security=['fa:16:3e:b8:ed:5e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0739db55-7f81-4ed8-a2c2-73ad1bf09084) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.632 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d bound to our chassis#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.633 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d#033[00m
Feb 28 05:37:20 np0005634017 NetworkManager[49805]: <info>  [1772275040.6359] device (tap0739db55-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:37:20 np0005634017 NetworkManager[49805]: <info>  [1772275040.6367] device (tap0739db55-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01378|binding|INFO|Setting lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 ovn-installed in OVS
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01379|binding|INFO|Setting lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 up in Southbound
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 kernel: tapfbc01aec-00: entered promiscuous mode
Feb 28 05:37:20 np0005634017 NetworkManager[49805]: <info>  [1772275040.6520] manager: (tapfbc01aec-00): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01380|binding|INFO|Claiming lport fbc01aec-00f6-4ce9-960c-352295a6c18e for this chassis.
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01381|binding|INFO|fbc01aec-00f6-4ce9-960c-352295a6c18e: Claiming fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.657 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2568481-16cf-4b43-88fc-8c802ef98abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.663 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], port_security=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:febb:cb24/64 2001:db8::f816:3eff:febb:cb24/64', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fbc01aec-00f6-4ce9-960c-352295a6c18e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:20 np0005634017 NetworkManager[49805]: <info>  [1772275040.6649] device (tapfbc01aec-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:37:20 np0005634017 NetworkManager[49805]: <info>  [1772275040.6658] device (tapfbc01aec-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01382|binding|INFO|Setting lport fbc01aec-00f6-4ce9-960c-352295a6c18e ovn-installed in OVS
Feb 28 05:37:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:20Z|01383|binding|INFO|Setting lport fbc01aec-00f6-4ce9-960c-352295a6c18e up in Southbound
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.668 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.690 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9bec46e7-d230-4b60-a099-72cd20369ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.697 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffac8f22-ca58-42ce-ba7f-0f63d3a3cf8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 systemd-machined[209480]: New machine qemu-168-instance-00000086.
Feb 28 05:37:20 np0005634017 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.728 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eed4ff30-c40c-4b76-9072-1c68c2f322e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d03fe1a-d7b7-495e-8b8d-ded8f5d84d4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 31213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363591, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.766 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4282c692-dc57-4105-a21d-c02f3e823fa7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650664, 'tstamp': 650664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363595, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650667, 'tstamp': 650667}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363595, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.768 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.769 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f9d98-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68f9d98-c0, col_values=(('external_ids', {'iface-id': 'b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.772 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.773 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fbc01aec-00f6-4ce9-960c-352295a6c18e in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.775 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[758e9416-e228-480a-8f16-93018e62dc08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.826 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4984b230-426b-4b05-9827-86070c549aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[239a87f5-de0f-46cc-93ad-1dfeb3d58b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.861 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[08766900-d6ad-4e88-8407-5445ea0415a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.882 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd169ff-33e1-48e2-bc42-9c105f1bcee0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363602, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 104 op/s
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d43f3792-8728-4c16-a3c8-9b20f0288bc9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf5aa4f8-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650761, 'tstamp': 650761}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363603, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.905 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 nova_compute[243452]: 2026-02-28 10:37:20.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.911 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5aa4f8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.911 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.911 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5aa4f8-b0, col_values=(('external_ids', {'iface-id': 'a03e0d2d-7933-48b4-9d0a-61369ba848c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.912 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.193 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275041.1918066, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.194 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Started (Lifecycle Event)#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.221 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.226 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275041.1922653, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.226 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.244 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.249 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:37:21 np0005634017 nova_compute[243452]: 2026-02-28 10:37:21.272 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.263 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.264 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.265 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.265 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.266 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Processing event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.266 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.267 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.268 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.268 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.269 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No event matching network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 in dict_keys([('network-vif-plugged', 'fbc01aec-00f6-4ce9-960c-352295a6c18e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.270 243456 WARNING nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.270 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.271 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.272 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.273 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.273 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Processing event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.274 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.275 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.276 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.276 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.277 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.277 243456 WARNING nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.279 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.283 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275042.2831938, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.284 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.288 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.292 243456 INFO nova.virt.libvirt.driver [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance spawned successfully.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.293 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.313 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.322 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.327 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.328 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.328 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.329 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.329 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.330 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.337 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.377 243456 INFO nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 13.37 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.378 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.436 243456 INFO nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 14.63 seconds to build instance.#033[00m
Feb 28 05:37:22 np0005634017 nova_compute[243452]: 2026-02-28 10:37:22.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 113 op/s
Feb 28 05:37:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:24 np0005634017 nova_compute[243452]: 2026-02-28 10:37:24.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:24 np0005634017 nova_compute[243452]: 2026-02-28 10:37:24.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Feb 28 05:37:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 751 KiB/s wr, 116 op/s
Feb 28 05:37:27 np0005634017 nova_compute[243452]: 2026-02-28 10:37:27.396 243456 DEBUG nova.compute.manager [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:27 np0005634017 nova_compute[243452]: 2026-02-28 10:37:27.397 243456 DEBUG nova.compute.manager [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:27 np0005634017 nova_compute[243452]: 2026-02-28 10:37:27.398 243456 DEBUG oslo_concurrency.lockutils [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:27 np0005634017 nova_compute[243452]: 2026-02-28 10:37:27.399 243456 DEBUG oslo_concurrency.lockutils [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:27 np0005634017 nova_compute[243452]: 2026-02-28 10:37:27.399 243456 DEBUG nova.network.neutron [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 149 op/s
Feb 28 05:37:28 np0005634017 nova_compute[243452]: 2026-02-28 10:37:28.932 243456 DEBUG nova.network.neutron [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updated VIF entry in instance network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:28 np0005634017 nova_compute[243452]: 2026-02-28 10:37:28.932 243456 DEBUG nova.network.neutron [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:28 np0005634017 nova_compute[243452]: 2026-02-28 10:37:28.953 243456 DEBUG oslo_concurrency.lockutils [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:37:29
Feb 28 05:37:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:37:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:37:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta']
Feb 28 05:37:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:37:29 np0005634017 nova_compute[243452]: 2026-02-28 10:37:29.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:29 np0005634017 nova_compute[243452]: 2026-02-28 10:37:29.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:37:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:30Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:25:11 10.100.0.28
Feb 28 05:37:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:30Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:25:11 10.100.0.28
Feb 28 05:37:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 413 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 170 op/s
Feb 28 05:37:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Feb 28 05:37:33 np0005634017 nova_compute[243452]: 2026-02-28 10:37:33.364 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:34 np0005634017 nova_compute[243452]: 2026-02-28 10:37:34.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:34 np0005634017 nova_compute[243452]: 2026-02-28 10:37:34.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 434 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 124 op/s
Feb 28 05:37:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:35Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:ed:5e 10.100.0.9
Feb 28 05:37:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:35Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:ed:5e 10.100.0.9
Feb 28 05:37:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Feb 28 05:37:37 np0005634017 nova_compute[243452]: 2026-02-28 10:37:37.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 156 op/s
Feb 28 05:37:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:39.171 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:39.175 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.299 243456 DEBUG nova.compute.manager [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.299 243456 DEBUG nova.compute.manager [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-784aca10-13b2-42d5-9828-68914533af46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.300 243456 DEBUG oslo_concurrency.lockutils [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.300 243456 DEBUG oslo_concurrency.lockutils [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.300 243456 DEBUG nova.network.neutron [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 784aca10-13b2-42d5-9828-68914533af46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:39 np0005634017 nova_compute[243452]: 2026-02-28 10:37:39.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:40.177 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:40 np0005634017 nova_compute[243452]: 2026-02-28 10:37:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:40 np0005634017 nova_compute[243452]: 2026-02-28 10:37:40.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:37:40 np0005634017 nova_compute[243452]: 2026-02-28 10:37:40.425 243456 DEBUG nova.network.neutron [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 784aca10-13b2-42d5-9828-68914533af46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:40 np0005634017 nova_compute[243452]: 2026-02-28 10:37:40.426 243456 DEBUG nova.network.neutron [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:40 np0005634017 nova_compute[243452]: 2026-02-28 10:37:40.446 243456 DEBUG oslo_concurrency.lockutils [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Feb 28 05:37:41 np0005634017 podman[363649]: 2026-02-28 10:37:41.170301316 +0000 UTC m=+0.093195731 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:37:41 np0005634017 podman[363648]: 2026-02-28 10:37:41.197833176 +0000 UTC m=+0.119430944 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030503430589766695 of space, bias 1.0, pg target 0.9151029176930009 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493843790406927 of space, bias 1.0, pg target 0.7481531371220781 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.326593465495173e-07 of space, bias 4.0, pg target 0.0008791912158594207 quantized to 16 (current 16)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:37:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:37:42 np0005634017 nova_compute[243452]: 2026-02-28 10:37:42.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:42 np0005634017 nova_compute[243452]: 2026-02-28 10:37:42.337 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:42 np0005634017 nova_compute[243452]: 2026-02-28 10:37:42.338 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.3 MiB/s wr, 102 op/s
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.750 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.751 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.752 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.752 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.753 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.756 243456 INFO nova.compute.manager [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Terminating instance#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.757 243456 DEBUG nova.compute.manager [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:37:43 np0005634017 kernel: tap282fa143-11 (unregistering): left promiscuous mode
Feb 28 05:37:43 np0005634017 NetworkManager[49805]: <info>  [1772275063.8014] device (tap282fa143-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:43Z|01384|binding|INFO|Releasing lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 from this chassis (sb_readonly=0)
Feb 28 05:37:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:43Z|01385|binding|INFO|Setting lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 down in Southbound
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:43 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:43Z|01386|binding|INFO|Removing iface tap282fa143-11 ovn-installed in OVS
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.820 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:25:11 10.100.0.28'], port_security=['fa:16:3e:65:25:11 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'cbf94077-46c2-457d-8486-25f3dd0517b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '47e8c685-91d8-4bae-bf96-b1284d3eef68', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=282fa143-1175-40e2-9ab8-2d2b012d5b78) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.822 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 282fa143-1175-40e2-9ab8-2d2b012d5b78 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 unbound from our chassis#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.843 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7dc1c3-4ece-41db-b3e4-5cac5f3e8c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:43 np0005634017 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Feb 28 05:37:43 np0005634017 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Consumed 13.408s CPU time.
Feb 28 05:37:43 np0005634017 systemd-machined[209480]: Machine qemu-167-instance-00000087 terminated.
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.874 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1208c4d-d109-4b27-bf41-635388f00c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.878 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[81443d68-1050-4599-a6f1-271b25e8f284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.907 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2def45-d020-4d39-828d-786e4e641f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.923 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b0b294-fca3-46eb-89ba-bab724d67322]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363704, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.943 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[feb20d39-d18f-4ec7-a3b6-93cb81453a71]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652782, 'tstamp': 652782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363705, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652785, 'tstamp': 652785}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363705, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.945 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.952253) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275063952294, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1927, "num_deletes": 256, "total_data_size": 3146175, "memory_usage": 3192368, "flush_reason": "Manual Compaction"}
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.954 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fbbf27e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9fbbf27e-00, col_values=(('external_ids', {'iface-id': '5b7b2456-d9c3-4443-8d4c-51b3db80db5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.956 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:43 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275063974883, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3069104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45074, "largest_seqno": 47000, "table_properties": {"data_size": 3060395, "index_size": 5331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17893, "raw_average_key_size": 19, "raw_value_size": 3042958, "raw_average_value_size": 3381, "num_data_blocks": 237, "num_entries": 900, "num_filter_entries": 900, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274866, "oldest_key_time": 1772274866, "file_creation_time": 1772275063, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 22701 microseconds, and 5723 cpu microseconds.
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.974949) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3069104 bytes OK
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.974975) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977396) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977412) EVENT_LOG_v1 {"time_micros": 1772275063977407, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3137974, prev total WAL file size 3137974, number of live WAL files 2.
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373534' seq:72057594037927935, type:22 .. '6C6F676D0032303036' seq:0, type:0; will stop at (end)
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2997KB)], [104(8483KB)]
Feb 28 05:37:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275063978037, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11756156, "oldest_snapshot_seqno": -1}
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.999 243456 INFO nova.virt.libvirt.driver [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance destroyed successfully.#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:43.999 243456 DEBUG nova.objects.instance [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid cbf94077-46c2-457d-8486-25f3dd0517b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.019 243456 DEBUG nova.virt.libvirt.vif [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1336250417',display_name='tempest-TestNetworkBasicOps-server-1336250417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1336250417',id=135,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdV8K7ZSzWmQkbjGONrQPTCovb6WrbWPkGawJk1yo00887S+Z8HGahuNdlpRV8Rys7acQ23bSRlDv0kC4ADkUOxH7s/2SDS0CUgX9WEqv+CcAPyMpFhaDSJb5+3QqGJmA==',key_name='tempest-TestNetworkBasicOps-99259032',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:37:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-6zhm2bv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:18Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=cbf94077-46c2-457d-8486-25f3dd0517b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.019 243456 DEBUG nova.network.os_vif_util [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.020 243456 DEBUG nova.network.os_vif_util [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.021 243456 DEBUG os_vif [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.023 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap282fa143-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.029 243456 INFO os_vif [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11')#033[00m
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7016 keys, 11630128 bytes, temperature: kUnknown
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275064058326, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11630128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11580644, "index_size": 30812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 181198, "raw_average_key_size": 25, "raw_value_size": 11452743, "raw_average_value_size": 1632, "num_data_blocks": 1221, "num_entries": 7016, "num_filter_entries": 7016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275063, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.058593) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11630128 bytes
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.060455) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.3 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.3 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 7540, records dropped: 524 output_compression: NoCompression
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.060475) EVENT_LOG_v1 {"time_micros": 1772275064060465, "job": 62, "event": "compaction_finished", "compaction_time_micros": 80370, "compaction_time_cpu_micros": 21762, "output_level": 6, "num_output_files": 1, "total_output_size": 11630128, "num_input_records": 7540, "num_output_records": 7016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275064060928, "job": 62, "event": "table_file_deletion", "file_number": 106}
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275064061871, "job": 62, "event": "table_file_deletion", "file_number": 104}
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.103 243456 DEBUG nova.compute.manager [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-unplugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.103 243456 DEBUG oslo_concurrency.lockutils [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG oslo_concurrency.lockutils [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG oslo_concurrency.lockutils [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG nova.compute.manager [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] No waiting events found dispatching network-vif-unplugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG nova.compute.manager [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-unplugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.320 243456 INFO nova.virt.libvirt.driver [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deleting instance files /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9_del#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.321 243456 INFO nova.virt.libvirt.driver [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deletion of /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9_del complete#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.404 243456 INFO nova.compute.manager [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.405 243456 DEBUG oslo.service.loopingcall [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.405 243456 DEBUG nova.compute.manager [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.405 243456 DEBUG nova.network.neutron [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:37:44 np0005634017 nova_compute[243452]: 2026-02-28 10:37:44.512 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Feb 28 05:37:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:37:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837739649' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:37:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:37:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837739649' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.193 243456 DEBUG nova.compute.manager [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.194 243456 DEBUG oslo_concurrency.lockutils [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.194 243456 DEBUG oslo_concurrency.lockutils [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.194 243456 DEBUG oslo_concurrency.lockutils [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.195 243456 DEBUG nova.compute.manager [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] No waiting events found dispatching network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.195 243456 WARNING nova.compute.manager [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received unexpected event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.376 243456 DEBUG nova.network.neutron [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.404 243456 INFO nova.compute.manager [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 2.00 seconds to deallocate network for instance.#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.481 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.482 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.586 243456 DEBUG oslo_concurrency.processutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.824 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.825 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.825 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.826 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.826 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.828 243456 INFO nova.compute.manager [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Terminating instance#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.829 243456 DEBUG nova.compute.manager [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:37:46 np0005634017 kernel: tap0739db55-7f (unregistering): left promiscuous mode
Feb 28 05:37:46 np0005634017 NetworkManager[49805]: <info>  [1772275066.8797] device (tap0739db55-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:46Z|01387|binding|INFO|Releasing lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 from this chassis (sb_readonly=0)
Feb 28 05:37:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:46Z|01388|binding|INFO|Setting lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 down in Southbound
Feb 28 05:37:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:46Z|01389|binding|INFO|Removing iface tap0739db55-7f ovn-installed in OVS
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.901 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:ed:5e 10.100.0.9'], port_security=['fa:16:3e:b8:ed:5e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0739db55-7f81-4ed8-a2c2-73ad1bf09084) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.902 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d unbound from our chassis#033[00m
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.903 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d#033[00m
Feb 28 05:37:46 np0005634017 kernel: tapfbc01aec-00 (unregistering): left promiscuous mode
Feb 28 05:37:46 np0005634017 NetworkManager[49805]: <info>  [1772275066.9128] device (tapfbc01aec-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 423 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 1.9 MiB/s wr, 92 op/s
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.932 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e71ff592-e586-4793-8661-f378ca4c4296]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:46Z|01390|binding|INFO|Releasing lport fbc01aec-00f6-4ce9-960c-352295a6c18e from this chassis (sb_readonly=0)
Feb 28 05:37:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:46Z|01391|binding|INFO|Setting lport fbc01aec-00f6-4ce9-960c-352295a6c18e down in Southbound
Feb 28 05:37:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:46Z|01392|binding|INFO|Removing iface tapfbc01aec-00 ovn-installed in OVS
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.937 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.950 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], port_security=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:febb:cb24/64 2001:db8::f816:3eff:febb:cb24/64', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fbc01aec-00f6-4ce9-960c-352295a6c18e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:46 np0005634017 nova_compute[243452]: 2026-02-28 10:37:46.953 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.970 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[37ea6e91-5d07-46cf-9779-1e321e70feb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:46 np0005634017 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Feb 28 05:37:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.974 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb34efe-8bc3-4724-9279-f04b813eb37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:46 np0005634017 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 14.563s CPU time.
Feb 28 05:37:46 np0005634017 systemd-machined[209480]: Machine qemu-168-instance-00000086 terminated.
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.006 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[86b81f85-791f-4b59-a47c-2966df6efcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55ac2cc5-a374-41ef-88e7-21dc96525687]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 31213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363774, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b006e81-cf65-46b9-b0c6-beed5c64171e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650664, 'tstamp': 650664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363775, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650667, 'tstamp': 650667}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363775, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.051 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.055 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f9d98-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68f9d98-c0, col_values=(('external_ids', {'iface-id': 'b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.058 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fbc01aec-00f6-4ce9-960c-352295a6c18e in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.063 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c#033[00m
Feb 28 05:37:47 np0005634017 NetworkManager[49805]: <info>  [1772275067.0652] manager: (tapfbc01aec-00): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.091 243456 INFO nova.virt.libvirt.driver [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance destroyed successfully.#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.092 243456 DEBUG nova.objects.instance [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.100 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[371c5e8a-69cd-465f-bf50-25fb16997a8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.108 243456 DEBUG nova.virt.libvirt.vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:37:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:22Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.108 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.109 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.110 243456 DEBUG os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.112 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0739db55-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.120 243456 INFO os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f')#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.121 243456 DEBUG nova.virt.libvirt.vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:37:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:22Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.121 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.122 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.123 243456 DEBUG os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.124 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbc01aec-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.127 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.129 243456 INFO os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00')#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.137 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a19262fb-24c7-41cb-878e-2b0a7260b2c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.140 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e72b31ce-b9d5-43f3-b2ee-64fa60685660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433001994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.181 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[472eaea3-956a-4a35-a23a-1799ae1653ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.194 243456 DEBUG oslo_concurrency.processutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.203 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad48e882-44f5-45ef-820f-f09866dba418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363824, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.204 243456 DEBUG nova.compute.provider_tree [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.225 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbee128-8941-466d-9254-749ec1ddb840]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf5aa4f8-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650761, 'tstamp': 650761}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363825, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.227 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.232 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5aa4f8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.233 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.234 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5aa4f8-b0, col_values=(('external_ids', {'iface-id': 'a03e0d2d-7933-48b4-9d0a-61369ba848c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.235 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.263 243456 DEBUG nova.scheduler.client.report [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.296 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.329 243456 INFO nova.scheduler.client.report [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance cbf94077-46c2-457d-8486-25f3dd0517b9#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.415 243456 INFO nova.virt.libvirt.driver [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deleting instance files /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_del#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.416 243456 INFO nova.virt.libvirt.driver [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deletion of /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_del complete#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.422 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.481 243456 INFO nova.compute.manager [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.482 243456 DEBUG oslo.service.loopingcall [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.482 243456 DEBUG nova.compute.manager [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:37:47 np0005634017 nova_compute[243452]: 2026-02-28 10:37:47.483 243456 DEBUG nova.network.neutron [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.276 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-deleted-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.276 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.277 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.277 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.278 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.278 243456 DEBUG nova.network.neutron [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.579 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.580 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.581 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:37:48 np0005634017 nova_compute[243452]: 2026-02-28 10:37:48.581 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Feb 28 05:37:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.197 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-784aca10-13b2-42d5-9828-68914533af46" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.199 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-784aca10-13b2-42d5-9828-68914533af46" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.217 243456 DEBUG nova.objects.instance [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.238 243456 DEBUG nova.virt.libvirt.vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.239 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.239 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.244 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.248 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.252 243456 DEBUG nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Attempting to detach device tap784aca10-13 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.253 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:6a:24:67"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <target dev="tap784aca10-13"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.261 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.267 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface>not found in domain: <domain type='kvm' id='165'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <name>instance-00000084</name>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:37:00</nova:creationTime>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:port uuid="784aca10-13b2-42d5-9828-68914533af46">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='serial'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='uuid'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk' index='2'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config' index='1'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e3:60:12'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='tap4b48043a-81'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:6a:24:67'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='tap784aca10-13'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='net1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c786,c875</label>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c786,c875</imagelabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.267 243456 INFO nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tap784aca10-13 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the persistent domain config.#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.268 243456 DEBUG nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] (1/8): Attempting to detach device tap784aca10-13 with device alias net1 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.268 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <mac address="fa:16:3e:6a:24:67"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <model type="virtio"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <mtu size="1442"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <target dev="tap784aca10-13"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </interface>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 28 05:37:49 np0005634017 kernel: tap784aca10-13 (unregistering): left promiscuous mode
Feb 28 05:37:49 np0005634017 NetworkManager[49805]: <info>  [1772275069.3663] device (tap784aca10-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:49Z|01393|binding|INFO|Releasing lport 784aca10-13b2-42d5-9828-68914533af46 from this chassis (sb_readonly=0)
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:49Z|01394|binding|INFO|Setting lport 784aca10-13b2-42d5-9828-68914533af46 down in Southbound
Feb 28 05:37:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:49Z|01395|binding|INFO|Removing iface tap784aca10-13 ovn-installed in OVS
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.377 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772275069.377185, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.379 243456 DEBUG nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Start waiting for the detach event from libvirt for device tap784aca10-13 with device alias net1 for instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.380 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.384 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.385 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface>not found in domain: <domain type='kvm' id='165'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <name>instance-00000084</name>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:37:00</nova:creationTime>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:port uuid="784aca10-13b2-42d5-9828-68914533af46">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <resource>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <partition>/machine</partition>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </resource>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='serial'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='uuid'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <cpu mode='custom' match='exact' check='full'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <vendor>AMD</vendor>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='x2apic'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc-deadline'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='hypervisor'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='tsc_adjust'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='spec-ctrl'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='stibp'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ssbd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='cmp_legacy'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='overflow-recov'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='succor'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='ibrs'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='amd-ssbd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='virt-ssbd'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='lbrv'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='tsc-scale'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='vmcb-clean'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='flushbyasid'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pause-filter'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='pfthreshold'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='xsaves'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='svm'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='require' name='topoext'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='npt'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <feature policy='disable' name='nrip-save'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk' index='2'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='virtio-disk0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config' index='1'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='sata0-0-0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pcie.0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.8'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.9'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.10'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.11'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.12'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.13'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.14'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.15'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.16'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.17'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.18'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.19'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.20'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.21'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.22'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.23'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.24'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.25'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='pci.26'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='usb'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='ide'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e3:60:12'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target dev='tap4b48043a-81'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='net0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <console type='pty' tty='/dev/pts/0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <source path='/dev/pts/0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='serial0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='input0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='input1'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='input2'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='video0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='watchdog0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </watchdog>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='balloon0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <alias name='rng0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <label>system_u:system_r:svirt_t:s0:c786,c875</label>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c786,c875</imagelabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <label>+107:+107</label>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <imagelabel>+107:+107</imagelabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </seclabel>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.386 243456 INFO nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tap784aca10-13 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the live domain config.#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.386 243456 DEBUG nova.virt.libvirt.vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.387 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.388 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.388 243456 DEBUG os_vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.390 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap784aca10-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.396 243456 INFO os_vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.397 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:37:49</nova:creationTime>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:37:49 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:49 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:37:49 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.423 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:24:67 10.100.0.20', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=784aca10-13b2-42d5-9828-68914533af46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.424 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 784aca10-13b2-42d5-9828-68914533af46 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 unbound from our chassis#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.425 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.426 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6976bd1c-7d9c-4de5-aff4-737e6dcb6ec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.427 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 namespace which is not needed anymore#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : haproxy version is 2.8.14-c23fe91
Feb 28 05:37:49 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : path to executable is /usr/sbin/haproxy
Feb 28 05:37:49 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [WARNING]  (362814) : Exiting Master process...
Feb 28 05:37:49 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [WARNING]  (362814) : Exiting Master process...
Feb 28 05:37:49 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [ALERT]    (362814) : Current worker (362816) exited with code 143 (Terminated)
Feb 28 05:37:49 np0005634017 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [WARNING]  (362814) : All workers exited. Exiting... (0)
Feb 28 05:37:49 np0005634017 systemd[1]: libpod-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0.scope: Deactivated successfully.
Feb 28 05:37:49 np0005634017 podman[363909]: 2026-02-28 10:37:49.576077466 +0000 UTC m=+0.052475808 container died cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:37:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0-userdata-shm.mount: Deactivated successfully.
Feb 28 05:37:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7f6de54db0465fd2bcf02f33b89700b7eb0c2eaa14d4e03df2211f39fd0f3421-merged.mount: Deactivated successfully.
Feb 28 05:37:49 np0005634017 podman[363909]: 2026-02-28 10:37:49.632132674 +0000 UTC m=+0.108531026 container cleanup cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:37:49 np0005634017 systemd[1]: libpod-conmon-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0.scope: Deactivated successfully.
Feb 28 05:37:49 np0005634017 podman[363942]: 2026-02-28 10:37:49.707770056 +0000 UTC m=+0.053571188 container remove cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.713 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b739e22d-c45b-47d8-9839-d122ec682dd0]: (4, ('Sat Feb 28 10:37:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 (cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0)\ncf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0\nSat Feb 28 10:37:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 (cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0)\ncf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7dfb7c-a773-4cc4-a81c-be402cf49bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.717 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:49 np0005634017 kernel: tap9fbbf27e-00: left promiscuous mode
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.731 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cfc522-1654-426a-8ade-3ec7b759d790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b601236f-b080-4e64-934e-ba4e8a7f84ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51bda20e-c63c-4a91-bafd-aada1718e874]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.773 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5878ec5-66ea-45c0-9682-56a31b237703]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652763, 'reachable_time': 41643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363964, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 systemd[1]: run-netns-ovnmeta\x2d9fbbf27e\x2d04ea\x2d4f06\x2db1ef\x2d3ac4b98db361.mount: Deactivated successfully.
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.778 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:37:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.779 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d22dc1-830a-45fb-8e5f-daf550d77145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.884 243456 DEBUG nova.network.neutron [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updated VIF entry in instance network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.885 243456 DEBUG nova.network.neutron [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:37:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.953 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.954 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-unplugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.955 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.955 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.955 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.956 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-unplugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.956 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-unplugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.956 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.957 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.957 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.958 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.958 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:49 np0005634017 nova_compute[243452]: 2026-02-28 10:37:49.958 243456 WARNING nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.208 243456 DEBUG nova.network.neutron [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.229 243456 INFO nova.compute.manager [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 2.75 seconds to deallocate network for instance.#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.278 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.279 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.353 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.379368761 +0000 UTC m=+0.053230819 container create 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.385 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-unplugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-unplugged-fbc01aec-00f6-4ce9-960c-352295a6c18e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-unplugged-fbc01aec-00f6-4ce9-960c-352295a6c18e for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-deleted-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-unplugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-unplugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-unplugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-deleted-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.396 243456 DEBUG oslo_concurrency.processutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:50 np0005634017 systemd[1]: Started libpod-conmon-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope.
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.358919942 +0000 UTC m=+0.032782020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:37:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:37:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:37:50 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:37:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.497868738 +0000 UTC m=+0.171730886 container init 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.508009425 +0000 UTC m=+0.181871493 container start 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.511485193 +0000 UTC m=+0.185347271 container attach 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:37:50 np0005634017 systemd[1]: libpod-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope: Deactivated successfully.
Feb 28 05:37:50 np0005634017 elated_roentgen[364055]: 167 167
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.518527763 +0000 UTC m=+0.192389821 container died 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:37:50 np0005634017 conmon[364055]: conmon 58600fd143cb0e71b909 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope/container/memory.events
Feb 28 05:37:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dc9d296b531044f456d24b2daed4db1258fc7f61a8073aa82b6690a7df849cd3-merged.mount: Deactivated successfully.
Feb 28 05:37:50 np0005634017 podman[364038]: 2026-02-28 10:37:50.559524584 +0000 UTC m=+0.233386642 container remove 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:37:50 np0005634017 systemd[1]: libpod-conmon-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope: Deactivated successfully.
Feb 28 05:37:50 np0005634017 podman[364097]: 2026-02-28 10:37:50.742804306 +0000 UTC m=+0.049826622 container create a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:37:50 np0005634017 systemd[1]: Started libpod-conmon-a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb.scope.
Feb 28 05:37:50 np0005634017 podman[364097]: 2026-02-28 10:37:50.724658172 +0000 UTC m=+0.031680518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:37:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:50 np0005634017 podman[364097]: 2026-02-28 10:37:50.863267038 +0000 UTC m=+0.170289384 container init a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:37:50 np0005634017 podman[364097]: 2026-02-28 10:37:50.875023212 +0000 UTC m=+0.182045528 container start a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:37:50 np0005634017 podman[364097]: 2026-02-28 10:37:50.878569162 +0000 UTC m=+0.185591498 container attach a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:37:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 99 KiB/s wr, 72 op/s
Feb 28 05:37:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770482847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.980 243456 DEBUG oslo_concurrency.processutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:50 np0005634017 nova_compute[243452]: 2026-02-28 10:37:50.990 243456 DEBUG nova.compute.provider_tree [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.007 243456 DEBUG nova.scheduler.client.report [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.027 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.051 243456 INFO nova.scheduler.client.report [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance ae0acd9c-7c2d-4e8b-84a4-d577eff31d02#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.108 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.239 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.256 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.257 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.257 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.258 243456 DEBUG nova.network.neutron [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.261 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.286 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.287 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.287 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.287 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.288 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:51 np0005634017 jovial_meninsky[364113]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:37:51 np0005634017 jovial_meninsky[364113]: --> All data devices are unavailable
Feb 28 05:37:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:51Z|01396|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 05:37:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:51Z|01397|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 05:37:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:51Z|01398|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 05:37:51 np0005634017 systemd[1]: libpod-a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb.scope: Deactivated successfully.
Feb 28 05:37:51 np0005634017 podman[364097]: 2026-02-28 10:37:51.407825134 +0000 UTC m=+0.714847460 container died a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d-merged.mount: Deactivated successfully.
Feb 28 05:37:51 np0005634017 podman[364097]: 2026-02-28 10:37:51.449783763 +0000 UTC m=+0.756806089 container remove a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:37:51 np0005634017 systemd[1]: libpod-conmon-a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb.scope: Deactivated successfully.
Feb 28 05:37:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605093675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.901 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:51 np0005634017 podman[364233]: 2026-02-28 10:37:51.951540396 +0000 UTC m=+0.052746455 container create 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.980 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.981 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.986 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:37:51 np0005634017 nova_compute[243452]: 2026-02-28 10:37:51.986 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:37:51 np0005634017 systemd[1]: Started libpod-conmon-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope.
Feb 28 05:37:52 np0005634017 podman[364233]: 2026-02-28 10:37:51.928580296 +0000 UTC m=+0.029786375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:37:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:52 np0005634017 podman[364233]: 2026-02-28 10:37:52.048203965 +0000 UTC m=+0.149410004 container init 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:37:52 np0005634017 podman[364233]: 2026-02-28 10:37:52.055849881 +0000 UTC m=+0.157055910 container start 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:37:52 np0005634017 podman[364233]: 2026-02-28 10:37:52.060604406 +0000 UTC m=+0.161810455 container attach 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:37:52 np0005634017 funny_goldberg[364250]: 167 167
Feb 28 05:37:52 np0005634017 systemd[1]: libpod-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope: Deactivated successfully.
Feb 28 05:37:52 np0005634017 conmon[364250]: conmon 2729a835efb67e968792 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope/container/memory.events
Feb 28 05:37:52 np0005634017 podman[364233]: 2026-02-28 10:37:52.064416364 +0000 UTC m=+0.165622403 container died 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:37:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a5d8aca32a9abedd0397a63da3ab918ec4ffe9f1ab242db9355d2f7f7dfcbf3e-merged.mount: Deactivated successfully.
Feb 28 05:37:52 np0005634017 podman[364233]: 2026-02-28 10:37:52.100482896 +0000 UTC m=+0.201688915 container remove 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:37:52 np0005634017 systemd[1]: libpod-conmon-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope: Deactivated successfully.
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.189 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.192 243456 INFO nova.compute.manager [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Terminating instance#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.193 243456 DEBUG nova.compute.manager [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:37:52 np0005634017 kernel: tap4b48043a-81 (unregistering): left promiscuous mode
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.232 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.234 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.89615597296506GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.234 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.234 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:52 np0005634017 NetworkManager[49805]: <info>  [1772275072.2377] device (tap4b48043a-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:52Z|01399|binding|INFO|Releasing lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac from this chassis (sb_readonly=0)
Feb 28 05:37:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:52Z|01400|binding|INFO|Setting lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac down in Southbound
Feb 28 05:37:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:52Z|01401|binding|INFO|Removing iface tap4b48043a-81 ovn-installed in OVS
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.252 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:60:12 10.100.0.5'], port_security=['fa:16:3e:e3:60:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76f6832b-9f40-4eef-bddf-580a90432b21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f90d2e4f-4906-4cb9-bc1a-6b3a4bcb9d24, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=4b48043a-8194-4cf4-bd7f-1c138d7960ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.254 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 4b48043a-8194-4cf4-bd7f-1c138d7960ac in datapath f91ad996-44c8-45ac-a5d6-208982ca2ce1 unbound from our chassis#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.257 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f91ad996-44c8-45ac-a5d6-208982ca2ce1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d576a0b5-a0c0-4f09-9abc-242547852e46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 namespace which is not needed anymore#033[00m
Feb 28 05:37:52 np0005634017 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Deactivated successfully.
Feb 28 05:37:52 np0005634017 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Consumed 16.926s CPU time.
Feb 28 05:37:52 np0005634017 systemd-machined[209480]: Machine qemu-165-instance-00000084 terminated.
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.311 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.312 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.312 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.313 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:37:52 np0005634017 podman[364277]: 2026-02-28 10:37:52.320096197 +0000 UTC m=+0.053363993 container create 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:37:52 np0005634017 systemd[1]: Started libpod-conmon-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope.
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.370 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:52 np0005634017 podman[364277]: 2026-02-28 10:37:52.293525024 +0000 UTC m=+0.026792870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:37:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:52 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : haproxy version is 2.8.14-c23fe91
Feb 28 05:37:52 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : path to executable is /usr/sbin/haproxy
Feb 28 05:37:52 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [WARNING]  (361609) : Exiting Master process...
Feb 28 05:37:52 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [WARNING]  (361609) : Exiting Master process...
Feb 28 05:37:52 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [ALERT]    (361609) : Current worker (361611) exited with code 143 (Terminated)
Feb 28 05:37:52 np0005634017 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [WARNING]  (361609) : All workers exited. Exiting... (0)
Feb 28 05:37:52 np0005634017 systemd[1]: libpod-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f.scope: Deactivated successfully.
Feb 28 05:37:52 np0005634017 podman[364314]: 2026-02-28 10:37:52.411603399 +0000 UTC m=+0.048638289 container died 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.428 243456 INFO nova.virt.libvirt.driver [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance destroyed successfully.#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.429 243456 DEBUG nova.objects.instance [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:52 np0005634017 podman[364277]: 2026-02-28 10:37:52.438974464 +0000 UTC m=+0.172242290 container init 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.442 243456 DEBUG nova.virt.libvirt.vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.443 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.444 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.445 243456 DEBUG os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f-userdata-shm.mount: Deactivated successfully.
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.448 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b48043a-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-16788202b082e39979c66011ee31c3dde40edd942458ccb06127574de32bd96f-merged.mount: Deactivated successfully.
Feb 28 05:37:52 np0005634017 podman[364277]: 2026-02-28 10:37:52.451351105 +0000 UTC m=+0.184618921 container start 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 podman[364277]: 2026-02-28 10:37:52.456310725 +0000 UTC m=+0.189578531 container attach 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.457 243456 INFO os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81')#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.459 243456 DEBUG nova.virt.libvirt.vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.459 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:52 np0005634017 podman[364314]: 2026-02-28 10:37:52.460049951 +0000 UTC m=+0.097084841 container cleanup 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.461 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.462 243456 DEBUG os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap784aca10-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.472 243456 DEBUG nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-deleted-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.472 243456 INFO nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Neutron deleted interface 784aca10-13b2-42d5-9828-68914533af46; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.472 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:52 np0005634017 systemd[1]: libpod-conmon-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f.scope: Deactivated successfully.
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.475 243456 INFO os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.509 243456 DEBUG nova.objects.instance [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.532 243456 DEBUG nova.objects.instance [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168583835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 33 KiB/s wr, 61 op/s
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.954 243456 DEBUG nova.virt.libvirt.vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.955 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.956 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.966 243456 DEBUG nova.virt.libvirt.guest [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 28 05:37:52 np0005634017 podman[364358]: 2026-02-28 10:37:52.971317973 +0000 UTC m=+0.485755660 container remove 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.972 243456 DEBUG nova.virt.libvirt.guest [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface>not found in domain: <domain type='kvm'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <name>instance-00000084</name>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:36:29</nova:creationTime>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:37:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <memory unit='KiB'>131072</memory>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <vcpu placement='static'>1</vcpu>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <sysinfo type='smbios'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <entry name='manufacturer'>RDO</entry>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <entry name='product'>OpenStack Compute</entry>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <entry name='serial'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <entry name='uuid'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <entry name='family'>Virtual Machine</entry>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <boot dev='hd'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <smbios mode='sysinfo'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <vmcoreinfo state='on'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <cpu mode='host-model' check='partial'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <clock offset='utc'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <timer name='pit' tickpolicy='delay'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <timer name='hpet' present='no'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <on_poweroff>destroy</on_poweroff>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <on_reboot>restart</on_reboot>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <on_crash>destroy</on_crash>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <disk type='network' device='disk'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target dev='vda' bus='virtio'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <disk type='network' device='cdrom'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <driver name='qemu' type='raw' cache='none'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <auth username='openstack'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <host name='192.168.122.100' port='6789'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target dev='sda' bus='sata'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <readonly/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='0' model='pcie-root'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='1' port='0x10'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='2' port='0x11'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='3' port='0x12'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='4' port='0x13'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='5' port='0x14'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='6' port='0x15'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='7' port='0x16'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='8' port='0x17'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='9' port='0x18'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='10' port='0x19'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='11' port='0x1a'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='12' port='0x1b'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='13' port='0x1c'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='14' port='0x1d'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='15' port='0x1e'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='16' port='0x1f'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='17' port='0x20'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='18' port='0x21'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='19' port='0x22'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='20' port='0x23'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='21' port='0x24'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='22' port='0x25'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='23' port='0x26'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='24' port='0x27'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-root-port'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target chassis='25' port='0x28'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model name='pcie-pci-bridge'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <controller type='sata' index='0'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </controller>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <interface type='ethernet'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <mac address='fa:16:3e:e3:60:12'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target dev='tap4b48043a-81'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model type='virtio'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <driver name='vhost' rx_queue_size='512'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <mtu size='1442'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <serial type='pty'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target type='isa-serial' port='0'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:        <model name='isa-serial'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      </target>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <console type='pty'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <target type='serial' port='0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </console>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <input type='tablet' bus='usb'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='usb' bus='0' port='1'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </input>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <input type='mouse' bus='ps2'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <input type='keyboard' bus='ps2'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <listen type='address' address='::0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </graphics>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <audio id='1' type='none'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <model type='virtio' heads='1' primary='yes'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <watchdog model='itco' action='reset'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <memballoon model='virtio'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <stats period='10'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <rng model='virtio'>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <backend model='random'>/dev/urandom</backend>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:37:52 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:37:52 np0005634017 nova_compute[243452]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.972 243456 WARNING nova.virt.libvirt.driver [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Detaching interface fa:16:3e:6a:24:67 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap784aca10-13' not found.#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.973 243456 DEBUG nova.virt.libvirt.vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.974 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.974 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.975 243456 DEBUG os_vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.975 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4f9c0e-7dfd-4b82-be62-a1f016ac9bc6]: (4, ('Sat Feb 28 10:37:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 (4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f)\n4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f\nSat Feb 28 10:37:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 (4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f)\n4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.978 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1642d9aa-ca59-4028-acb6-791549ec2569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.978 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap784aca10-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.979 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf91ad996-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:37:52 np0005634017 kernel: tapf91ad996-40: left promiscuous mode
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.985 243456 INFO os_vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.987 243456 DEBUG nova.virt.libvirt.guest [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:creationTime>2026-02-28 10:37:52</nova:creationTime>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:flavor name="m1.nano">
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:memory>128</nova:memory>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:disk>1</nova:disk>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:swap>0</nova:swap>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:vcpus>1</nova:vcpus>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </nova:flavor>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:owner>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </nova:owner>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  <nova:ports>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 05:37:52 np0005634017 nova_compute[243452]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:    </nova:port>
Feb 28 05:37:52 np0005634017 nova_compute[243452]:  </nova:ports>
Feb 28 05:37:52 np0005634017 nova_compute[243452]: </nova:instance>
Feb 28 05:37:52 np0005634017 nova_compute[243452]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.991 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.992 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff11da8-f540-4dca-a5b0-64015698b3a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.995 243456 DEBUG nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.996 243456 DEBUG nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:52 np0005634017 nova_compute[243452]: 2026-02-28 10:37:52.996 243456 DEBUG oslo_concurrency.lockutils [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.002 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.004 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27c942a5-651c-4086-9006-3d510ddae43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.006 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ac9759-e44d-407d-8ec1-ef76141678cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.021 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[98b7b718-3c0c-49a8-ad54-87a1ea7310fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649966, 'reachable_time': 18037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364414, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.026 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.026 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1822b56e-4058-49b2-8dcf-4804a51c3d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 systemd[1]: run-netns-ovnmeta\x2df91ad996\x2d44c8\x2d45ac\x2da5d6\x2d208982ca2ce1.mount: Deactivated successfully.
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]: {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:    "0": [
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:        {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "devices": [
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "/dev/loop3"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            ],
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_name": "ceph_lv0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_size": "21470642176",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "name": "ceph_lv0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "tags": {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cluster_name": "ceph",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.crush_device_class": "",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.encrypted": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.objectstore": "bluestore",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osd_id": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.type": "block",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.vdo": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.with_tpm": "0"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            },
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "type": "block",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "vg_name": "ceph_vg0"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:        }
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:    ],
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:    "1": [
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:        {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "devices": [
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "/dev/loop4"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            ],
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_name": "ceph_lv1",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_size": "21470642176",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "name": "ceph_lv1",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "tags": {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cluster_name": "ceph",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.crush_device_class": "",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.encrypted": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.objectstore": "bluestore",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osd_id": "1",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.type": "block",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.vdo": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.with_tpm": "0"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            },
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "type": "block",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "vg_name": "ceph_vg1"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:        }
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:    ],
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:    "2": [
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:        {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "devices": [
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "/dev/loop5"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            ],
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_name": "ceph_lv2",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_size": "21470642176",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "name": "ceph_lv2",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "tags": {
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.cluster_name": "ceph",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.crush_device_class": "",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.encrypted": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.objectstore": "bluestore",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osd_id": "2",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.type": "block",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.vdo": "0",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:                "ceph.with_tpm": "0"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            },
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "type": "block",
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:            "vg_name": "ceph_vg2"
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:        }
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]:    ]
Feb 28 05:37:53 np0005634017 infallible_shamir[364320]: }
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.053 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.054 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:53 np0005634017 systemd[1]: libpod-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 conmon[364320]: conmon 3755428e7550ea3b0bf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope/container/memory.events
Feb 28 05:37:53 np0005634017 podman[364277]: 2026-02-28 10:37:53.087898795 +0000 UTC m=+0.821166641 container died 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:37:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976-merged.mount: Deactivated successfully.
Feb 28 05:37:53 np0005634017 podman[364277]: 2026-02-28 10:37:53.139209209 +0000 UTC m=+0.872477055 container remove 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:37:53 np0005634017 systemd[1]: libpod-conmon-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.162 243456 DEBUG nova.compute.manager [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-unplugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.164 243456 DEBUG oslo_concurrency.lockutils [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.164 243456 DEBUG oslo_concurrency.lockutils [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.164 243456 DEBUG oslo_concurrency.lockutils [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.165 243456 DEBUG nova.compute.manager [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-unplugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.165 243456 DEBUG nova.compute.manager [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-unplugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.257 243456 INFO nova.virt.libvirt.driver [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deleting instance files /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_del#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.258 243456 INFO nova.virt.libvirt.driver [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deletion of /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_del complete#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.315 243456 INFO nova.compute.manager [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.315 243456 DEBUG oslo.service.loopingcall [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.316 243456 DEBUG nova.compute.manager [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.316 243456 DEBUG nova.network.neutron [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.501 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.503 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.504 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.504 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.504 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.506 243456 INFO nova.compute.manager [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Terminating instance#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.507 243456 DEBUG nova.compute.manager [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.508 243456 INFO nova.network.neutron [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Port 784aca10-13b2-42d5-9828-68914533af46 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.508 243456 DEBUG nova.network.neutron [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.527 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.530 243456 DEBUG oslo_concurrency.lockutils [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.530 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:53 np0005634017 kernel: tapb8c427fe-78 (unregistering): left promiscuous mode
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.561 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-784aca10-13b2-42d5-9828-68914533af46" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:53 np0005634017 NetworkManager[49805]: <info>  [1772275073.5625] device (tapb8c427fe-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:53Z|01402|binding|INFO|Releasing lport b8c427fe-78c5-4d60-9c44-68985f50b598 from this chassis (sb_readonly=0)
Feb 28 05:37:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:53Z|01403|binding|INFO|Setting lport b8c427fe-78c5-4d60-9c44-68985f50b598 down in Southbound
Feb 28 05:37:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:53Z|01404|binding|INFO|Removing iface tapb8c427fe-78 ovn-installed in OVS
Feb 28 05:37:53 np0005634017 kernel: tapffaef000-52 (unregistering): left promiscuous mode
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 NetworkManager[49805]: <info>  [1772275073.5898] device (tapffaef000-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.590 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2a:81 10.100.0.13'], port_security=['fa:16:3e:d6:2a:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8c427fe-78c5-4d60-9c44-68985f50b598) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.591 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8c427fe-78c5-4d60-9c44-68985f50b598 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d unbound from our chassis#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.593 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.595 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a51da2d2-f78c-496a-86ae-2e1833f9e1b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.598 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d namespace which is not needed anymore#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:53Z|01405|binding|INFO|Releasing lport ffaef000-523c-4637-99e6-2cc96b907c15 from this chassis (sb_readonly=0)
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:53Z|01406|binding|INFO|Setting lport ffaef000-523c-4637-99e6-2cc96b907c15 down in Southbound
Feb 28 05:37:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:37:53Z|01407|binding|INFO|Removing iface tapffaef000-52 ovn-installed in OVS
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.616 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], port_security=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:b75a/64 2001:db8::f816:3eff:fe28:b75a/64', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ffaef000-523c-4637-99e6-2cc96b907c15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.634320974 +0000 UTC m=+0.046612811 container create 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:37:53 np0005634017 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Consumed 15.521s CPU time.
Feb 28 05:37:53 np0005634017 systemd-machined[209480]: Machine qemu-166-instance-00000085 terminated.
Feb 28 05:37:53 np0005634017 systemd[1]: Started libpod-conmon-61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264.scope.
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.612835515 +0000 UTC m=+0.025127393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:37:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.731888078 +0000 UTC m=+0.144179945 container init 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.738952428 +0000 UTC m=+0.151244265 container start 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:37:53 np0005634017 NetworkManager[49805]: <info>  [1772275073.7432] manager: (tapffaef000-52): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.743942129 +0000 UTC m=+0.156233966 container attach 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:37:53 np0005634017 condescending_mayer[364529]: 167 167
Feb 28 05:37:53 np0005634017 systemd[1]: libpod-61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.759271544 +0000 UTC m=+0.171563381 container died 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.759 243456 INFO nova.virt.libvirt.driver [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance destroyed successfully.#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.762 243456 DEBUG nova.objects.instance [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 555d381e-ed8a-4a73-9f43-f79c0b0a0afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:37:53 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : haproxy version is 2.8.14-c23fe91
Feb 28 05:37:53 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : path to executable is /usr/sbin/haproxy
Feb 28 05:37:53 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [WARNING]  (361969) : Exiting Master process...
Feb 28 05:37:53 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [WARNING]  (361969) : Exiting Master process...
Feb 28 05:37:53 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [ALERT]    (361969) : Current worker (361973) exited with code 143 (Terminated)
Feb 28 05:37:53 np0005634017 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [WARNING]  (361969) : All workers exited. Exiting... (0)
Feb 28 05:37:53 np0005634017 systemd[1]: libpod-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.774 243456 DEBUG nova.virt.libvirt.vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:42Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:53 np0005634017 podman[364533]: 2026-02-28 10:37:53.774492425 +0000 UTC m=+0.068664966 container died 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.775 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.775 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.776 243456 DEBUG os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.778 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c427fe-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.790 243456 INFO os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78')#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.792 243456 DEBUG nova.virt.libvirt.vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:42Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.792 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.793 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.794 243456 DEBUG os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.796 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.797 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffaef000-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-524088f97a17eeda9184aa2c185be8645e35f5e61298abfe94e599e260c4602a-merged.mount: Deactivated successfully.
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.802 243456 INFO os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52')#033[00m
Feb 28 05:37:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21-userdata-shm.mount: Deactivated successfully.
Feb 28 05:37:53 np0005634017 podman[364533]: 2026-02-28 10:37:53.816016781 +0000 UTC m=+0.110189302 container cleanup 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 05:37:53 np0005634017 podman[364490]: 2026-02-28 10:37:53.820897069 +0000 UTC m=+0.233188906 container remove 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 28 05:37:53 np0005634017 systemd[1]: libpod-conmon-61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 systemd[1]: libpod-conmon-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21.scope: Deactivated successfully.
Feb 28 05:37:53 np0005634017 podman[364611]: 2026-02-28 10:37:53.887086544 +0000 UTC m=+0.048546506 container remove 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.891 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38e8ebdf-712e-4f20-bdf7-d72769d944a7]: (4, ('Sat Feb 28 10:37:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d (86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21)\n86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21\nSat Feb 28 10:37:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d (86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21)\n86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.894 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c83dea-4e94-448b-bb6a-16aa5100e83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.895 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:53 np0005634017 kernel: tape68f9d98-c0: left promiscuous mode
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 nova_compute[243452]: 2026-02-28 10:37:53.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[386cd538-ffd4-4660-af6f-9103333a3514]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.928 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[526c98e8-8cb7-4903-bd22-876529b54f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.930 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2a94e1-1b39-4c8b-971c-f6aea4bf0ebf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.959 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50383623-89fb-4304-8504-00f1129d2579]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650641, 'reachable_time': 18882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364638, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.961 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.961 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5237a2ba-286b-4a71-bfe4-2d9f1c9ad2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.962 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ffaef000-523c-4637-99e6-2cc96b907c15 in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.965 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea17539-5d52-49ac-ac04-271a7e4c9644]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.965 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c namespace which is not needed anymore#033[00m
Feb 28 05:37:53 np0005634017 podman[364636]: 2026-02-28 10:37:53.99142852 +0000 UTC m=+0.048984819 container create c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:37:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-377aba4e407f3a2f65f2a4eb7e0e22e21a70346077c645cb2f77ea779d688910-merged.mount: Deactivated successfully.
Feb 28 05:37:54 np0005634017 systemd[1]: run-netns-ovnmeta\x2de68f9d98\x2dc075\x2d4ed0\x2db1ee\x2def05de1c055d.mount: Deactivated successfully.
Feb 28 05:37:54 np0005634017 systemd[1]: Started libpod-conmon-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope.
Feb 28 05:37:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:37:54 np0005634017 podman[364636]: 2026-02-28 10:37:53.967702598 +0000 UTC m=+0.025258967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:37:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:37:54 np0005634017 podman[364636]: 2026-02-28 10:37:54.088189821 +0000 UTC m=+0.145746120 container init c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:37:54 np0005634017 podman[364636]: 2026-02-28 10:37:54.097764812 +0000 UTC m=+0.155321111 container start c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.099 243456 INFO nova.virt.libvirt.driver [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deleting instance files /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_del#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.100 243456 INFO nova.virt.libvirt.driver [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deletion of /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_del complete#033[00m
Feb 28 05:37:54 np0005634017 podman[364636]: 2026-02-28 10:37:54.106941312 +0000 UTC m=+0.164497601 container attach c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:37:54 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : haproxy version is 2.8.14-c23fe91
Feb 28 05:37:54 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : path to executable is /usr/sbin/haproxy
Feb 28 05:37:54 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [WARNING]  (362054) : Exiting Master process...
Feb 28 05:37:54 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [ALERT]    (362054) : Current worker (362056) exited with code 143 (Terminated)
Feb 28 05:37:54 np0005634017 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [WARNING]  (362054) : All workers exited. Exiting... (0)
Feb 28 05:37:54 np0005634017 systemd[1]: libpod-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e.scope: Deactivated successfully.
Feb 28 05:37:54 np0005634017 podman[364674]: 2026-02-28 10:37:54.139243827 +0000 UTC m=+0.062369218 container died 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.151 243456 INFO nova.compute.manager [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.152 243456 DEBUG oslo.service.loopingcall [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.152 243456 DEBUG nova.compute.manager [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.152 243456 DEBUG nova.network.neutron [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:37:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e-userdata-shm.mount: Deactivated successfully.
Feb 28 05:37:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8955cce884765a04034c460f9c74e35c1e8bf35ef5a4315568d401093bba957d-merged.mount: Deactivated successfully.
Feb 28 05:37:54 np0005634017 podman[364674]: 2026-02-28 10:37:54.168904737 +0000 UTC m=+0.092030118 container cleanup 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:37:54 np0005634017 systemd[1]: libpod-conmon-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e.scope: Deactivated successfully.
Feb 28 05:37:54 np0005634017 podman[364702]: 2026-02-28 10:37:54.25017677 +0000 UTC m=+0.055884254 container remove 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.255 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f6dff-e7e2-4d10-828e-b51a2c8839c5]: (4, ('Sat Feb 28 10:37:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c (54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e)\n54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e\nSat Feb 28 10:37:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c (54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e)\n54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72b44cd8-1b62-4f2d-be2c-201a3f3c61cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.259 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:54 np0005634017 kernel: tapbf5aa4f8-b0: left promiscuous mode
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.279 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55333be1-d50b-4882-b20e-474a126bd171]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8a6de2-81aa-4b37-a390-478e71e25e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.292 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8168eb-b2c0-4d36-836a-bf796a8734e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.309 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4fb647-ef1e-44f6-a2bb-df5b69c0fb4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650740, 'reachable_time': 21107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364727, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 systemd[1]: run-netns-ovnmeta\x2dbf5aa4f8\x2db85b\x2d496f\x2da7ad\x2d1ab36250968c.mount: Deactivated successfully.
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.311 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:37:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.311 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[25427bfa-40d5-49bd-b1d0-78f26cb7f514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.574 243456 DEBUG nova.network.neutron [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.603 243456 INFO nova.compute.manager [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 1.29 seconds to deallocate network for instance.#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.647 243456 DEBUG nova.compute.manager [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-unplugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.647 243456 DEBUG oslo_concurrency.lockutils [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.648 243456 DEBUG oslo_concurrency.lockutils [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.648 243456 DEBUG oslo_concurrency.lockutils [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.648 243456 DEBUG nova.compute.manager [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-unplugged-b8c427fe-78c5-4d60-9c44-68985f50b598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.649 243456 DEBUG nova.compute.manager [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-unplugged-b8c427fe-78c5-4d60-9c44-68985f50b598 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.650 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.651 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:54 np0005634017 nova_compute[243452]: 2026-02-28 10:37:54.744 243456 DEBUG oslo_concurrency.processutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:54 np0005634017 lvm[364789]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:37:54 np0005634017 lvm[364789]: VG ceph_vg0 finished
Feb 28 05:37:54 np0005634017 lvm[364791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:37:54 np0005634017 lvm[364791]: VG ceph_vg1 finished
Feb 28 05:37:54 np0005634017 lvm[364792]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:37:54 np0005634017 lvm[364792]: VG ceph_vg2 finished
Feb 28 05:37:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 255 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 21 KiB/s wr, 69 op/s
Feb 28 05:37:54 np0005634017 serene_hermann[364668]: {}
Feb 28 05:37:55 np0005634017 systemd[1]: libpod-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope: Deactivated successfully.
Feb 28 05:37:55 np0005634017 podman[364636]: 2026-02-28 10:37:55.006151124 +0000 UTC m=+1.063707413 container died c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:37:55 np0005634017 systemd[1]: libpod-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope: Consumed 1.246s CPU time.
Feb 28 05:37:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4-merged.mount: Deactivated successfully.
Feb 28 05:37:55 np0005634017 podman[364636]: 2026-02-28 10:37:55.053471725 +0000 UTC m=+1.111028004 container remove c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:37:55 np0005634017 systemd[1]: libpod-conmon-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope: Deactivated successfully.
Feb 28 05:37:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:37:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:37:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:37:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.257 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.261 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.262 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.262 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.263 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.263 243456 WARNING nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.264 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.265 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.265 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.266 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.266 243456 DEBUG nova.network.neutron [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:37:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423195585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.329 243456 DEBUG oslo_concurrency.processutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.336 243456 DEBUG nova.compute.provider_tree [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.365 243456 DEBUG nova.scheduler.client.report [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.475 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.587 243456 INFO nova.scheduler.client.report [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.735 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.930 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.930 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:55 np0005634017 nova_compute[243452]: 2026-02-28 10:37:55.998 243456 DEBUG oslo_concurrency.lockutils [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.124208) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076124252, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 401, "num_deletes": 251, "total_data_size": 249007, "memory_usage": 256360, "flush_reason": "Manual Compaction"}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076129281, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 246630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47001, "largest_seqno": 47401, "table_properties": {"data_size": 244229, "index_size": 501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6185, "raw_average_key_size": 19, "raw_value_size": 239345, "raw_average_value_size": 741, "num_data_blocks": 22, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275064, "oldest_key_time": 1772275064, "file_creation_time": 1772275076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5143 microseconds, and 1722 cpu microseconds.
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.129345) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 246630 bytes OK
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.129376) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131128) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131149) EVENT_LOG_v1 {"time_micros": 1772275076131142, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131174) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 246429, prev total WAL file size 246429, number of live WAL files 2.
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(240KB)], [107(11MB)]
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076131806, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11876758, "oldest_snapshot_seqno": -1}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6826 keys, 10089030 bytes, temperature: kUnknown
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076191759, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10089030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10042519, "index_size": 28351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 177980, "raw_average_key_size": 26, "raw_value_size": 9919562, "raw_average_value_size": 1453, "num_data_blocks": 1108, "num_entries": 6826, "num_filter_entries": 6826, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.192307) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10089030 bytes
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.193953) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 168.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(89.1) write-amplify(40.9) OK, records in: 7339, records dropped: 513 output_compression: NoCompression
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.193987) EVENT_LOG_v1 {"time_micros": 1772275076193971, "job": 64, "event": "compaction_finished", "compaction_time_micros": 60071, "compaction_time_cpu_micros": 37374, "output_level": 6, "num_output_files": 1, "total_output_size": 10089030, "num_input_records": 7339, "num_output_records": 6826, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076194279, "job": 64, "event": "table_file_deletion", "file_number": 109}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076196602, "job": 64, "event": "table_file_deletion", "file_number": 107}
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.239 243456 DEBUG nova.network.neutron [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.255 243456 INFO nova.compute.manager [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 2.10 seconds to deallocate network for instance.#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.299 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.299 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.353 243456 DEBUG oslo_concurrency.processutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.737 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-deleted-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.737 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.738 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 WARNING nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-unplugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-unplugged-ffaef000-523c-4637-99e6-2cc96b907c15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 WARNING nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-unplugged-ffaef000-523c-4637-99e6-2cc96b907c15 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 WARNING nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.743 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-deleted-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.778 243456 DEBUG nova.network.neutron [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updated VIF entry in instance network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.778 243456 DEBUG nova.network.neutron [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.800 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:37:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366421356' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.930 243456 DEBUG oslo_concurrency.processutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.937 243456 DEBUG nova.compute.provider_tree [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.952 243456 DEBUG nova.scheduler.client.report [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:37:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 204 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 65 op/s
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.972 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:56 np0005634017 nova_compute[243452]: 2026-02-28 10:37:56.993 243456 INFO nova.scheduler.client.report [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd#033[00m
Feb 28 05:37:57 np0005634017 nova_compute[243452]: 2026-02-28 10:37:57.055 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:57 np0005634017 nova_compute[243452]: 2026-02-28 10:37:57.331 243456 DEBUG nova.compute.manager [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-deleted-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:37:57 np0005634017 nova_compute[243452]: 2026-02-28 10:37:57.332 243456 INFO nova.compute.manager [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Neutron deleted interface b8c427fe-78c5-4d60-9c44-68985f50b598; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:37:57 np0005634017 nova_compute[243452]: 2026-02-28 10:37:57.332 243456 DEBUG nova.network.neutron [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 28 05:37:57 np0005634017 nova_compute[243452]: 2026-02-28 10:37:57.336 243456 DEBUG nova.compute.manager [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Detach interface failed, port_id=b8c427fe-78c5-4d60-9c44-68985f50b598, reason: Instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:37:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:37:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:37:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:37:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:37:58 np0005634017 nova_compute[243452]: 2026-02-28 10:37:58.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:37:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:37:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 153 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 20 KiB/s wr, 88 op/s
Feb 28 05:37:58 np0005634017 nova_compute[243452]: 2026-02-28 10:37:58.997 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275063.9958286, cbf94077-46c2-457d-8486-25f3dd0517b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:37:58 np0005634017 nova_compute[243452]: 2026-02-28 10:37:58.998 243456 INFO nova.compute.manager [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:37:59 np0005634017 nova_compute[243452]: 2026-02-28 10:37:59.023 243456 DEBUG nova.compute.manager [None req-f7a2e2be-e8ae-4a75-b743-ddb6950af602 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:37:59 np0005634017 nova_compute[243452]: 2026-02-28 10:37:59.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:38:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 18 KiB/s wr, 71 op/s
Feb 28 05:38:01 np0005634017 nova_compute[243452]: 2026-02-28 10:38:01.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:01 np0005634017 nova_compute[243452]: 2026-02-28 10:38:01.473 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:02 np0005634017 nova_compute[243452]: 2026-02-28 10:38:02.085 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275067.0843432, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:02 np0005634017 nova_compute[243452]: 2026-02-28 10:38:02.086 243456 INFO nova.compute.manager [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:38:02 np0005634017 nova_compute[243452]: 2026-02-28 10:38:02.107 243456 DEBUG nova.compute.manager [None req-d522a948-91b5-4cf5-9aad-8e231bedb056 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.3 KiB/s wr, 55 op/s
Feb 28 05:38:03 np0005634017 nova_compute[243452]: 2026-02-28 10:38:03.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:04 np0005634017 nova_compute[243452]: 2026-02-28 10:38:04.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Feb 28 05:38:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1023 B/s wr, 44 op/s
Feb 28 05:38:07 np0005634017 nova_compute[243452]: 2026-02-28 10:38:07.426 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275072.424708, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:07 np0005634017 nova_compute[243452]: 2026-02-28 10:38:07.426 243456 INFO nova.compute.manager [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:38:07 np0005634017 nova_compute[243452]: 2026-02-28 10:38:07.453 243456 DEBUG nova.compute.manager [None req-36d2661b-b219-45bf-9a1d-76467cc932a0 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:08 np0005634017 nova_compute[243452]: 2026-02-28 10:38:08.756 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275073.7554436, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:08 np0005634017 nova_compute[243452]: 2026-02-28 10:38:08.756 243456 INFO nova.compute.manager [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:38:08 np0005634017 nova_compute[243452]: 2026-02-28 10:38:08.781 243456 DEBUG nova.compute.manager [None req-fd34f180-230d-412f-b82a-e0572084e007 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:08 np0005634017 nova_compute[243452]: 2026-02-28 10:38:08.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 682 B/s wr, 31 op/s
Feb 28 05:38:09 np0005634017 nova_compute[243452]: 2026-02-28 10:38:09.522 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:38:12 np0005634017 podman[364875]: 2026-02-28 10:38:12.161014492 +0000 UTC m=+0.088537548 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 05:38:12 np0005634017 podman[364874]: 2026-02-28 10:38:12.188890201 +0000 UTC m=+0.117581250 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 05:38:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:38:13 np0005634017 nova_compute[243452]: 2026-02-28 10:38:13.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:14 np0005634017 nova_compute[243452]: 2026-02-28 10:38:14.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:38:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.602 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.603 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.622 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.737 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.738 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.746 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.746 243456 INFO nova.compute.claims [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.859 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:18 np0005634017 nova_compute[243452]: 2026-02-28 10:38:18.899 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 05:38:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:38:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474547833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.465 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.473 243456 DEBUG nova.compute.provider_tree [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.499 243456 DEBUG nova.scheduler.client.report [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.557 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.558 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.616 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.617 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.655 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.676 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.760 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.763 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.763 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Creating image(s)#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.795 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.815 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.839 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.843 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.900 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.901 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.902 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.903 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.930 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:19 np0005634017 nova_compute[243452]: 2026-02-28 10:38:19.934 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 99d76f35-045f-485f-8e24-62b0c9293154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.035 243456 DEBUG nova.policy [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.175 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 99d76f35-045f-485f-8e24-62b0c9293154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.248 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.340 243456 DEBUG nova.objects.instance [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 99d76f35-045f-485f-8e24-62b0c9293154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.356 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.356 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Ensure instance console log exists: /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.357 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.358 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.358 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.926 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Successfully updated port: fc0b52d5-3577-4d09-bac1-192cb6c80057 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.942 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.943 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:20 np0005634017 nova_compute[243452]: 2026-02-28 10:38:20.943 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:38:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 164 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 421 KiB/s wr, 12 op/s
Feb 28 05:38:21 np0005634017 nova_compute[243452]: 2026-02-28 10:38:21.039 243456 DEBUG nova.compute.manager [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:21 np0005634017 nova_compute[243452]: 2026-02-28 10:38:21.040 243456 DEBUG nova.compute.manager [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing instance network info cache due to event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:38:21 np0005634017 nova_compute[243452]: 2026-02-28 10:38:21.040 243456 DEBUG oslo_concurrency.lockutils [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:21 np0005634017 nova_compute[243452]: 2026-02-28 10:38:21.289 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.170 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.206 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.206 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance network_info: |[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.206 243456 DEBUG oslo_concurrency.lockutils [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.207 243456 DEBUG nova.network.neutron [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.210 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start _get_guest_xml network_info=[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.217 243456 WARNING nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.224 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.225 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.228 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.228 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.229 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.229 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.229 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.234 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:38:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862684305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.811 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.841 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:22 np0005634017 nova_compute[243452]: 2026-02-28 10:38:22.847 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 176 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 532 KiB/s wr, 13 op/s
Feb 28 05:38:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:38:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692655547' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.429 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.433 243456 DEBUG nova.virt.libvirt.vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1194760241',display_name='tempest-TestNetworkBasicOps-server-1194760241',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1194760241',id=136,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMpYWEfQZAeIGdf778Rcwj2RHzc0cF5hcgXyiKvBbK6MzrGJ4l9QqwgzOi+/5IsYucdlZIzO5dS5nkEroUnidPMu1WEcs4ZeEyF4/2vdCruZF9XUmldWgh1Mor2eQK+6g==',key_name='tempest-TestNetworkBasicOps-1495280618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-0l56ndn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:19Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=99d76f35-045f-485f-8e24-62b0c9293154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.433 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.435 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.437 243456 DEBUG nova.objects.instance [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99d76f35-045f-485f-8e24-62b0c9293154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.454 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <uuid>99d76f35-045f-485f-8e24-62b0c9293154</uuid>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <name>instance-00000088</name>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1194760241</nova:name>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:38:22</nova:creationTime>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <nova:port uuid="fc0b52d5-3577-4d09-bac1-192cb6c80057">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <entry name="serial">99d76f35-045f-485f-8e24-62b0c9293154</entry>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <entry name="uuid">99d76f35-045f-485f-8e24-62b0c9293154</entry>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/99d76f35-045f-485f-8e24-62b0c9293154_disk">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/99d76f35-045f-485f-8e24-62b0c9293154_disk.config">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:68:09:7e"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <target dev="tapfc0b52d5-35"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/console.log" append="off"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:38:23 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:38:23 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:38:23 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:38:23 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.456 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Preparing to wait for external event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.457 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.457 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.458 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.459 243456 DEBUG nova.virt.libvirt.vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1194760241',display_name='tempest-TestNetworkBasicOps-server-1194760241',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1194760241',id=136,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMpYWEfQZAeIGdf778Rcwj2RHzc0cF5hcgXyiKvBbK6MzrGJ4l9QqwgzOi+/5IsYucdlZIzO5dS5nkEroUnidPMu1WEcs4ZeEyF4/2vdCruZF9XUmldWgh1Mor2eQK+6g==',key_name='tempest-TestNetworkBasicOps-1495280618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-0l56ndn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:19Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=99d76f35-045f-485f-8e24-62b0c9293154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.459 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.461 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.461 243456 DEBUG os_vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.463 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.470 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc0b52d5-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.471 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc0b52d5-35, col_values=(('external_ids', {'iface-id': 'fc0b52d5-3577-4d09-bac1-192cb6c80057', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:09:7e', 'vm-uuid': '99d76f35-045f-485f-8e24-62b0c9293154'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.473 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:23 np0005634017 NetworkManager[49805]: <info>  [1772275103.4747] manager: (tapfc0b52d5-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.477 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.481 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.482 243456 INFO os_vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.544 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.544 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.545 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:68:09:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.545 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Using config drive#033[00m
Feb 28 05:38:23 np0005634017 nova_compute[243452]: 2026-02-28 10:38:23.573 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:24 np0005634017 nova_compute[243452]: 2026-02-28 10:38:24.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:24 np0005634017 nova_compute[243452]: 2026-02-28 10:38:24.952 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Creating config drive at /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config#033[00m
Feb 28 05:38:24 np0005634017 nova_compute[243452]: 2026-02-28 10:38:24.958 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp97f7f97t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.093 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp97f7f97t" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.134 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.138 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config 99d76f35-045f-485f-8e24-62b0c9293154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.309 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config 99d76f35-045f-485f-8e24-62b0c9293154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.310 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deleting local config drive /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config because it was imported into RBD.#033[00m
Feb 28 05:38:25 np0005634017 kernel: tapfc0b52d5-35: entered promiscuous mode
Feb 28 05:38:25 np0005634017 NetworkManager[49805]: <info>  [1772275105.3812] manager: (tapfc0b52d5-35): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Feb 28 05:38:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:25Z|01408|binding|INFO|Claiming lport fc0b52d5-3577-4d09-bac1-192cb6c80057 for this chassis.
Feb 28 05:38:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:25Z|01409|binding|INFO|fc0b52d5-3577-4d09-bac1-192cb6c80057: Claiming fa:16:3e:68:09:7e 10.100.0.7
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.423 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '99d76f35-045f-485f-8e24-62b0c9293154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.426 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 bound to our chassis#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.429 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48d6bc14-7625-4769-ad1d-6c202dc94953#033[00m
Feb 28 05:38:25 np0005634017 systemd-udevd[365243]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:38:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:25Z|01410|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 ovn-installed in OVS
Feb 28 05:38:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:25Z|01411|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 up in Southbound
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c98c7463-1620-4fb2-9f85-5546cf3f0210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 systemd-machined[209480]: New machine qemu-169-instance-00000088.
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.443 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48d6bc14-71 in ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.446 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48d6bc14-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.446 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b39af200-4967-4b88-a6ed-701b8eaa8023]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.449 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1362a3cc-ba84-4e0d-8278-1d1b1a6ecfe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 NetworkManager[49805]: <info>  [1772275105.4518] device (tapfc0b52d5-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:38:25 np0005634017 NetworkManager[49805]: <info>  [1772275105.4527] device (tapfc0b52d5-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:38:25 np0005634017 systemd[1]: Started Virtual Machine qemu-169-instance-00000088.
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.464 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb928df-7406-4a8f-b214-2b9b8973312e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.481 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bf5ff7-a085-4e1e-9ab2-681e3e0ea595]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.512 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe03181-d1fb-498f-9f0c-95f813f08be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.519 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaebd37-eb34-46d3-a602-984d5c4a8a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 NetworkManager[49805]: <info>  [1772275105.5207] manager: (tap48d6bc14-70): new Veth device (/org/freedesktop/NetworkManager/Devices/583)
Feb 28 05:38:25 np0005634017 systemd-udevd[365247]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.555 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[43b0f5e7-6721-4e16-ae52-79ab4d9c8c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.560 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dff4fbcc-7eff-46c0-89fa-2cddbf1ad025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 NetworkManager[49805]: <info>  [1772275105.5898] device (tap48d6bc14-70): carrier: link connected
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.595 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ea55f712-9f08-4551-ad1e-4f19082491d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.615 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e36360e-df0d-4738-add5-108ca55f7903]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661287, 'reachable_time': 32128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365277, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.635 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af729dca-9bf8-4691-a61c-d1e61f2eec1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:e19e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661287, 'tstamp': 661287}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365278, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.659 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8aae10b-5360-4439-b8a7-6de7e46311c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661287, 'reachable_time': 32128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365279, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.695 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d007778b-4c14-4896-9bd5-ec059c2f1fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.768 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42e57607-31dc-49d1-8377-566d2bda0c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.770 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.770 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48d6bc14-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 kernel: tap48d6bc14-70: entered promiscuous mode
Feb 28 05:38:25 np0005634017 NetworkManager[49805]: <info>  [1772275105.7754] manager: (tap48d6bc14-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.780 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48d6bc14-70, col_values=(('external_ids', {'iface-id': '6091ce53-d3cc-490d-8054-526a8c4039b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:25 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:25Z|01412|binding|INFO|Releasing lport 6091ce53-d3cc-490d-8054-526a8c4039b0 from this chassis (sb_readonly=0)
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.784 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9694b499-2ad3-4056-923c-e1844bff6f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.786 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:38:25 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.787 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'env', 'PROCESS_TAG=haproxy-48d6bc14-7625-4769-ad1d-6c202dc94953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48d6bc14-7625-4769-ad1d-6c202dc94953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.788 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.921 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275105.9211786, 99d76f35-045f-485f-8e24-62b0c9293154 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.922 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Started (Lifecycle Event)#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.953 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.961 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275105.9226139, 99d76f35-045f-485f-8e24-62b0c9293154 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.962 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:38:25 np0005634017 nova_compute[243452]: 2026-02-28 10:38:25.994 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.000 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.026 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.113 243456 DEBUG nova.compute.manager [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.114 243456 DEBUG oslo_concurrency.lockutils [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.114 243456 DEBUG oslo_concurrency.lockutils [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.114 243456 DEBUG oslo_concurrency.lockutils [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.115 243456 DEBUG nova.compute.manager [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Processing event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.115 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.120 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275106.11997, 99d76f35-045f-485f-8e24-62b0c9293154 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.120 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.122 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.125 243456 INFO nova.virt.libvirt.driver [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance spawned successfully.#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.125 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.155 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.162 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.167 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.168 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.168 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.168 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.169 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.169 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.197 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.239 243456 INFO nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 6.48 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.240 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:26 np0005634017 podman[365353]: 2026-02-28 10:38:26.280587121 +0000 UTC m=+0.089335671 container create 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.302 243456 INFO nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 7.61 seconds to build instance.#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.320 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:26 np0005634017 podman[365353]: 2026-02-28 10:38:26.230955925 +0000 UTC m=+0.039704525 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:38:26 np0005634017 systemd[1]: Started libpod-conmon-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74.scope.
Feb 28 05:38:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3142c94eb22a5779a4e33f0f5abf6a497c89eeb647fdd906caceeb4b98a92608/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:26 np0005634017 podman[365353]: 2026-02-28 10:38:26.378283947 +0000 UTC m=+0.187032597 container init 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:38:26 np0005634017 podman[365353]: 2026-02-28 10:38:26.385573294 +0000 UTC m=+0.194321854 container start 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:38:26 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : New worker (365374) forked
Feb 28 05:38:26 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : Loading success.
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.850 243456 DEBUG nova.network.neutron [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updated VIF entry in instance network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.853 243456 DEBUG nova.network.neutron [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:26 np0005634017 nova_compute[243452]: 2026-02-28 10:38:26.957 243456 DEBUG oslo_concurrency.lockutils [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.230 243456 DEBUG nova.compute.manager [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.232 243456 DEBUG oslo_concurrency.lockutils [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.232 243456 DEBUG oslo_concurrency.lockutils [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.233 243456 DEBUG oslo_concurrency.lockutils [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.234 243456 DEBUG nova.compute.manager [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.234 243456 WARNING nova.compute.manager [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:38:28 np0005634017 nova_compute[243452]: 2026-02-28 10:38:28.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 05:38:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:38:29
Feb 28 05:38:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:38:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:38:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Feb 28 05:38:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:38:29 np0005634017 nova_compute[243452]: 2026-02-28 10:38:29.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.664 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b7:5d 2001:db8:0:1:f816:3eff:fe53:b75d 2001:db8::f816:3eff:fe53:b75d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe53:b75d/64 2001:db8::f816:3eff:fe53:b75d/64', 'neutron:device_id': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a) old=Port_Binding(mac=['fa:16:3e:53:b7:5d 2001:db8::f816:3eff:fe53:b75d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe53:b75d/64', 'neutron:device_id': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.667 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d updated#033[00m
Feb 28 05:38:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.670 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 685f3a92-853c-417a-a00b-ba5c70b02f2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:38:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.671 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[490cfd71-5227-47f7-92e2-a73dd99121dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:38:30 np0005634017 nova_compute[243452]: 2026-02-28 10:38:30.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:30 np0005634017 NetworkManager[49805]: <info>  [1772275110.7062] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Feb 28 05:38:30 np0005634017 NetworkManager[49805]: <info>  [1772275110.7083] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Feb 28 05:38:30 np0005634017 nova_compute[243452]: 2026-02-28 10:38:30.763 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:30Z|01413|binding|INFO|Releasing lport 6091ce53-d3cc-490d-8054-526a8c4039b0 from this chassis (sb_readonly=0)
Feb 28 05:38:30 np0005634017 nova_compute[243452]: 2026-02-28 10:38:30.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:38:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.792 243456 DEBUG nova.compute.manager [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.792 243456 DEBUG nova.compute.manager [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing instance network info cache due to event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.793 243456 DEBUG oslo_concurrency.lockutils [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.793 243456 DEBUG oslo_concurrency.lockutils [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.793 243456 DEBUG nova.network.neutron [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.950 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.951 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.951 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.951 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.952 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.953 243456 INFO nova.compute.manager [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Terminating instance#033[00m
Feb 28 05:38:31 np0005634017 nova_compute[243452]: 2026-02-28 10:38:31.954 243456 DEBUG nova.compute.manager [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:38:31 np0005634017 kernel: tapfc0b52d5-35 (unregistering): left promiscuous mode
Feb 28 05:38:31 np0005634017 NetworkManager[49805]: <info>  [1772275111.9906] device (tapfc0b52d5-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:32Z|01414|binding|INFO|Releasing lport fc0b52d5-3577-4d09-bac1-192cb6c80057 from this chassis (sb_readonly=0)
Feb 28 05:38:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:32Z|01415|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 down in Southbound
Feb 28 05:38:32 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:32Z|01416|binding|INFO|Removing iface tapfc0b52d5-35 ovn-installed in OVS
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.017 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '99d76f35-045f-485f-8e24-62b0c9293154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.018 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 unbound from our chassis#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.019 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48d6bc14-7625-4769-ad1d-6c202dc94953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7226da5-81a7-4b7b-b78c-ce39ec7ddc60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.020 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace which is not needed anymore#033[00m
Feb 28 05:38:32 np0005634017 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Deactivated successfully.
Feb 28 05:38:32 np0005634017 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Consumed 6.437s CPU time.
Feb 28 05:38:32 np0005634017 systemd-machined[209480]: Machine qemu-169-instance-00000088 terminated.
Feb 28 05:38:32 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : haproxy version is 2.8.14-c23fe91
Feb 28 05:38:32 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : path to executable is /usr/sbin/haproxy
Feb 28 05:38:32 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [WARNING]  (365372) : Exiting Master process...
Feb 28 05:38:32 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [ALERT]    (365372) : Current worker (365374) exited with code 143 (Terminated)
Feb 28 05:38:32 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [WARNING]  (365372) : All workers exited. Exiting... (0)
Feb 28 05:38:32 np0005634017 systemd[1]: libpod-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74.scope: Deactivated successfully.
Feb 28 05:38:32 np0005634017 podman[365409]: 2026-02-28 10:38:32.173775391 +0000 UTC m=+0.066844934 container died 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.188 243456 INFO nova.virt.libvirt.driver [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance destroyed successfully.#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.189 243456 DEBUG nova.objects.instance [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 99d76f35-045f-485f-8e24-62b0c9293154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.203 243456 DEBUG nova.virt.libvirt.vif [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1194760241',display_name='tempest-TestNetworkBasicOps-server-1194760241',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1194760241',id=136,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMpYWEfQZAeIGdf778Rcwj2RHzc0cF5hcgXyiKvBbK6MzrGJ4l9QqwgzOi+/5IsYucdlZIzO5dS5nkEroUnidPMu1WEcs4ZeEyF4/2vdCruZF9XUmldWgh1Mor2eQK+6g==',key_name='tempest-TestNetworkBasicOps-1495280618',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-0l56ndn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:26Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=99d76f35-045f-485f-8e24-62b0c9293154,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.204 243456 DEBUG nova.network.os_vif_util [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.204 243456 DEBUG nova.network.os_vif_util [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.205 243456 DEBUG os_vif [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.209 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc0b52d5-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.215 243456 INFO os_vif [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')#033[00m
Feb 28 05:38:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74-userdata-shm.mount: Deactivated successfully.
Feb 28 05:38:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3142c94eb22a5779a4e33f0f5abf6a497c89eeb647fdd906caceeb4b98a92608-merged.mount: Deactivated successfully.
Feb 28 05:38:32 np0005634017 podman[365409]: 2026-02-28 10:38:32.266703273 +0000 UTC m=+0.159772806 container cleanup 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:38:32 np0005634017 systemd[1]: libpod-conmon-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74.scope: Deactivated successfully.
Feb 28 05:38:32 np0005634017 podman[365469]: 2026-02-28 10:38:32.329881101 +0000 UTC m=+0.041432544 container remove 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e471721f-e19a-46ff-a0cd-d3f138a2e2aa]: (4, ('Sat Feb 28 10:38:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74)\n54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74\nSat Feb 28 10:38:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74)\n54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.337 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d3d56-2ff6-4e8c-b7fb-ad99ceb73700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.338 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 kernel: tap48d6bc14-70: left promiscuous mode
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31d201a7-38ee-4b0c-a2c5-e9b2b0819620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.378 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0190e094-d2dd-40da-abdf-25a75aff8e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b05b6ed2-a5ba-451f-b8c9-f217959b1290]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc1e7af-4308-4fbc-b3ce-1ee95a0873bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661279, 'reachable_time': 26375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365482, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 systemd[1]: run-netns-ovnmeta\x2d48d6bc14\x2d7625\x2d4769\x2dad1d\x2d6c202dc94953.mount: Deactivated successfully.
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.399 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:38:32 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.399 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e64f0109-6685-470a-80ed-c097f426be9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.491 243456 INFO nova.virt.libvirt.driver [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deleting instance files /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154_del#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.492 243456 INFO nova.virt.libvirt.driver [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deletion of /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154_del complete#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.683 243456 INFO nova.compute.manager [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.684 243456 DEBUG oslo.service.loopingcall [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.684 243456 DEBUG nova.compute.manager [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:38:32 np0005634017 nova_compute[243452]: 2026-02-28 10:38:32.685 243456 DEBUG nova.network.neutron [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:38:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 188 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Feb 28 05:38:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.165 243456 DEBUG nova.compute.manager [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.166 243456 DEBUG oslo_concurrency.lockutils [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.167 243456 DEBUG oslo_concurrency.lockutils [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.167 243456 DEBUG oslo_concurrency.lockutils [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.168 243456 DEBUG nova.compute.manager [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] No waiting events found dispatching network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.168 243456 DEBUG nova.compute.manager [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:38:34 np0005634017 nova_compute[243452]: 2026-02-28 10:38:34.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 164 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Feb 28 05:38:35 np0005634017 nova_compute[243452]: 2026-02-28 10:38:35.376 243456 DEBUG nova.network.neutron [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updated VIF entry in instance network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:38:35 np0005634017 nova_compute[243452]: 2026-02-28 10:38:35.376 243456 DEBUG nova.network.neutron [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:35 np0005634017 nova_compute[243452]: 2026-02-28 10:38:35.401 243456 DEBUG oslo_concurrency.lockutils [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.022 243456 DEBUG nova.network.neutron [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.048 243456 INFO nova.compute.manager [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 3.36 seconds to deallocate network for instance.#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.103 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.104 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.173 243456 DEBUG oslo_concurrency.processutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.284 243456 DEBUG nova.compute.manager [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.285 243456 DEBUG oslo_concurrency.lockutils [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.285 243456 DEBUG oslo_concurrency.lockutils [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.286 243456 DEBUG oslo_concurrency.lockutils [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.286 243456 DEBUG nova.compute.manager [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.287 243456 WARNING nova.compute.manager [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:38:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:38:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4149854294' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.734 243456 DEBUG oslo_concurrency.processutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.741 243456 DEBUG nova.compute.provider_tree [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.759 243456 DEBUG nova.scheduler.client.report [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.781 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.807 243456 INFO nova.scheduler.client.report [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 99d76f35-045f-485f-8e24-62b0c9293154#033[00m
Feb 28 05:38:36 np0005634017 nova_compute[243452]: 2026-02-28 10:38:36.885 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 153 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.092 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.093 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.108 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.108 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.167 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.167 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.174 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.175 243456 INFO nova.compute.claims [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.270 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:38:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826191562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.812 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.819 243456 DEBUG nova.compute.provider_tree [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.837 243456 DEBUG nova.scheduler.client.report [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.864 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.865 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.918 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.919 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.942 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:38:37 np0005634017 nova_compute[243452]: 2026-02-28 10:38:37.970 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.061 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.062 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.063 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Creating image(s)#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.095 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.135 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.172 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.177 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.269 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.270 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.271 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.271 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.301 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.306 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f6deb920-f186-43f5-9ea0-642f4a6e830e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.542 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f6deb920-f186-43f5-9ea0-642f4a6e830e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.584 243456 DEBUG nova.policy [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.632 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.734 243456 DEBUG nova.objects.instance [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.748 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.748 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Ensure instance console log exists: /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.749 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.749 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:38 np0005634017 nova_compute[243452]: 2026-02-28 10:38:38.749 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 153 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Feb 28 05:38:39 np0005634017 nova_compute[243452]: 2026-02-28 10:38:39.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:39 np0005634017 nova_compute[243452]: 2026-02-28 10:38:39.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:39.827 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:39 np0005634017 nova_compute[243452]: 2026-02-28 10:38:39.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:39 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:39.828 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:38:39 np0005634017 nova_compute[243452]: 2026-02-28 10:38:39.966 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully created port: 5bc174e9-3e24-499b-8f52-0bcff974bf56 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:38:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:40.830 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 187 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002643687577423674 of space, bias 1.0, pg target 0.07931062732271021 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937943896177948 of space, bias 1.0, pg target 0.7481383168853384 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.302374348258836e-07 of space, bias 4.0, pg target 0.0008762849217910602 quantized to 16 (current 16)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:38:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:38:41 np0005634017 nova_compute[243452]: 2026-02-28 10:38:41.982 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully created port: 2fcd845f-8646-4afc-923a-1b7570fbdc36 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:38:42 np0005634017 nova_compute[243452]: 2026-02-28 10:38:42.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:42 np0005634017 nova_compute[243452]: 2026-02-28 10:38:42.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:42 np0005634017 nova_compute[243452]: 2026-02-28 10:38:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:42 np0005634017 nova_compute[243452]: 2026-02-28 10:38:42.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:38:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 28 05:38:43 np0005634017 podman[365694]: 2026-02-28 10:38:43.13332562 +0000 UTC m=+0.066609007 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 05:38:43 np0005634017 podman[365693]: 2026-02-28 10:38:43.173052405 +0000 UTC m=+0.106488056 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:38:43 np0005634017 nova_compute[243452]: 2026-02-28 10:38:43.646 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully updated port: 5bc174e9-3e24-499b-8f52-0bcff974bf56 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:38:43 np0005634017 nova_compute[243452]: 2026-02-28 10:38:43.803 243456 DEBUG nova.compute.manager [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:43 np0005634017 nova_compute[243452]: 2026-02-28 10:38:43.804 243456 DEBUG nova.compute.manager [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:38:43 np0005634017 nova_compute[243452]: 2026-02-28 10:38:43.804 243456 DEBUG oslo_concurrency.lockutils [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:43 np0005634017 nova_compute[243452]: 2026-02-28 10:38:43.805 243456 DEBUG oslo_concurrency.lockutils [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:43 np0005634017 nova_compute[243452]: 2026-02-28 10:38:43.805 243456 DEBUG nova.network.neutron [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:38:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:44 np0005634017 nova_compute[243452]: 2026-02-28 10:38:44.112 243456 DEBUG nova.network.neutron [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:38:44 np0005634017 nova_compute[243452]: 2026-02-28 10:38:44.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:44 np0005634017 nova_compute[243452]: 2026-02-28 10:38:44.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:44 np0005634017 nova_compute[243452]: 2026-02-28 10:38:44.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 28 05:38:45 np0005634017 nova_compute[243452]: 2026-02-28 10:38:45.233 243456 DEBUG nova.network.neutron [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:45 np0005634017 nova_compute[243452]: 2026-02-28 10:38:45.342 243456 DEBUG oslo_concurrency.lockutils [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:38:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722856212' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:38:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:38:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722856212' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.014 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully updated port: 2fcd845f-8646-4afc-923a-1b7570fbdc36 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.031 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.032 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.032 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.124 243456 DEBUG nova.compute.manager [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.124 243456 DEBUG nova.compute.manager [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-2fcd845f-8646-4afc-923a-1b7570fbdc36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.125 243456 DEBUG oslo_concurrency.lockutils [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.233 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:38:46 np0005634017 nova_compute[243452]: 2026-02-28 10:38:46.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 28 05:38:47 np0005634017 nova_compute[243452]: 2026-02-28 10:38:47.187 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275112.186385, 99d76f35-045f-485f-8e24-62b0c9293154 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:47 np0005634017 nova_compute[243452]: 2026-02-28 10:38:47.188 243456 INFO nova.compute.manager [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:38:47 np0005634017 nova_compute[243452]: 2026-02-28 10:38:47.213 243456 DEBUG nova.compute.manager [None req-b3f64d5b-f8bc-4195-a00a-e6827ed1cea9 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:47 np0005634017 nova_compute[243452]: 2026-02-28 10:38:47.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.469 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.470 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.496 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.602 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.603 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.613 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.613 243456 INFO nova.compute.claims [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.741 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.913 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.941 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.942 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance network_info: |[{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.943 243456 DEBUG oslo_concurrency.lockutils [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.944 243456 DEBUG nova.network.neutron [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 2fcd845f-8646-4afc-923a-1b7570fbdc36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.950 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start _get_guest_xml network_info=[{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.956 243456 WARNING nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:38:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.964 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.965 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.976 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.977 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.977 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.978 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.978 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.978 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:38:48 np0005634017 nova_compute[243452]: 2026-02-28 10:38:48.985 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:38:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:38:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540973052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.297 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.305 243456 DEBUG nova.compute.provider_tree [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.326 243456 DEBUG nova.scheduler.client.report [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.359 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.361 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.447 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.447 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.474 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:38:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:38:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122792992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.497 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.517 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.549 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.556 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.590 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.642 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.645 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.645 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Creating image(s)#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.674 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.706 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.738 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.744 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.821 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.823 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.824 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.824 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.852 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:49 np0005634017 nova_compute[243452]: 2026-02-28 10:38:49.857 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1546a5a7-9aea-450c-acbe-689f2b660359_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.042 243456 DEBUG nova.policy [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.117 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1546a5a7-9aea-450c-acbe-689f2b660359_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:38:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107797305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.188 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.195 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.195 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.197 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.198 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.198 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.199 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.200 243456 DEBUG nova.objects.instance [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.206 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.246 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <uuid>f6deb920-f186-43f5-9ea0-642f4a6e830e</uuid>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <name>instance-00000089</name>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1245781750</nova:name>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:38:48</nova:creationTime>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:port uuid="5bc174e9-3e24-499b-8f52-0bcff974bf56">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <nova:port uuid="2fcd845f-8646-4afc-923a-1b7570fbdc36">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb3:ee08" ipVersion="6"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb3:ee08" ipVersion="6"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <entry name="serial">f6deb920-f186-43f5-9ea0-642f4a6e830e</entry>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <entry name="uuid">f6deb920-f186-43f5-9ea0-642f4a6e830e</entry>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f6deb920-f186-43f5-9ea0-642f4a6e830e_disk">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:40:80:6e"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <target dev="tap5bc174e9-3e"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b3:ee:08"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <target dev="tap2fcd845f-86"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/console.log" append="off"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:38:50 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:38:50 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:38:50 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:38:50 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.248 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Preparing to wait for external event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.249 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Preparing to wait for external event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.251 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.251 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.252 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.252 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.253 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.254 243456 DEBUG os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.255 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.256 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.261 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.261 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bc174e9-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.262 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bc174e9-3e, col_values=(('external_ids', {'iface-id': '5bc174e9-3e24-499b-8f52-0bcff974bf56', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:80:6e', 'vm-uuid': 'f6deb920-f186-43f5-9ea0-642f4a6e830e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 NetworkManager[49805]: <info>  [1772275130.2649] manager: (tap5bc174e9-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.272 243456 INFO os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e')#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.272 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.273 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.274 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.274 243456 DEBUG os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.275 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.310 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fcd845f-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.311 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fcd845f-86, col_values=(('external_ids', {'iface-id': '2fcd845f-8646-4afc-923a-1b7570fbdc36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:ee:08', 'vm-uuid': 'f6deb920-f186-43f5-9ea0-642f4a6e830e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:50 np0005634017 NetworkManager[49805]: <info>  [1772275130.3136] manager: (tap2fcd845f-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.325 243456 DEBUG nova.objects.instance [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1546a5a7-9aea-450c-acbe-689f2b660359 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.328 243456 INFO os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86')#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.338 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.339 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Ensure instance console log exists: /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.339 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.340 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.340 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.378 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.379 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.379 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:40:80:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.379 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:b3:ee:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.380 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Using config drive#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.404 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.773 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Creating config drive at /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.780 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjo_2x8o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.839 243456 DEBUG nova.network.neutron [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated VIF entry in instance network info cache for port 2fcd845f-8646-4afc-923a-1b7570fbdc36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.841 243456 DEBUG nova.network.neutron [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.858 243456 DEBUG oslo_concurrency.lockutils [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.924 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjo_2x8o" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.959 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:50 np0005634017 nova_compute[243452]: 2026-02-28 10:38:50.963 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 211 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 MiB/s wr, 27 op/s
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.054 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Successfully updated port: fc0b52d5-3577-4d09-bac1-192cb6c80057 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.076 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.077 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.077 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.151 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.152 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deleting local config drive /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config because it was imported into RBD.#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.230 243456 DEBUG nova.compute.manager [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.2310] manager: (tap5bc174e9-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/589)
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.230 243456 DEBUG nova.compute.manager [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Refreshing instance network info cache due to event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.231 243456 DEBUG oslo_concurrency.lockutils [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:51 np0005634017 kernel: tap5bc174e9-3e: entered promiscuous mode
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01417|binding|INFO|Claiming lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 for this chassis.
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01418|binding|INFO|5bc174e9-3e24-499b-8f52-0bcff974bf56: Claiming fa:16:3e:40:80:6e 10.100.0.9
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.246 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:80:6e 10.100.0.9'], port_security=['fa:16:3e:40:80:6e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5bc174e9-3e24-499b-8f52-0bcff974bf56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.248 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5bc174e9-3e24-499b-8f52-0bcff974bf56 in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee bound to our chassis#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.249 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.250 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 386ad93c-8128-414d-bc97-7c3f009f2aee#033[00m
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.2530] manager: (tap2fcd845f-86): new Tun device (/org/freedesktop/NetworkManager/Devices/590)
Feb 28 05:38:51 np0005634017 kernel: tap2fcd845f-86: entered promiscuous mode
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01419|binding|INFO|Setting lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 ovn-installed in OVS
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01420|binding|INFO|Setting lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 up in Southbound
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01421|if_status|INFO|Not updating pb chassis for 2fcd845f-8646-4afc-923a-1b7570fbdc36 now as sb is readonly
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01422|binding|INFO|Claiming lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 for this chassis.
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01423|binding|INFO|2fcd845f-8646-4afc-923a-1b7570fbdc36: Claiming fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e445d9-45fa-44d9-bf56-5baca00e22f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.268 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap386ad93c-81 in ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.271 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], port_security=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb3:ee08/64 2001:db8::f816:3eff:feb3:ee08/64', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2fcd845f-8646-4afc-923a-1b7570fbdc36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.271 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap386ad93c-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.271 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbaf9c5-d22f-40f8-b6d0-16a8ccf7334c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01424|binding|INFO|Setting lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 ovn-installed in OVS
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01425|binding|INFO|Setting lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 up in Southbound
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.275 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f982ca95-3820-4b27-89c8-f247ddaf71a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:51 np0005634017 systemd-udevd[366069]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:38:51 np0005634017 systemd-udevd[366071]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.293 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fd57ff43-a2c8-4e01-8219-b824b3170ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.3062] device (tap2fcd845f-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.3073] device (tap2fcd845f-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.3088] device (tap5bc174e9-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.3103] device (tap5bc174e9-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:38:51 np0005634017 systemd-machined[209480]: New machine qemu-170-instance-00000089.
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d43657c9-a366-41dd-9262-e9ed388ddd2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:38:51 np0005634017 systemd[1]: Started Virtual Machine qemu-170-instance-00000089.
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb1c1e7-7ccf-4152-8f58-7f19c801de46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.354 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb4a04b-0b42-4e15-bd42-7477903008f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.3564] manager: (tap386ad93c-80): new Veth device (/org/freedesktop/NetworkManager/Devices/591)
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbbf49d-c93d-4efb-be0b-0e6b350836a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.387 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[77287526-1a95-49ee-a3db-e634b63ac63c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.4041] device (tap386ad93c-80): carrier: link connected
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.405 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9e4022-d552-41cb-a239-f9018c7f666c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.425 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77bb0242-eb98-4c6c-9778-3b852094dcb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366104, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.443 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9069f0b3-a82e-40f5-894b-6d0017a89d28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:f3db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663869, 'tstamp': 663869}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366105, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c928f557-cd7c-48db-9527-a97d9beaee95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366106, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba1edb4-88bb-498b-a2c7-7f4a63351e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.556 243456 DEBUG nova.compute.manager [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.556 243456 DEBUG oslo_concurrency.lockutils [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.560 243456 DEBUG oslo_concurrency.lockutils [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.560 243456 DEBUG oslo_concurrency.lockutils [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.561 243456 DEBUG nova.compute.manager [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Processing event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[119b9ee9-4565-4550-b790-b3ce3cdeb06c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.588 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.588 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386ad93c-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:51 np0005634017 NetworkManager[49805]: <info>  [1772275131.5924] manager: (tap386ad93c-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Feb 28 05:38:51 np0005634017 kernel: tap386ad93c-80: entered promiscuous mode
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.596 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap386ad93c-80, col_values=(('external_ids', {'iface-id': '1c423105-d23e-4da9-afeb-9405c7fd0060'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:51Z|01426|binding|INFO|Releasing lport 1c423105-d23e-4da9-afeb-9405c7fd0060 from this chassis (sb_readonly=0)
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.602 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/386ad93c-8128-414d-bc97-7c3f009f2aee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/386ad93c-8128-414d-bc97-7c3f009f2aee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.604 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6446c7-b3bf-480e-851f-d2e35b2ae0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.605 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/386ad93c-8128-414d-bc97-7c3f009f2aee.pid.haproxy
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:38:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.606 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'env', 'PROCESS_TAG=haproxy-386ad93c-8128-414d-bc97-7c3f009f2aee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/386ad93c-8128-414d-bc97-7c3f009f2aee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.789 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275131.7887, f6deb920-f186-43f5-9ea0-642f4a6e830e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.789 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.806 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.813 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275131.7896054, f6deb920-f186-43f5-9ea0-642f4a6e830e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.814 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.849 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.853 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.873 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:38:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:38:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518945075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.896 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:38:51 np0005634017 nova_compute[243452]: 2026-02-28 10:38:51.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:38:51 np0005634017 podman[366203]: 2026-02-28 10:38:51.997795256 +0000 UTC m=+0.066739861 container create 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.026 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:52 np0005634017 systemd[1]: Started libpod-conmon-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d.scope.
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.047 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.048 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance network_info: |[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.048 243456 DEBUG oslo_concurrency.lockutils [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.048 243456 DEBUG nova.network.neutron [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Refreshing network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:38:52 np0005634017 podman[366203]: 2026-02-28 10:38:51.957609318 +0000 UTC m=+0.026554003 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.052 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start _get_guest_xml network_info=[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.057 243456 WARNING nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.062 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.063 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.071 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.072 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.073 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.073 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.076 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.076 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.079 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353ad3c61fc41f7332854dd7161b413a55568a38d52cee91efbf7a09e713c9f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:52 np0005634017 podman[366203]: 2026-02-28 10:38:52.12118698 +0000 UTC m=+0.190131675 container init 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:38:52 np0005634017 podman[366203]: 2026-02-28 10:38:52.126671325 +0000 UTC m=+0.195615970 container start 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:38:52 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : New worker (366225) forked
Feb 28 05:38:52 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : Loading success.
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.207 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2fcd845f-8646-4afc-923a-1b7570fbdc36 in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.210 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 685f3a92-853c-417a-a00b-ba5c70b02f2d#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.220 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08c1e369-7d97-44ee-a73f-c464bbedaa4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.222 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap685f3a92-81 in ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.224 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap685f3a92-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d32961-3975-4194-9c08-c181bbac7efa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.226 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c63ed08-3b32-4b70-a59d-7b7bb84f809f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.239 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[be3303b3-c285-4964-9ed3-ee1a3e60e6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.256 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a77ca5d-3b51-43d7-a132-085a6771ea77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.275 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.277 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.96124897431582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.277 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.277 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.286 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[822aa432-2021-4194-98aa-7430c9a17fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.291 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb0a65b-50e1-4b5f-81e1-b1dcdb02a621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 NetworkManager[49805]: <info>  [1772275132.2929] manager: (tap685f3a92-80): new Veth device (/org/freedesktop/NetworkManager/Devices/593)
Feb 28 05:38:52 np0005634017 systemd-udevd[366094]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.322 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8ae48-57f6-4729-98cf-282a98163c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.326 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2ad932-58d8-437a-8f0e-875df1e85452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 NetworkManager[49805]: <info>  [1772275132.3528] device (tap685f3a92-80): carrier: link connected
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[95bd1991-3b8f-4327-9b8e-72a4b19d2418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.375 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f08ac4-a8de-4378-8ad6-345f74904743]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366263, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.393 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[594d4c6b-9863-4fdb-8617-0e6924a45f78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:b75d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663963, 'tstamp': 663963}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366264, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.412 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9c27e2c5-d2ee-4cf1-93d8-406000c8f2e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366265, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.451 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f0777-377b-4893-a1f5-763e2afa7818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.483 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b24da5b1-f7e3-484b-9899-1120dcd0576f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.485 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.486 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.487 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685f3a92-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:52 np0005634017 NetworkManager[49805]: <info>  [1772275132.4902] manager: (tap685f3a92-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Feb 28 05:38:52 np0005634017 kernel: tap685f3a92-80: entered promiscuous mode
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.498 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap685f3a92-80, col_values=(('external_ids', {'iface-id': '86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:52Z|01427|binding|INFO|Releasing lport 86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a from this chassis (sb_readonly=0)
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.503 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/685f3a92-853c-417a-a00b-ba5c70b02f2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/685f3a92-853c-417a-a00b-ba5c70b02f2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.504 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dffb88c0-0e25-4005-ade2-4e57243112c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.505 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/685f3a92-853c-417a-a00b-ba5c70b02f2d.pid.haproxy
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:38:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.506 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'env', 'PROCESS_TAG=haproxy-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/685f3a92-853c-417a-a00b-ba5c70b02f2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.521 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f6deb920-f186-43f5-9ea0-642f4a6e830e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.522 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 1546a5a7-9aea-450c-acbe-689f2b660359 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.522 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.522 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.585 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:38:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685745294' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.659 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.686 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:52 np0005634017 nova_compute[243452]: 2026-02-28 10:38:52.692 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:53 np0005634017 podman[366334]: 2026-02-28 10:38:52.817578699 +0000 UTC m=+0.031206225 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:38:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 232 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Feb 28 05:38:53 np0005634017 podman[366334]: 2026-02-28 10:38:53.036865748 +0000 UTC m=+0.250493264 container create 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:38:53 np0005634017 systemd[1]: Started libpod-conmon-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003.scope.
Feb 28 05:38:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca0bd47b535c15e2228c3436fbfee03368e983c319a518449256dd29eafc04f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:53 np0005634017 podman[366334]: 2026-02-28 10:38:53.120459215 +0000 UTC m=+0.334086761 container init 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 05:38:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:38:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233696413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:38:53 np0005634017 podman[366334]: 2026-02-28 10:38:53.127759132 +0000 UTC m=+0.341386648 container start 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.149 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.156 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:38:53 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : New worker (366376) forked
Feb 28 05:38:53 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : Loading success.
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.176 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.205 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.206 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:38:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730403303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.257 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.259 243456 DEBUG nova.virt.libvirt.vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734126459',display_name='tempest-TestNetworkBasicOps-server-734126459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734126459',id=138,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTD6voq7CUXfm7WlI8qSl7cssyv4IdTFz+Rehs0Uzk9fFDHOvsC3uyrogbmbPToHSRv6RvbfGhcR/wGG6aXvaGAP4w+wIGU1b4ql+5cXN1wXdt7cbhMO8QZYhAJcKUF+w==',key_name='tempest-TestNetworkBasicOps-1975385676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-ai5i2y9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=1546a5a7-9aea-450c-acbe-689f2b660359,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.259 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.260 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.262 243456 DEBUG nova.objects.instance [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1546a5a7-9aea-450c-acbe-689f2b660359 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.281 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <uuid>1546a5a7-9aea-450c-acbe-689f2b660359</uuid>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <name>instance-0000008a</name>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-734126459</nova:name>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:38:52</nova:creationTime>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <nova:port uuid="fc0b52d5-3577-4d09-bac1-192cb6c80057">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <entry name="serial">1546a5a7-9aea-450c-acbe-689f2b660359</entry>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <entry name="uuid">1546a5a7-9aea-450c-acbe-689f2b660359</entry>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1546a5a7-9aea-450c-acbe-689f2b660359_disk">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/1546a5a7-9aea-450c-acbe-689f2b660359_disk.config">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:68:09:7e"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <target dev="tapfc0b52d5-35"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/console.log" append="off"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:38:53 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:38:53 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:38:53 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:38:53 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Preparing to wait for external event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.285 243456 DEBUG nova.virt.libvirt.vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734126459',display_name='tempest-TestNetworkBasicOps-server-734126459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734126459',id=138,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTD6voq7CUXfm7WlI8qSl7cssyv4IdTFz+Rehs0Uzk9fFDHOvsC3uyrogbmbPToHSRv6RvbfGhcR/wGG6aXvaGAP4w+wIGU1b4ql+5cXN1wXdt7cbhMO8QZYhAJcKUF+w==',key_name='tempest-TestNetworkBasicOps-1975385676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-ai5i2y9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=1546a5a7-9aea-450c-acbe-689f2b660359,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.285 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.286 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.287 243456 DEBUG os_vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.287 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.291 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc0b52d5-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc0b52d5-35, col_values=(('external_ids', {'iface-id': 'fc0b52d5-3577-4d09-bac1-192cb6c80057', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:09:7e', 'vm-uuid': '1546a5a7-9aea-450c-acbe-689f2b660359'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:53 np0005634017 NetworkManager[49805]: <info>  [1772275133.2943] manager: (tapfc0b52d5-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.296 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.300 243456 INFO os_vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.336 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.337 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.337 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Processing event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.339 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.339 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.339 243456 WARNING nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.341 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.347 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275133.3467934, f6deb920-f186-43f5-9ea0-642f4a6e830e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.348 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.349 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.358 243456 INFO nova.virt.libvirt.driver [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance spawned successfully.#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.359 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.362 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.363 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.363 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:68:09:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.364 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Using config drive#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.388 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.397 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.402 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.406 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.406 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.406 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.407 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.407 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.408 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.433 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.469 243456 INFO nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 15.41 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.470 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.526 243456 INFO nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 16.38 seconds to build instance.#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.540 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.640 243456 DEBUG nova.compute.manager [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.641 243456 DEBUG oslo_concurrency.lockutils [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.641 243456 DEBUG oslo_concurrency.lockutils [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.641 243456 DEBUG oslo_concurrency.lockutils [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.642 243456 DEBUG nova.compute.manager [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.642 243456 WARNING nova.compute.manager [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.681 243456 DEBUG nova.network.neutron [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updated VIF entry in instance network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.682 243456 DEBUG nova.network.neutron [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:38:53 np0005634017 nova_compute[243452]: 2026-02-28 10:38:53.697 243456 DEBUG oslo_concurrency.lockutils [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:38:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.019 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Creating config drive at /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.022 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd97okymh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.166 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd97okymh" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.199 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.203 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.358 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.359 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deleting local config drive /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config because it was imported into RBD.#033[00m
Feb 28 05:38:54 np0005634017 kernel: tapfc0b52d5-35: entered promiscuous mode
Feb 28 05:38:54 np0005634017 NetworkManager[49805]: <info>  [1772275134.4132] manager: (tapfc0b52d5-35): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Feb 28 05:38:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:54Z|01428|binding|INFO|Claiming lport fc0b52d5-3577-4d09-bac1-192cb6c80057 for this chassis.
Feb 28 05:38:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:54Z|01429|binding|INFO|fc0b52d5-3577-4d09-bac1-192cb6c80057: Claiming fa:16:3e:68:09:7e 10.100.0.7
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.426 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1546a5a7-9aea-450c-acbe-689f2b660359', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '7', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.427 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 bound to our chassis#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.428 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48d6bc14-7625-4769-ad1d-6c202dc94953#033[00m
Feb 28 05:38:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:54Z|01430|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 ovn-installed in OVS
Feb 28 05:38:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:54Z|01431|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 up in Southbound
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bea0a2e1-7c81-48ad-a665-b9c6c44466fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.443 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48d6bc14-71 in ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.444 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48d6bc14-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.444 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b8c36a-3820-4443-aa84-0c41913f05ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65e4c64c-e035-4c2b-92e9-9ccaa86e05de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.460 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ab91f93b-1b2a-4ca5-badc-e07a277c0839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 systemd-machined[209480]: New machine qemu-171-instance-0000008a.
Feb 28 05:38:54 np0005634017 systemd[1]: Started Virtual Machine qemu-171-instance-0000008a.
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d067cd4c-af0e-4033-9d9a-a72c610b004e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 systemd-udevd[366462]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:38:54 np0005634017 NetworkManager[49805]: <info>  [1772275134.4867] device (tapfc0b52d5-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:38:54 np0005634017 NetworkManager[49805]: <info>  [1772275134.4874] device (tapfc0b52d5-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.506 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b7d2b8-8485-42ec-97c7-7fab7fd03e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 NetworkManager[49805]: <info>  [1772275134.5163] manager: (tap48d6bc14-70): new Veth device (/org/freedesktop/NetworkManager/Devices/597)
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.514 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30ffedc3-b074-4d80-8062-ba77d19d548a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.546 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eec08c06-e672-4635-baf1-493a4b6ee133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.550 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3c4a7e-f75e-4949-b5a4-0872f4ec6fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 NetworkManager[49805]: <info>  [1772275134.5684] device (tap48d6bc14-70): carrier: link connected
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.574 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[038aeddf-b0a2-44cc-b59b-9ac16865b1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.592 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5b2d4d-0c09-4c5b-9aa4-4deda5b64cbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664185, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366492, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.606 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5819ab2c-ec05-4370-9f6a-d7f03d6d9ea0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:e19e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664185, 'tstamp': 664185}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366493, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1296819f-5891-4531-a18d-113edc569b9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664185, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366494, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5388088-03b0-4bf3-8290-49520777fad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.724 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0199d91c-02b8-4947-b4f7-fedc7124c3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.725 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.725 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.726 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48d6bc14-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 kernel: tap48d6bc14-70: entered promiscuous mode
Feb 28 05:38:54 np0005634017 NetworkManager[49805]: <info>  [1772275134.7304] manager: (tap48d6bc14-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.730 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48d6bc14-70, col_values=(('external_ids', {'iface-id': '6091ce53-d3cc-490d-8054-526a8c4039b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:54Z|01432|binding|INFO|Releasing lport 6091ce53-d3cc-490d-8054-526a8c4039b0 from this chassis (sb_readonly=0)
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.734 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f46ed904-50f8-4da2-9f87-f1ca2ecbfae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.736 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.737 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'env', 'PROCESS_TAG=haproxy-48d6bc14-7625-4769-ad1d-6c202dc94953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48d6bc14-7625-4769-ad1d-6c202dc94953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.855 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275134.8553002, 1546a5a7-9aea-450c-acbe-689f2b660359 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.856 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Started (Lifecycle Event)#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.878 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.891 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275134.8555055, 1546a5a7-9aea-450c-acbe-689f2b660359 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.891 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.917 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.922 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:38:54 np0005634017 nova_compute[243452]: 2026-02-28 10:38:54.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:38:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 730 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 05:38:55 np0005634017 podman[366568]: 2026-02-28 10:38:55.089397247 +0000 UTC m=+0.052097236 container create bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:38:55 np0005634017 systemd[1]: Started libpod-conmon-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704.scope.
Feb 28 05:38:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:55 np0005634017 podman[366568]: 2026-02-28 10:38:55.063956816 +0000 UTC m=+0.026656815 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:38:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98216c588452612fd8d98a1b43b07a762efeac7094a299d039a9b7dd839c2a5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:55 np0005634017 podman[366568]: 2026-02-28 10:38:55.181493055 +0000 UTC m=+0.144193124 container init bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:38:55 np0005634017 podman[366568]: 2026-02-28 10:38:55.186587439 +0000 UTC m=+0.149287458 container start bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:38:55 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : New worker (366589) forked
Feb 28 05:38:55 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : Loading success.
Feb 28 05:38:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:38:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:38:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:38:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.750 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.751 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Processing event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 WARNING nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.754 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.768 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.769 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275135.768259, 1546a5a7-9aea-450c-acbe-689f2b660359 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.769 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.775 243456 INFO nova.virt.libvirt.driver [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance spawned successfully.#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.775 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.787 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.789 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.806 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.807 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.807 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.808 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.808 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.808 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.823 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.876 243456 INFO nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 6.23 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.877 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.946 243456 INFO nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 7.38 seconds to build instance.#033[00m
Feb 28 05:38:55 np0005634017 nova_compute[243452]: 2026-02-28 10:38:55.973 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:38:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.798214764 +0000 UTC m=+0.043891584 container create 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:38:56 np0005634017 systemd[1]: Started libpod-conmon-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope.
Feb 28 05:38:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.874196885 +0000 UTC m=+0.119873725 container init 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.779191055 +0000 UTC m=+0.024867925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.881742879 +0000 UTC m=+0.127419699 container start 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.885269969 +0000 UTC m=+0.130946789 container attach 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:38:56 np0005634017 heuristic_perlman[366827]: 167 167
Feb 28 05:38:56 np0005634017 systemd[1]: libpod-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope: Deactivated successfully.
Feb 28 05:38:56 np0005634017 conmon[366827]: conmon 75475741eec07c33abaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope/container/memory.events
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.890483327 +0000 UTC m=+0.136160157 container died 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:38:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3c767b7b819ffff5f7190349d56156435bd73b8813d80a91cc0c07ed00ea6104-merged.mount: Deactivated successfully.
Feb 28 05:38:56 np0005634017 podman[366811]: 2026-02-28 10:38:56.926530057 +0000 UTC m=+0.172206877 container remove 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:38:56 np0005634017 systemd[1]: libpod-conmon-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope: Deactivated successfully.
Feb 28 05:38:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.125702446 +0000 UTC m=+0.077651570 container create c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.074159967 +0000 UTC m=+0.026109191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:38:57 np0005634017 systemd[1]: Started libpod-conmon-c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77.scope.
Feb 28 05:38:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.230265827 +0000 UTC m=+0.182215051 container init c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.237402359 +0000 UTC m=+0.189351493 container start c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.241290249 +0000 UTC m=+0.193239423 container attach c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:38:57 np0005634017 flamboyant_maxwell[366865]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:38:57 np0005634017 flamboyant_maxwell[366865]: --> All data devices are unavailable
Feb 28 05:38:57 np0005634017 systemd[1]: libpod-c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77.scope: Deactivated successfully.
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.749042397 +0000 UTC m=+0.700991531 container died c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:38:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0-merged.mount: Deactivated successfully.
Feb 28 05:38:57 np0005634017 podman[366849]: 2026-02-28 10:38:57.817929937 +0000 UTC m=+0.769879071 container remove c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:38:57 np0005634017 systemd[1]: libpod-conmon-c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77.scope: Deactivated successfully.
Feb 28 05:38:57 np0005634017 nova_compute[243452]: 2026-02-28 10:38:57.857 243456 DEBUG nova.compute.manager [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:38:57 np0005634017 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG nova.compute.manager [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:38:57 np0005634017 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG oslo_concurrency.lockutils [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:38:57 np0005634017 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG oslo_concurrency.lockutils [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:38:57 np0005634017 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG nova.network.neutron [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:38:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:57.879 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.258193854 +0000 UTC m=+0.052788106 container create e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:38:58 np0005634017 nova_compute[243452]: 2026-02-28 10:38:58.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:58 np0005634017 systemd[1]: Started libpod-conmon-e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1.scope.
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.233421953 +0000 UTC m=+0.028016285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:38:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.363619189 +0000 UTC m=+0.158213471 container init e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.370453243 +0000 UTC m=+0.165047525 container start e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:38:58 np0005634017 reverent_zhukovsky[366976]: 167 167
Feb 28 05:38:58 np0005634017 systemd[1]: libpod-e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1.scope: Deactivated successfully.
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.377102511 +0000 UTC m=+0.171696783 container attach e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.377445961 +0000 UTC m=+0.172040203 container died e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:38:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a361e3c784d79b109ee2772374c6796cd2024f8501b3b403bcb456af43d03154-merged.mount: Deactivated successfully.
Feb 28 05:38:58 np0005634017 podman[366960]: 2026-02-28 10:38:58.425353227 +0000 UTC m=+0.219947509 container remove e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:38:58 np0005634017 systemd[1]: libpod-conmon-e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1.scope: Deactivated successfully.
Feb 28 05:38:58 np0005634017 podman[366999]: 2026-02-28 10:38:58.628546661 +0000 UTC m=+0.062923353 container create d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:38:58 np0005634017 systemd[1]: Started libpod-conmon-d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6.scope.
Feb 28 05:38:58 np0005634017 podman[366999]: 2026-02-28 10:38:58.605286212 +0000 UTC m=+0.039662904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:38:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:58 np0005634017 podman[366999]: 2026-02-28 10:38:58.726552346 +0000 UTC m=+0.160929068 container init d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:38:58 np0005634017 podman[366999]: 2026-02-28 10:38:58.73446068 +0000 UTC m=+0.168837342 container start d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:38:58 np0005634017 podman[366999]: 2026-02-28 10:38:58.738227937 +0000 UTC m=+0.172604639 container attach d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:38:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]: {
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:    "0": [
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:        {
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "devices": [
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "/dev/loop3"
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            ],
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "lv_name": "ceph_lv0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "lv_size": "21470642176",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "name": "ceph_lv0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "tags": {
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.cluster_name": "ceph",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.crush_device_class": "",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.encrypted": "0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.objectstore": "bluestore",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.osd_id": "0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.type": "block",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.vdo": "0",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:                "ceph.with_tpm": "0"
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            },
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "type": "block",
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:            "vg_name": "ceph_vg0"
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:        }
Feb 28 05:38:58 np0005634017 fervent_einstein[367015]:    ],
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:    "1": [
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:        {
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "devices": [
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "/dev/loop4"
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            ],
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_name": "ceph_lv1",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_size": "21470642176",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "name": "ceph_lv1",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "tags": {
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.cluster_name": "ceph",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.crush_device_class": "",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.encrypted": "0",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.objectstore": "bluestore",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.osd_id": "1",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.type": "block",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.vdo": "0",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.with_tpm": "0"
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            },
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "type": "block",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "vg_name": "ceph_vg1"
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:        }
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:    ],
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:    "2": [
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:        {
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "devices": [
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "/dev/loop5"
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            ],
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_name": "ceph_lv2",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_size": "21470642176",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "name": "ceph_lv2",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "tags": {
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.cluster_name": "ceph",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.crush_device_class": "",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.encrypted": "0",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.objectstore": "bluestore",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.osd_id": "2",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.type": "block",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.vdo": "0",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:                "ceph.with_tpm": "0"
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            },
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "type": "block",
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:            "vg_name": "ceph_vg2"
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:        }
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]:    ]
Feb 28 05:38:59 np0005634017 fervent_einstein[367015]: }
Feb 28 05:38:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Feb 28 05:38:59 np0005634017 systemd[1]: libpod-d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 podman[366999]: 2026-02-28 10:38:59.037343037 +0000 UTC m=+0.471719679 container died d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:38:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b-merged.mount: Deactivated successfully.
Feb 28 05:38:59 np0005634017 podman[366999]: 2026-02-28 10:38:59.076705641 +0000 UTC m=+0.511082293 container remove d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:38:59 np0005634017 systemd[1]: libpod-conmon-d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.095 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.096 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.097 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.097 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.097 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.099 243456 INFO nova.compute.manager [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Terminating instance#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.099 243456 DEBUG nova.compute.manager [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:38:59 np0005634017 kernel: tapfc0b52d5-35 (unregistering): left promiscuous mode
Feb 28 05:38:59 np0005634017 NetworkManager[49805]: <info>  [1772275139.1711] device (tapfc0b52d5-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:59Z|01433|binding|INFO|Releasing lport fc0b52d5-3577-4d09-bac1-192cb6c80057 from this chassis (sb_readonly=0)
Feb 28 05:38:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:59Z|01434|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 down in Southbound
Feb 28 05:38:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:38:59Z|01435|binding|INFO|Removing iface tapfc0b52d5-35 ovn-installed in OVS
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.209 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1546a5a7-9aea-450c-acbe-689f2b660359', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '9', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.212 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 unbound from our chassis#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.215 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48d6bc14-7625-4769-ad1d-6c202dc94953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.216 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14c9d6b6-950b-4c32-ab3c-d21d9fd8d8cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.217 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace which is not needed anymore#033[00m
Feb 28 05:38:59 np0005634017 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008a.scope: Consumed 3.693s CPU time.
Feb 28 05:38:59 np0005634017 systemd-machined[209480]: Machine qemu-171-instance-0000008a terminated.
Feb 28 05:38:59 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : haproxy version is 2.8.14-c23fe91
Feb 28 05:38:59 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : path to executable is /usr/sbin/haproxy
Feb 28 05:38:59 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [WARNING]  (366587) : Exiting Master process...
Feb 28 05:38:59 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [WARNING]  (366587) : Exiting Master process...
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.337 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [ALERT]    (366587) : Current worker (366589) exited with code 143 (Terminated)
Feb 28 05:38:59 np0005634017 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [WARNING]  (366587) : All workers exited. Exiting... (0)
Feb 28 05:38:59 np0005634017 systemd[1]: libpod-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.343 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 podman[367110]: 2026-02-28 10:38:59.348550969 +0000 UTC m=+0.044124341 container died bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.354 243456 INFO nova.virt.libvirt.driver [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance destroyed successfully.#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.355 243456 DEBUG nova.objects.instance [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 1546a5a7-9aea-450c-acbe-689f2b660359 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:38:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704-userdata-shm.mount: Deactivated successfully.
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.377 243456 DEBUG nova.virt.libvirt.vif [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734126459',display_name='tempest-TestNetworkBasicOps-server-734126459',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734126459',id=138,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTD6voq7CUXfm7WlI8qSl7cssyv4IdTFz+Rehs0Uzk9fFDHOvsC3uyrogbmbPToHSRv6RvbfGhcR/wGG6aXvaGAP4w+wIGU1b4ql+5cXN1wXdt7cbhMO8QZYhAJcKUF+w==',key_name='tempest-TestNetworkBasicOps-1975385676',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-ai5i2y9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:55Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=1546a5a7-9aea-450c-acbe-689f2b660359,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.378 243456 DEBUG nova.network.os_vif_util [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.379 243456 DEBUG nova.network.os_vif_util [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.379 243456 DEBUG os_vif [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.382 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc0b52d5-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.384 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.388 243456 INFO os_vif [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')#033[00m
Feb 28 05:38:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-98216c588452612fd8d98a1b43b07a762efeac7094a299d039a9b7dd839c2a5b-merged.mount: Deactivated successfully.
Feb 28 05:38:59 np0005634017 podman[367110]: 2026-02-28 10:38:59.402100115 +0000 UTC m=+0.097673507 container cleanup bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:38:59 np0005634017 systemd[1]: libpod-conmon-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 podman[367163]: 2026-02-28 10:38:59.470613075 +0000 UTC m=+0.047423404 container remove bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.477 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2872113d-341d-431a-8bd0-1877ab36f664]: (4, ('Sat Feb 28 10:38:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704)\nbd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704\nSat Feb 28 10:38:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704)\nbd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81d98239-0198-435e-9d8d-34c55e98e2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 kernel: tap48d6bc14-70: left promiscuous mode
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.491 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[077e92c1-b0dc-46de-ba02-deb9a1b4afcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d98160-fc76-43be-b4ae-66ca1bcd4d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.510 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1280daef-f251-4dc6-b53f-41dca3b4cb90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.528 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d557efc5-1618-421e-bdf3-b0f298a8ff1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664179, 'reachable_time': 44909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367194, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 systemd[1]: run-netns-ovnmeta\x2d48d6bc14\x2d7625\x2d4769\x2dad1d\x2d6c202dc94953.mount: Deactivated successfully.
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.535 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:38:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.535 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1aa000-5724-4b40-8951-391d93d78290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.564579646 +0000 UTC m=+0.041793504 container create e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:38:59 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:38:59 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:38:59 np0005634017 systemd[1]: Started libpod-conmon-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope.
Feb 28 05:38:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.545692681 +0000 UTC m=+0.022906339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.656014875 +0000 UTC m=+0.133228523 container init e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.663799536 +0000 UTC m=+0.141013164 container start e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.667632814 +0000 UTC m=+0.144846482 container attach e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:38:59 np0005634017 vigilant_heyrovsky[367208]: 167 167
Feb 28 05:38:59 np0005634017 systemd[1]: libpod-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 conmon[367208]: conmon e5ad60a99096d10d1cf0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope/container/memory.events
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.672548003 +0000 UTC m=+0.149761631 container died e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:38:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-97cfdd2ce4842ceb5a41ebf8e455f8e9c609025d5db3eaaedd9a66d5cad9c5e9-merged.mount: Deactivated successfully.
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.714 243456 INFO nova.virt.libvirt.driver [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deleting instance files /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359_del#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.716 243456 INFO nova.virt.libvirt.driver [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deletion of /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359_del complete#033[00m
Feb 28 05:38:59 np0005634017 podman[367192]: 2026-02-28 10:38:59.719824422 +0000 UTC m=+0.197038060 container remove e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:38:59 np0005634017 systemd[1]: libpod-conmon-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope: Deactivated successfully.
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.856 243456 INFO nova.compute.manager [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.858 243456 DEBUG oslo.service.loopingcall [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.859 243456 DEBUG nova.compute.manager [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:38:59 np0005634017 nova_compute[243452]: 2026-02-28 10:38:59.859 243456 DEBUG nova.network.neutron [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:38:59 np0005634017 podman[367234]: 2026-02-28 10:38:59.884921437 +0000 UTC m=+0.045908571 container create 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:38:59 np0005634017 systemd[1]: Started libpod-conmon-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope.
Feb 28 05:38:59 np0005634017 podman[367234]: 2026-02-28 10:38:59.864052086 +0000 UTC m=+0.025039240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:38:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:38:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:38:59 np0005634017 podman[367234]: 2026-02-28 10:38:59.997586717 +0000 UTC m=+0.158573871 container init 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:39:00 np0005634017 podman[367234]: 2026-02-28 10:39:00.010792421 +0000 UTC m=+0.171779565 container start 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.010 243456 DEBUG nova.compute.manager [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.010 243456 DEBUG oslo_concurrency.lockutils [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.011 243456 DEBUG oslo_concurrency.lockutils [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.012 243456 DEBUG oslo_concurrency.lockutils [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.012 243456 DEBUG nova.compute.manager [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] No waiting events found dispatching network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.012 243456 DEBUG nova.compute.manager [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:39:00 np0005634017 podman[367234]: 2026-02-28 10:39:00.016481152 +0000 UTC m=+0.177468296 container attach 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:39:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.401 243456 DEBUG nova.network.neutron [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated VIF entry in instance network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.401 243456 DEBUG nova.network.neutron [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:00 np0005634017 nova_compute[243452]: 2026-02-28 10:39:00.608 243456 DEBUG oslo_concurrency.lockutils [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:00 np0005634017 lvm[367326]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:39:00 np0005634017 lvm[367326]: VG ceph_vg0 finished
Feb 28 05:39:00 np0005634017 lvm[367328]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:39:00 np0005634017 lvm[367328]: VG ceph_vg1 finished
Feb 28 05:39:00 np0005634017 lvm[367329]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:39:00 np0005634017 lvm[367329]: VG ceph_vg2 finished
Feb 28 05:39:00 np0005634017 infallible_liskov[367251]: {}
Feb 28 05:39:00 np0005634017 systemd[1]: libpod-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope: Deactivated successfully.
Feb 28 05:39:00 np0005634017 systemd[1]: libpod-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope: Consumed 1.301s CPU time.
Feb 28 05:39:00 np0005634017 podman[367332]: 2026-02-28 10:39:00.921797346 +0000 UTC m=+0.030177015 container died 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:39:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98-merged.mount: Deactivated successfully.
Feb 28 05:39:00 np0005634017 podman[367332]: 2026-02-28 10:39:00.982825084 +0000 UTC m=+0.091204713 container remove 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:39:00 np0005634017 systemd[1]: libpod-conmon-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope: Deactivated successfully.
Feb 28 05:39:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 235 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Feb 28 05:39:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:39:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:39:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:39:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:39:01 np0005634017 nova_compute[243452]: 2026-02-28 10:39:01.767 243456 DEBUG nova.network.neutron [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:01 np0005634017 nova_compute[243452]: 2026-02-28 10:39:01.882 243456 INFO nova.compute.manager [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 2.02 seconds to deallocate network for instance.#033[00m
Feb 28 05:39:01 np0005634017 nova_compute[243452]: 2026-02-28 10:39:01.982 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:01 np0005634017 nova_compute[243452]: 2026-02-28 10:39:01.982 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.068 243456 DEBUG oslo_concurrency.processutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.109 243456 DEBUG nova.compute.manager [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.109 243456 DEBUG oslo_concurrency.lockutils [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 DEBUG oslo_concurrency.lockutils [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 DEBUG oslo_concurrency.lockutils [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 DEBUG nova.compute.manager [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 WARNING nova.compute.manager [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:39:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:39:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:39:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216275258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.582 243456 DEBUG oslo_concurrency.processutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.589 243456 DEBUG nova.compute.provider_tree [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.623 243456 DEBUG nova.scheduler.client.report [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.810 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:02 np0005634017 nova_compute[243452]: 2026-02-28 10:39:02.847 243456 INFO nova.scheduler.client.report [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 1546a5a7-9aea-450c-acbe-689f2b660359#033[00m
Feb 28 05:39:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 214 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 185 op/s
Feb 28 05:39:03 np0005634017 nova_compute[243452]: 2026-02-28 10:39:03.101 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:04 np0005634017 nova_compute[243452]: 2026-02-28 10:39:04.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:04 np0005634017 nova_compute[243452]: 2026-02-28 10:39:04.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:04 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Feb 28 05:39:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 850 KiB/s wr, 182 op/s
Feb 28 05:39:05 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 05:39:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:06Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:80:6e 10.100.0.9
Feb 28 05:39:06 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:06Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:80:6e 10.100.0.9
Feb 28 05:39:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 204 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 389 KiB/s wr, 153 op/s
Feb 28 05:39:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 144 op/s
Feb 28 05:39:09 np0005634017 nova_compute[243452]: 2026-02-28 10:39:09.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:09 np0005634017 nova_compute[243452]: 2026-02-28 10:39:09.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Feb 28 05:39:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:11Z|01436|binding|INFO|Releasing lport 1c423105-d23e-4da9-afeb-9405c7fd0060 from this chassis (sb_readonly=0)
Feb 28 05:39:11 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:11Z|01437|binding|INFO|Releasing lport 86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a from this chassis (sb_readonly=0)
Feb 28 05:39:11 np0005634017 nova_compute[243452]: 2026-02-28 10:39:11.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Feb 28 05:39:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:14 np0005634017 podman[367398]: 2026-02-28 10:39:14.148215915 +0000 UTC m=+0.074561412 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 05:39:14 np0005634017 podman[367397]: 2026-02-28 10:39:14.186423857 +0000 UTC m=+0.113564616 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:39:14 np0005634017 nova_compute[243452]: 2026-02-28 10:39:14.352 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275139.3489072, 1546a5a7-9aea-450c-acbe-689f2b660359 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:14 np0005634017 nova_compute[243452]: 2026-02-28 10:39:14.352 243456 INFO nova.compute.manager [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:39:14 np0005634017 nova_compute[243452]: 2026-02-28 10:39:14.387 243456 DEBUG nova.compute.manager [None req-6312bf48-fee0-4fbc-84ee-fd1c9231f0c0 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:14 np0005634017 nova_compute[243452]: 2026-02-28 10:39:14.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:14 np0005634017 nova_compute[243452]: 2026-02-28 10:39:14.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 05:39:16 np0005634017 nova_compute[243452]: 2026-02-28 10:39:16.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.338 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.338 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.362 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.471 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.472 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.481 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.481 243456 INFO nova.compute.claims [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:39:17 np0005634017 nova_compute[243452]: 2026-02-28 10:39:17.617 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947877545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.165 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.172 243456 DEBUG nova.compute.provider_tree [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.189 243456 DEBUG nova.scheduler.client.report [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.210 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.211 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.267 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.268 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.292 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.311 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.418 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.419 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.420 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Creating image(s)#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.450 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.483 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.514 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.519 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.553 243456 DEBUG nova.policy [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.590 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.591 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.591 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.592 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.615 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:39:18 np0005634017 nova_compute[243452]: 2026-02-28 10:39:18.620 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 69899d22-e5ee-410a-8280-57cc79ffa188_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:39:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.057 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 69899d22-e5ee-410a-8280-57cc79ffa188_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.131 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.261 243456 DEBUG nova.objects.instance [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 69899d22-e5ee-410a-8280-57cc79ffa188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.281 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.282 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Ensure instance console log exists: /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.283 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.283 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.284 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.312 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully created port: 05baa6e8-a67d-4e5d-84d7-2d27c726335a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:39:19 np0005634017 nova_compute[243452]: 2026-02-28 10:39:19.858 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully created port: 05cd9b0a-030d-404b-b77a-570992915fae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.030 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully updated port: 05baa6e8-a67d-4e5d-84d7-2d27c726335a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:39:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.175 243456 DEBUG nova.compute.manager [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.177 243456 DEBUG nova.compute.manager [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.178 243456 DEBUG oslo_concurrency.lockutils [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.178 243456 DEBUG oslo_concurrency.lockutils [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.178 243456 DEBUG nova.network.neutron [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.373 243456 DEBUG nova.network.neutron [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.730 243456 DEBUG nova.network.neutron [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.749 243456 DEBUG oslo_concurrency.lockutils [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.776 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully updated port: 05cd9b0a-030d-404b-b77a-570992915fae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.792 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.793 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.793 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:39:21 np0005634017 nova_compute[243452]: 2026-02-28 10:39:21.980 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:39:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:39:23 np0005634017 nova_compute[243452]: 2026-02-28 10:39:23.534 243456 DEBUG nova.compute.manager [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:39:23 np0005634017 nova_compute[243452]: 2026-02-28 10:39:23.535 243456 DEBUG nova.compute.manager [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05cd9b0a-030d-404b-b77a-570992915fae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:39:23 np0005634017 nova_compute[243452]: 2026-02-28 10:39:23.535 243456 DEBUG oslo_concurrency.lockutils [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:39:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.054 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.077 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.078 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance network_info: |[{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.079 243456 DEBUG oslo_concurrency.lockutils [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.079 243456 DEBUG nova.network.neutron [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05cd9b0a-030d-404b-b77a-570992915fae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.086 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start _get_guest_xml network_info=[{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.095 243456 WARNING nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.113 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.114 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.119 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.119 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.120 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.121 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.122 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.122 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.123 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.123 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.124 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.124 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.125 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.125 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.126 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.126 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.132 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:39:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366693204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.744 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.780 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:24 np0005634017 nova_compute[243452]: 2026-02-28 10:39:24.786 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.306 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.307 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.327 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:39:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:39:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620690958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.382 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.384 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.385 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.386 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.387 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.387 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.388 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.389 243456 DEBUG nova.objects.instance [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69899d22-e5ee-410a-8280-57cc79ffa188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.411 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <uuid>69899d22-e5ee-410a-8280-57cc79ffa188</uuid>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <name>instance-0000008b</name>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1192420529</nova:name>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:39:24</nova:creationTime>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:port uuid="05baa6e8-a67d-4e5d-84d7-2d27c726335a">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <nova:port uuid="05cd9b0a-030d-404b-b77a-570992915fae">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe0b:321c" ipVersion="6"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0b:321c" ipVersion="6"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <entry name="serial">69899d22-e5ee-410a-8280-57cc79ffa188</entry>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <entry name="uuid">69899d22-e5ee-410a-8280-57cc79ffa188</entry>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/69899d22-e5ee-410a-8280-57cc79ffa188_disk">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/69899d22-e5ee-410a-8280-57cc79ffa188_disk.config">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b7:6f:db"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <target dev="tap05baa6e8-a6"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:0b:32:1c"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <target dev="tap05cd9b0a-03"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/console.log" append="off"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:39:25 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:39:25 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:39:25 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:39:25 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.413 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Preparing to wait for external event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.413 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.413 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Preparing to wait for external event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.415 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.415 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.416 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.416 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.417 243456 DEBUG os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.418 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.419 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.420 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.421 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05baa6e8-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.426 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05baa6e8-a6, col_values=(('external_ids', {'iface-id': '05baa6e8-a67d-4e5d-84d7-2d27c726335a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:6f:db', 'vm-uuid': '69899d22-e5ee-410a-8280-57cc79ffa188'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:25 np0005634017 NetworkManager[49805]: <info>  [1772275165.4292] manager: (tap05baa6e8-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.437 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.438 243456 INFO nova.compute.claims [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.446 243456 INFO os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6')#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.448 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.448 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.450 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.451 243456 DEBUG os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.452 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.453 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.457 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05cd9b0a-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.457 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05cd9b0a-03, col_values=(('external_ids', {'iface-id': '05cd9b0a-030d-404b-b77a-570992915fae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:32:1c', 'vm-uuid': '69899d22-e5ee-410a-8280-57cc79ffa188'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 NetworkManager[49805]: <info>  [1772275165.4617] manager: (tap05cd9b0a-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.467 243456 INFO os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03')#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.581 243456 DEBUG nova.network.neutron [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updated VIF entry in instance network info cache for port 05cd9b0a-030d-404b-b77a-570992915fae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.583 243456 DEBUG nova.network.neutron [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.592 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.592 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.593 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:b7:6f:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.593 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:0b:32:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.594 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Using config drive#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.625 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.633 243456 DEBUG oslo_concurrency.lockutils [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:25 np0005634017 nova_compute[243452]: 2026-02-28 10:39:25.680 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.020 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Creating config drive at /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.030 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjqx6p9zh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.173 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjqx6p9zh" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.204 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.208 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371558887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.252 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.259 243456 DEBUG nova.compute.provider_tree [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.276 243456 DEBUG nova.scheduler.client.report [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.326 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.327 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.483 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.484 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deleting local config drive /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config because it was imported into RBD.#033[00m
Feb 28 05:39:26 np0005634017 NetworkManager[49805]: <info>  [1772275166.5415] manager: (tap05baa6e8-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/601)
Feb 28 05:39:26 np0005634017 kernel: tap05baa6e8-a6: entered promiscuous mode
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01438|binding|INFO|Claiming lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a for this chassis.
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01439|binding|INFO|05baa6e8-a67d-4e5d-84d7-2d27c726335a: Claiming fa:16:3e:b7:6f:db 10.100.0.13
Feb 28 05:39:26 np0005634017 kernel: tap05cd9b0a-03: entered promiscuous mode
Feb 28 05:39:26 np0005634017 NetworkManager[49805]: <info>  [1772275166.5595] manager: (tap05cd9b0a-03): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.561 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.561 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01440|binding|INFO|Setting lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a ovn-installed in OVS
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01441|if_status|INFO|Dropped 1 log messages in last 35 seconds (most recently, 35 seconds ago) due to excessive rate
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01442|if_status|INFO|Not updating pb chassis for 05cd9b0a-030d-404b-b77a-570992915fae now as sb is readonly
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 systemd-udevd[367794]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:39:26 np0005634017 systemd-udevd[367795]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 NetworkManager[49805]: <info>  [1772275166.5799] device (tap05cd9b0a-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:39:26 np0005634017 NetworkManager[49805]: <info>  [1772275166.5831] device (tap05cd9b0a-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:39:26 np0005634017 NetworkManager[49805]: <info>  [1772275166.5838] device (tap05baa6e8-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:39:26 np0005634017 NetworkManager[49805]: <info>  [1772275166.5844] device (tap05baa6e8-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01443|binding|INFO|Claiming lport 05cd9b0a-030d-404b-b77a-570992915fae for this chassis.
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01444|binding|INFO|05cd9b0a-030d-404b-b77a-570992915fae: Claiming fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01445|binding|INFO|Setting lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a up in Southbound
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.610 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:6f:db 10.100.0.13'], port_security=['fa:16:3e:b7:6f:db 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05baa6e8-a67d-4e5d-84d7-2d27c726335a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01446|binding|INFO|Setting lport 05cd9b0a-030d-404b-b77a-570992915fae ovn-installed in OVS
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.611 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05baa6e8-a67d-4e5d-84d7-2d27c726335a in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee bound to our chassis#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.612 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 386ad93c-8128-414d-bc97-7c3f009f2aee#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 systemd-machined[209480]: New machine qemu-172-instance-0000008b.
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.630 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e187b48-c520-42db-b8b7-11cbc560778a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 systemd[1]: Started Virtual Machine qemu-172-instance-0000008b.
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.649 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], port_security=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0b:321c/64 2001:db8::f816:3eff:fe0b:321c/64', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05cd9b0a-030d-404b-b77a-570992915fae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:26Z|01447|binding|INFO|Setting lport 05cd9b0a-030d-404b-b77a-570992915fae up in Southbound
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.650 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.659 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[77ece079-88ed-41eb-a7e6-8402f9e07445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.662 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b52427-09b8-4d18-b2b6-7cee077bb519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.694 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cd04589a-e7ed-4e14-900c-bc3b21b6176e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[852d7c2d-6600-4519-ae92-8e6e01f12b19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367813, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.733 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[19b5bae6-9762-4fc9-b748-eb9c95503072]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663882, 'tstamp': 663882}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367815, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663886, 'tstamp': 663886}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367815, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.741 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.788 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386ad93c-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.788 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.789 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap386ad93c-80, col_values=(('external_ids', {'iface-id': '1c423105-d23e-4da9-afeb-9405c7fd0060'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.789 243456 DEBUG nova.policy [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.789 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.791 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05cd9b0a-030d-404b-b77a-570992915fae in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.792 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 685f3a92-853c-417a-a00b-ba5c70b02f2d#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.805 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0892e985-baca-46cd-af7d-a663ea25830c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[80d6b0be-c621-4e73-b8c7-3be7194387ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.840 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e830fa-64f7-4967-ba70-02c1422d0e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.862 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9663b3ec-a655-470b-bbdb-ad37cd492114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2521eae7-ef7b-45e2-999f-cbe4b84f0764]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367821, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.894 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48049375-0f20-42c0-adc1-5f8af4c88726]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap685f3a92-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663976, 'tstamp': 663976}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367822, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.896 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.899 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685f3a92-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.899 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.899 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap685f3a92-80, col_values=(('external_ids', {'iface-id': '86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.900 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.931 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.932 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.933 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Creating image(s)#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.954 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:26 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.978 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:26.999 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.004 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.045 243456 DEBUG nova.compute.manager [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.046 243456 DEBUG oslo_concurrency.lockutils [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.046 243456 DEBUG oslo_concurrency.lockutils [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.047 243456 DEBUG oslo_concurrency.lockutils [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.047 243456 DEBUG nova.compute.manager [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Processing event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.070 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.070 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.071 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.071 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.092 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.096 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.302 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275167.3015208, 69899d22-e5ee-410a-8280-57cc79ffa188 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.303 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Started (Lifecycle Event)#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.344 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.350 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275167.3038695, 69899d22-e5ee-410a-8280-57cc79ffa188 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.350 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.389 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.465 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.497 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.551 243456 DEBUG nova.objects.instance [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.568 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.568 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Ensure instance console log exists: /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.569 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.569 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:27 np0005634017 nova_compute[243452]: 2026-02-28 10:39:27.570 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:28 np0005634017 nova_compute[243452]: 2026-02-28 10:39:28.089 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Successfully created port: 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:39:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 288 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Feb 28 05:39:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:39:29
Feb 28 05:39:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:39:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:39:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta']
Feb 28 05:39:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.708 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.709 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.710 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.710 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.711 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No event matching network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a in dict_keys([('network-vif-plugged', '05cd9b0a-030d-404b-b77a-570992915fae')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.712 243456 WARNING nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.712 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.713 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.714 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.714 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.715 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Processing event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.716 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.716 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.717 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.718 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.718 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.719 243456 WARNING nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.721 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.726 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275169.72594, 69899d22-e5ee-410a-8280-57cc79ffa188 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.727 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.731 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.736 243456 INFO nova.virt.libvirt.driver [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance spawned successfully.#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.737 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.748 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.753 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.808 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.811 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Successfully updated port: 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.815 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.816 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.816 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.817 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.818 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.818 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.824 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.824 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.824 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.884 243456 INFO nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 11.47 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.885 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.953 243456 INFO nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 12.53 seconds to build instance.#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.974 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:29 np0005634017 nova_compute[243452]: 2026-02-28 10:39:29.986 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:39:30 np0005634017 nova_compute[243452]: 2026-02-28 10:39:30.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:39:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:39:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 3.6 MiB/s wr, 69 op/s
Feb 28 05:39:31 np0005634017 nova_compute[243452]: 2026-02-28 10:39:31.880 243456 DEBUG nova.compute.manager [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:31 np0005634017 nova_compute[243452]: 2026-02-28 10:39:31.881 243456 DEBUG nova.compute.manager [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing instance network info cache due to event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:39:31 np0005634017 nova_compute[243452]: 2026-02-28 10:39:31.882 243456 DEBUG oslo_concurrency.lockutils [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:31 np0005634017 nova_compute[243452]: 2026-02-28 10:39:31.994 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.243 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.244 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance network_info: |[{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.246 243456 DEBUG oslo_concurrency.lockutils [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.246 243456 DEBUG nova.network.neutron [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.249 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start _get_guest_xml network_info=[{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.255 243456 WARNING nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.260 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.261 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.264 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.265 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.266 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.266 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.267 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.267 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.268 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.268 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.269 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.269 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.269 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.270 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.270 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.271 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.275 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:32 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:39:32 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252349400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.823 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.869 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:32 np0005634017 nova_compute[243452]: 2026-02-28 10:39:32.876 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 95 op/s
Feb 28 05:39:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:39:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397351184' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.482 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.484 243456 DEBUG nova.virt.libvirt.vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823103484',display_name='tempest-TestNetworkBasicOps-server-823103484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823103484',id=140,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlRPDBZ/UdFU5xhRLg2YsbBBqME/rNpkkZFa7kTxtUL7bLINgILV0oznkIgnASqfKbT8PhHg4HY5yFJH+rTHkD1QXlc7NRmyoVKdQxyQnQqn2gOVqEPju9kBsy9tmaBqQ==',key_name='tempest-TestNetworkBasicOps-1428223995',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-d46lhxih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:26Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=023e788b-dd3e-4a85-a2e5-e97ad24d6ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.485 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.486 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.488 243456 DEBUG nova.objects.instance [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.533 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <uuid>023e788b-dd3e-4a85-a2e5-e97ad24d6ff1</uuid>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <name>instance-0000008c</name>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-823103484</nova:name>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:39:32</nova:creationTime>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <nova:port uuid="247ab5af-a0a0-4d02-93ea-d7dbed82f571">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <entry name="serial">023e788b-dd3e-4a85-a2e5-e97ad24d6ff1</entry>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <entry name="uuid">023e788b-dd3e-4a85-a2e5-e97ad24d6ff1</entry>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7f:58:95"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <target dev="tap247ab5af-a0"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/console.log" append="off"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:39:33 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:39:33 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:39:33 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:39:33 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.537 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Preparing to wait for external event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.538 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.539 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.539 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.540 243456 DEBUG nova.virt.libvirt.vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823103484',display_name='tempest-TestNetworkBasicOps-server-823103484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823103484',id=140,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlRPDBZ/UdFU5xhRLg2YsbBBqME/rNpkkZFa7kTxtUL7bLINgILV0oznkIgnASqfKbT8PhHg4HY5yFJH+rTHkD1QXlc7NRmyoVKdQxyQnQqn2gOVqEPju9kBsy9tmaBqQ==',key_name='tempest-TestNetworkBasicOps-1428223995',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-d46lhxih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:26Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=023e788b-dd3e-4a85-a2e5-e97ad24d6ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.540 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.541 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.541 243456 DEBUG os_vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.542 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.542 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.543 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.545 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247ab5af-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.546 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap247ab5af-a0, col_values=(('external_ids', {'iface-id': '247ab5af-a0a0-4d02-93ea-d7dbed82f571', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:58:95', 'vm-uuid': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:33 np0005634017 NetworkManager[49805]: <info>  [1772275173.5492] manager: (tap247ab5af-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.552 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.554 243456 INFO os_vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0')#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.654 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.655 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.655 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:7f:58:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.656 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Using config drive#033[00m
Feb 28 05:39:33 np0005634017 nova_compute[243452]: 2026-02-28 10:39:33.678 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.004 243456 DEBUG nova.network.neutron [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updated VIF entry in instance network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.004 243456 DEBUG nova.network.neutron [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.088 243456 DEBUG oslo_concurrency.lockutils [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.118 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Creating config drive at /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.123 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wuyk6i3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.262 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wuyk6i3" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.327 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.332 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.486 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.488 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deleting local config drive /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config because it was imported into RBD.#033[00m
Feb 28 05:39:34 np0005634017 NetworkManager[49805]: <info>  [1772275174.5656] manager: (tap247ab5af-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Feb 28 05:39:34 np0005634017 kernel: tap247ab5af-a0: entered promiscuous mode
Feb 28 05:39:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:34Z|01448|binding|INFO|Claiming lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 for this chassis.
Feb 28 05:39:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:34Z|01449|binding|INFO|247ab5af-a0a0-4d02-93ea-d7dbed82f571: Claiming fa:16:3e:7f:58:95 10.100.0.4
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:34Z|01450|binding|INFO|Setting lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 ovn-installed in OVS
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:34 np0005634017 systemd-udevd[368165]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.598 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:34Z|01451|binding|INFO|Setting lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 up in Southbound
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.600 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 bound to our chassis#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.601 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8#033[00m
Feb 28 05:39:34 np0005634017 NetworkManager[49805]: <info>  [1772275174.6045] device (tap247ab5af-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:39:34 np0005634017 NetworkManager[49805]: <info>  [1772275174.6054] device (tap247ab5af-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:39:34 np0005634017 systemd-machined[209480]: New machine qemu-173-instance-0000008c.
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f49b9ae2-e4c2-46c4-97ac-9c8ba695bd12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.624 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72eb3f47-a1 in ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.626 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72eb3f47-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.626 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cec91d5e-2ed7-4db5-9e6c-f63985397d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.628 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[832a625e-2862-463c-b272-b75aa6283886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:34 np0005634017 systemd[1]: Started Virtual Machine qemu-173-instance-0000008c.
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.645 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b9d514-664e-46fb-80fe-842dfc5b578e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9741acbc-3486-4717-b39f-631953ddac28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.703 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f05ad-4c4a-4e4b-a516-7fc4443b3a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 NetworkManager[49805]: <info>  [1772275174.7108] manager: (tap72eb3f47-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/605)
Feb 28 05:39:34 np0005634017 systemd-udevd[368169]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b587b8f0-e071-4fc9-9cd7-0d3ec893df2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.754 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7241af1e-e032-4e14-aae6-15fe1b9c6ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.759 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b7295ee6-2716-46aa-9729-28e5e89eb186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 NetworkManager[49805]: <info>  [1772275174.7911] device (tap72eb3f47-a0): carrier: link connected
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.798 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bf26987d-3209-4eb5-a778-4c1df0a52ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.820 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdd45b9-beaa-417e-ae5a-b07ae835bbbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72eb3f47-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:eb:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 430], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668207, 'reachable_time': 24149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368201, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0149f7-200e-49d4-9e6a-2994ca2cc848]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:eb92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668207, 'tstamp': 668207}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368202, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bde6892e-7ac0-4a8b-a2c2-afc639f1c9a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72eb3f47-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:eb:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 430], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668207, 'reachable_time': 24149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368203, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.896 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8bb65d-da4b-49da-b3e5-2c7a3f8af9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.971 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6544009d-88ae-4ea1-b9c6-68275d45828b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.973 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72eb3f47-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.974 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.975 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72eb3f47-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:34 np0005634017 NetworkManager[49805]: <info>  [1772275174.9780] manager: (tap72eb3f47-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Feb 28 05:39:34 np0005634017 kernel: tap72eb3f47-a0: entered promiscuous mode
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:34 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.990 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72eb3f47-a0, col_values=(('external_ids', {'iface-id': 'a55b6aa3-f8a4-4c13-b5f3-0d6d78f0bfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:34Z|01452|binding|INFO|Releasing lport a55b6aa3-f8a4-4c13-b5f3-0d6d78f0bfc1 from this chassis (sb_readonly=0)
Feb 28 05:39:34 np0005634017 nova_compute[243452]: 2026-02-28 10:39:34.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.003 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.004 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbe62b6-3fc0-484d-a96e-3311ad64cd4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.006 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-72eb3f47-a6c4-4982-9972-2a47fea8b4a8
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.pid.haproxy
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 72eb3f47-a6c4-4982-9972-2a47fea8b4a8
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:39:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.007 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'env', 'PROCESS_TAG=haproxy-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:39:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.316 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275175.3156798, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.318 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Started (Lifecycle Event)#033[00m
Feb 28 05:39:35 np0005634017 podman[368276]: 2026-02-28 10:39:35.410025254 +0000 UTC m=+0.055329458 container create 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:39:35 np0005634017 systemd[1]: Started libpod-conmon-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope.
Feb 28 05:39:35 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:39:35 np0005634017 podman[368276]: 2026-02-28 10:39:35.377832792 +0000 UTC m=+0.023136996 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:39:35 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b41b330d4269ed8fc4838007e0e8ccc448467ce7b729453762d243b302d938/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.482 243456 DEBUG nova.compute.manager [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.483 243456 DEBUG nova.compute.manager [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.484 243456 DEBUG oslo_concurrency.lockutils [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.484 243456 DEBUG oslo_concurrency.lockutils [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.484 243456 DEBUG nova.network.neutron [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:35 np0005634017 podman[368276]: 2026-02-28 10:39:35.488857296 +0000 UTC m=+0.134161500 container init 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:39:35 np0005634017 podman[368276]: 2026-02-28 10:39:35.493902599 +0000 UTC m=+0.139206803 container start 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.500 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.506 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275175.3160083, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.507 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:39:35 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : New worker (368297) forked
Feb 28 05:39:35 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : Loading success.
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.575 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.582 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:39:35 np0005634017 nova_compute[243452]: 2026-02-28 10:39:35.620 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.150 243456 DEBUG nova.compute.manager [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.151 243456 DEBUG oslo_concurrency.lockutils [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.152 243456 DEBUG oslo_concurrency.lockutils [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.152 243456 DEBUG oslo_concurrency.lockutils [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.153 243456 DEBUG nova.compute.manager [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Processing event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.154 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.160 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275176.16023, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.161 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.163 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.173 243456 INFO nova.virt.libvirt.driver [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance spawned successfully.#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.174 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.187 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.196 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.205 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.206 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.207 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.208 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.209 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.210 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.280 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.302 243456 INFO nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 9.37 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.303 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.380 243456 INFO nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 11.01 seconds to build instance.#033[00m
Feb 28 05:39:36 np0005634017 nova_compute[243452]: 2026-02-28 10:39:36.410 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.240 243456 DEBUG nova.compute.manager [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.242 243456 DEBUG oslo_concurrency.lockutils [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.242 243456 DEBUG oslo_concurrency.lockutils [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.243 243456 DEBUG oslo_concurrency.lockutils [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.244 243456 DEBUG nova.compute.manager [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] No waiting events found dispatching network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.244 243456 WARNING nova.compute.manager [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received unexpected event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.287 243456 DEBUG nova.network.neutron [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updated VIF entry in instance network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.288 243456 DEBUG nova.network.neutron [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.319 243456 DEBUG oslo_concurrency.lockutils [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:38 np0005634017 nova_compute[243452]: 2026-02-28 10:39:38.549 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 28 05:39:39 np0005634017 nova_compute[243452]: 2026-02-28 10:39:39.207 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:39 np0005634017 nova_compute[243452]: 2026-02-28 10:39:39.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:40 np0005634017 nova_compute[243452]: 2026-02-28 10:39:40.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:40 np0005634017 nova_compute[243452]: 2026-02-28 10:39:40.335 243456 DEBUG nova.compute.manager [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:40 np0005634017 nova_compute[243452]: 2026-02-28 10:39:40.336 243456 DEBUG nova.compute.manager [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing instance network info cache due to event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:39:40 np0005634017 nova_compute[243452]: 2026-02-28 10:39:40.336 243456 DEBUG oslo_concurrency.lockutils [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:40 np0005634017 nova_compute[243452]: 2026-02-28 10:39:40.337 243456 DEBUG oslo_concurrency.lockutils [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:40 np0005634017 nova_compute[243452]: 2026-02-28 10:39:40.337 243456 DEBUG nova.network.neutron [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:40Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:6f:db 10.100.0.13
Feb 28 05:39:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:40Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:6f:db 10.100.0.13
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 331 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001559519042740221 of space, bias 1.0, pg target 0.4678557128220663 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939441600177813 of space, bias 1.0, pg target 0.7481832480053344 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.299579834731566e-07 of space, bias 4.0, pg target 0.0008759495801677878 quantized to 16 (current 16)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:39:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:39:41 np0005634017 nova_compute[243452]: 2026-02-28 10:39:41.717 243456 DEBUG nova.network.neutron [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updated VIF entry in instance network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:41 np0005634017 nova_compute[243452]: 2026-02-28 10:39:41.719 243456 DEBUG nova.network.neutron [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:41 np0005634017 nova_compute[243452]: 2026-02-28 10:39:41.744 243456 DEBUG oslo_concurrency.lockutils [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:42 np0005634017 nova_compute[243452]: 2026-02-28 10:39:42.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:42 np0005634017 nova_compute[243452]: 2026-02-28 10:39:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:42 np0005634017 nova_compute[243452]: 2026-02-28 10:39:42.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:39:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 347 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 164 op/s
Feb 28 05:39:43 np0005634017 nova_compute[243452]: 2026-02-28 10:39:43.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:43 np0005634017 nova_compute[243452]: 2026-02-28 10:39:43.552 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:44 np0005634017 nova_compute[243452]: 2026-02-28 10:39:44.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Feb 28 05:39:45 np0005634017 podman[368308]: 2026-02-28 10:39:45.145035749 +0000 UTC m=+0.070985981 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:39:45 np0005634017 podman[368307]: 2026-02-28 10:39:45.18851947 +0000 UTC m=+0.116652364 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 05:39:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:39:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/82606068' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:39:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:39:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/82606068' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:39:46 np0005634017 nova_compute[243452]: 2026-02-28 10:39:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:46 np0005634017 nova_compute[243452]: 2026-02-28 10:39:46.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:46 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 05:39:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Feb 28 05:39:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:47Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:58:95 10.100.0.4
Feb 28 05:39:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:47Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:58:95 10.100.0.4
Feb 28 05:39:48 np0005634017 nova_compute[243452]: 2026-02-28 10:39:48.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:48 np0005634017 nova_compute[243452]: 2026-02-28 10:39:48.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.596 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.597 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.597 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.597 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:49 np0005634017 nova_compute[243452]: 2026-02-28 10:39:49.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 389 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.293 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.320 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.321 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.322 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104048542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:52 np0005634017 nova_compute[243452]: 2026-02-28 10:39:52.980 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.066 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.066 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.074 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.074 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.263 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.265 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2976MB free_disk=59.851134022697806GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.265 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.265 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f6deb920-f186-43f5-9ea0-642f4a6e830e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 69899d22-e5ee-410a-8280-57cc79ffa188 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.360 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.375 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.376 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.388 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.407 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.468 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:53 np0005634017 nova_compute[243452]: 2026-02-28 10:39:53.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400991154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.095 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.102 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.120 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.144 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.328 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.329 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.574 243456 DEBUG nova.compute.manager [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.575 243456 DEBUG nova.compute.manager [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.578 243456 DEBUG oslo_concurrency.lockutils [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.579 243456 DEBUG oslo_concurrency.lockutils [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.579 243456 DEBUG nova.network.neutron [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.673 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.673 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.674 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.674 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.675 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.678 243456 INFO nova.compute.manager [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Terminating instance#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.680 243456 DEBUG nova.compute.manager [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:39:54 np0005634017 kernel: tap05baa6e8-a6 (unregistering): left promiscuous mode
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.732 243456 INFO nova.compute.manager [None req-a6c02d16-1f2d-465c-987a-532b6922013b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Get console output#033[00m
Feb 28 05:39:54 np0005634017 NetworkManager[49805]: <info>  [1772275194.7347] device (tap05baa6e8-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.744 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:39:54 np0005634017 kernel: tap05cd9b0a-03 (unregistering): left promiscuous mode
Feb 28 05:39:54 np0005634017 NetworkManager[49805]: <info>  [1772275194.7563] device (tap05cd9b0a-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:39:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:54Z|01453|binding|INFO|Releasing lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a from this chassis (sb_readonly=0)
Feb 28 05:39:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:54Z|01454|binding|INFO|Setting lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a down in Southbound
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.802 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:54Z|01455|binding|INFO|Removing iface tap05baa6e8-a6 ovn-installed in OVS
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.814 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:6f:db 10.100.0.13'], port_security=['fa:16:3e:b7:6f:db 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05baa6e8-a67d-4e5d-84d7-2d27c726335a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.815 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05baa6e8-a67d-4e5d-84d7-2d27c726335a in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee unbound from our chassis#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.817 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 386ad93c-8128-414d-bc97-7c3f009f2aee#033[00m
Feb 28 05:39:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:54Z|01456|binding|INFO|Releasing lport 05cd9b0a-030d-404b-b77a-570992915fae from this chassis (sb_readonly=0)
Feb 28 05:39:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:54Z|01457|binding|INFO|Setting lport 05cd9b0a-030d-404b-b77a-570992915fae down in Southbound
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:54Z|01458|binding|INFO|Removing iface tap05cd9b0a-03 ovn-installed in OVS
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.827 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], port_security=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0b:321c/64 2001:db8::f816:3eff:fe0b:321c/64', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05cd9b0a-030d-404b-b77a-570992915fae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.838 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ea04e9d5-7bba-4340-bdba-4a46eafd8365]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Feb 28 05:39:54 np0005634017 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Consumed 12.605s CPU time.
Feb 28 05:39:54 np0005634017 systemd-machined[209480]: Machine qemu-172-instance-0000008b terminated.
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.864 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf328d7-413f-4d51-8783-3abdb9ec9457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.868 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6079e890-7675-4f7b-a2ee-07624a6c107c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.890 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61b0da5d-d28a-430f-b85b-56f48cc9ce0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.908 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f86e5fef-5acd-44e9-b2c1-001bf1621713]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368411, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.928 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4923e28f-858a-4e0e-8b85-81147a673019]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663882, 'tstamp': 663882}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368421, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663886, 'tstamp': 663886}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368421, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.930 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.932 243456 INFO nova.virt.libvirt.driver [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance destroyed successfully.#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.933 243456 DEBUG nova.objects.instance [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 69899d22-e5ee-410a-8280-57cc79ffa188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.940 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386ad93c-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.942 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.943 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap386ad93c-80, col_values=(('external_ids', {'iface-id': '1c423105-d23e-4da9-afeb-9405c7fd0060'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.943 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.945 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05cd9b0a-030d-404b-b77a-570992915fae in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.946 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 685f3a92-853c-417a-a00b-ba5c70b02f2d#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.951 243456 DEBUG nova.virt.libvirt.vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:39:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:39:29Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.953 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.954 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.955 243456 DEBUG os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.958 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.958 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05baa6e8-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.962 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9549e1-2f66-4121-b99f-2e33c1f89a49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.971 243456 INFO os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6')#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.972 243456 DEBUG nova.virt.libvirt.vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:39:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:39:29Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.972 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.973 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.974 243456 DEBUG os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.975 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05cd9b0a-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:39:54 np0005634017 nova_compute[243452]: 2026-02-28 10:39:54.982 243456 INFO os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03')#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.984 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[82f9c21a-33a1-4ffe-a384-381a2b4052db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.987 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f62c852-4940-46fd-92fe-93411a81cbc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.012 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[05cce998-b841-4280-ae1e-eb8f56640d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8b8ac8-3aee-47e3-b94f-9dc95c17ecb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368461, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[493253ce-816a-4080-848a-f810ee7502a9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap685f3a92-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663976, 'tstamp': 663976}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368462, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.046 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685f3a92-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap685f3a92-80, col_values=(('external_ids', {'iface-id': '86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:39:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 571 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.260 243456 INFO nova.virt.libvirt.driver [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deleting instance files /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188_del#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.261 243456 INFO nova.virt.libvirt.driver [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deletion of /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188_del complete#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.321 243456 INFO nova.compute.manager [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.322 243456 DEBUG oslo.service.loopingcall [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.323 243456 DEBUG nova.compute.manager [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:39:55 np0005634017 nova_compute[243452]: 2026-02-28 10:39:55.323 243456 DEBUG nova.network.neutron [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:39:55 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.332 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:56Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:58:95 10.100.0.4
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.415 243456 DEBUG nova.network.neutron [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updated VIF entry in instance network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.416 243456 DEBUG nova.network.neutron [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.441 243456 DEBUG oslo_concurrency.lockutils [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-unplugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 WARNING nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-unplugged-05cd9b0a-030d-404b-b77a-570992915fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05cd9b0a-030d-404b-b77a-570992915fae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 WARNING nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-deleted-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 INFO nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Neutron deleted interface 05baa6e8-a67d-4e5d-84d7-2d27c726335a; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 DEBUG nova.network.neutron [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.790 243456 DEBUG nova.network.neutron [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.813 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Detach interface failed, port_id=05baa6e8-a67d-4e5d-84d7-2d27c726335a, reason: Instance 69899d22-e5ee-410a-8280-57cc79ffa188 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.822 243456 INFO nova.compute.manager [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 1.50 seconds to deallocate network for instance.#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.885 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.886 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.891 243456 DEBUG nova.compute.manager [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.892 243456 DEBUG nova.compute.manager [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing instance network info cache due to event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.893 243456 DEBUG oslo_concurrency.lockutils [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.893 243456 DEBUG oslo_concurrency.lockutils [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.894 243456 DEBUG nova.network.neutron [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.948 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.948 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.950 243456 INFO nova.compute.manager [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Terminating instance#033[00m
Feb 28 05:39:56 np0005634017 nova_compute[243452]: 2026-02-28 10:39:56.951 243456 DEBUG nova.compute.manager [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.003 243456 DEBUG oslo_concurrency.processutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:57 np0005634017 kernel: tap247ab5af-a0 (unregistering): left promiscuous mode
Feb 28 05:39:57 np0005634017 NetworkManager[49805]: <info>  [1772275197.0223] device (tap247ab5af-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:39:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Feb 28 05:39:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:57Z|01459|binding|INFO|Releasing lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 from this chassis (sb_readonly=0)
Feb 28 05:39:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:57Z|01460|binding|INFO|Setting lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 down in Southbound
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:57Z|01461|binding|INFO|Removing iface tap247ab5af-a0 ovn-installed in OVS
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.065 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.066 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 unbound from our chassis#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.068 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02c3cf29-e484-4421-ae95-66764a2339f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.069 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 namespace which is not needed anymore#033[00m
Feb 28 05:39:57 np0005634017 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Feb 28 05:39:57 np0005634017 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Consumed 12.696s CPU time.
Feb 28 05:39:57 np0005634017 systemd-machined[209480]: Machine qemu-173-instance-0000008c terminated.
Feb 28 05:39:57 np0005634017 kernel: tap247ab5af-a0: entered promiscuous mode
Feb 28 05:39:57 np0005634017 NetworkManager[49805]: <info>  [1772275197.1746] manager: (tap247ab5af-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Feb 28 05:39:57 np0005634017 systemd-udevd[368400]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:39:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:57Z|01462|binding|INFO|Claiming lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 for this chassis.
Feb 28 05:39:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:57Z|01463|binding|INFO|247ab5af-a0a0-4d02-93ea-d7dbed82f571: Claiming fa:16:3e:7f:58:95 10.100.0.4
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 kernel: tap247ab5af-a0 (unregistering): left promiscuous mode
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.188 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:57Z|01464|binding|INFO|Releasing lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 from this chassis (sb_readonly=0)
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.199 243456 INFO nova.virt.libvirt.driver [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance destroyed successfully.#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.200 243456 DEBUG nova.objects.instance [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.205 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.218 243456 DEBUG nova.virt.libvirt.vif [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823103484',display_name='tempest-TestNetworkBasicOps-server-823103484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823103484',id=140,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlRPDBZ/UdFU5xhRLg2YsbBBqME/rNpkkZFa7kTxtUL7bLINgILV0oznkIgnASqfKbT8PhHg4HY5yFJH+rTHkD1QXlc7NRmyoVKdQxyQnQqn2gOVqEPju9kBsy9tmaBqQ==',key_name='tempest-TestNetworkBasicOps-1428223995',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:39:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-d46lhxih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:39:36Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=023e788b-dd3e-4a85-a2e5-e97ad24d6ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.218 243456 DEBUG nova.network.os_vif_util [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.219 243456 DEBUG nova.network.os_vif_util [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.219 243456 DEBUG os_vif [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.221 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : haproxy version is 2.8.14-c23fe91
Feb 28 05:39:57 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : path to executable is /usr/sbin/haproxy
Feb 28 05:39:57 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [WARNING]  (368295) : Exiting Master process...
Feb 28 05:39:57 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [WARNING]  (368295) : Exiting Master process...
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.221 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247ab5af-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [ALERT]    (368295) : Current worker (368297) exited with code 143 (Terminated)
Feb 28 05:39:57 np0005634017 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [WARNING]  (368295) : All workers exited. Exiting... (0)
Feb 28 05:39:57 np0005634017 systemd[1]: libpod-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope: Deactivated successfully.
Feb 28 05:39:57 np0005634017 conmon[368291]: conmon 983d5540a162e6b997ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope/container/memory.events
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.226 243456 INFO os_vif [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0')#033[00m
Feb 28 05:39:57 np0005634017 podman[368503]: 2026-02-28 10:39:57.233813132 +0000 UTC m=+0.070319902 container died 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:39:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375-userdata-shm.mount: Deactivated successfully.
Feb 28 05:39:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-05b41b330d4269ed8fc4838007e0e8ccc448467ce7b729453762d243b302d938-merged.mount: Deactivated successfully.
Feb 28 05:39:57 np0005634017 podman[368503]: 2026-02-28 10:39:57.273263989 +0000 UTC m=+0.109770759 container cleanup 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:39:57 np0005634017 systemd[1]: libpod-conmon-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope: Deactivated successfully.
Feb 28 05:39:57 np0005634017 podman[368554]: 2026-02-28 10:39:57.3439183 +0000 UTC m=+0.051800178 container remove 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a34ccbd7-23f5-4b93-aa72-227316196366]: (4, ('Sat Feb 28 10:39:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 (983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375)\n983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375\nSat Feb 28 10:39:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 (983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375)\n983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b171db2-f4ab-4f16-af96-593b9b4e841a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.353 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72eb3f47-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 kernel: tap72eb3f47-a0: left promiscuous mode
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.368 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f99350-1799-4b6c-9bdf-524d0acf36da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00324aaa-1385-426d-8388-3c87eb95e7aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbbae86-44de-46c1-b1af-7894c2e6a8af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.413 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[747c4707-a50e-4fee-afec-dabaa8334023]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668198, 'reachable_time': 31790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368569, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 systemd[1]: run-netns-ovnmeta\x2d72eb3f47\x2da6c4\x2d4982\x2d9972\x2d2a47fea8b4a8.mount: Deactivated successfully.
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.418 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.419 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e5204d31-53df-4599-b046-e30a1b0d391a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.420 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 unbound from our chassis#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.421 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a212b197-37ac-45bc-879f-484cd8af8092]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.422 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 unbound from our chassis#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.423 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e333786-bd4c-47c2-99b5-0c5d5ffa0cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293911241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.562 243456 DEBUG oslo_concurrency.processutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.570 243456 DEBUG nova.compute.provider_tree [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.579 243456 INFO nova.virt.libvirt.driver [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deleting instance files /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_del#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.581 243456 INFO nova.virt.libvirt.driver [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deletion of /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_del complete#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.607 243456 DEBUG nova.scheduler.client.report [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.636 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.640 243456 INFO nova.compute.manager [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.640 243456 DEBUG oslo.service.loopingcall [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.641 243456 DEBUG nova.compute.manager [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.641 243456 DEBUG nova.network.neutron [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.663 243456 INFO nova.scheduler.client.report [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 69899d22-e5ee-410a-8280-57cc79ffa188#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.879 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:57 np0005634017 nova_compute[243452]: 2026-02-28 10:39:57.938 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.273 243456 DEBUG nova.network.neutron [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updated VIF entry in instance network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.274 243456 DEBUG nova.network.neutron [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.303 243456 DEBUG oslo_concurrency.lockutils [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.353 243456 DEBUG nova.network.neutron [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.376 243456 INFO nova.compute.manager [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 0.74 seconds to deallocate network for instance.#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.429 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.431 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.510 243456 DEBUG oslo_concurrency.processutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.701 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.702 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.703 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.704 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.704 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.707 243456 INFO nova.compute.manager [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Terminating instance#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.709 243456 DEBUG nova.compute.manager [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:39:58 np0005634017 kernel: tap5bc174e9-3e (unregistering): left promiscuous mode
Feb 28 05:39:58 np0005634017 NetworkManager[49805]: <info>  [1772275198.7684] device (tap5bc174e9-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:39:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:58Z|01465|binding|INFO|Releasing lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 from this chassis (sb_readonly=0)
Feb 28 05:39:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:58Z|01466|binding|INFO|Setting lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 down in Southbound
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.779 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:58Z|01467|binding|INFO|Removing iface tap5bc174e9-3e ovn-installed in OVS
Feb 28 05:39:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.790 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:80:6e 10.100.0.9'], port_security=['fa:16:3e:40:80:6e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5bc174e9-3e24-499b-8f52-0bcff974bf56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.792 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5bc174e9-3e24-499b-8f52-0bcff974bf56 in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee unbound from our chassis#033[00m
Feb 28 05:39:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.794 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 386ad93c-8128-414d-bc97-7c3f009f2aee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:39:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.795 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9d98c5-d7e1-40bd-87f8-b202ea9b68fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.796 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee namespace which is not needed anymore#033[00m
Feb 28 05:39:58 np0005634017 kernel: tap2fcd845f-86 (unregistering): left promiscuous mode
Feb 28 05:39:58 np0005634017 NetworkManager[49805]: <info>  [1772275198.8033] device (tap2fcd845f-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:58Z|01468|binding|INFO|Releasing lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 from this chassis (sb_readonly=0)
Feb 28 05:39:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:58Z|01469|binding|INFO|Setting lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 down in Southbound
Feb 28 05:39:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:39:58Z|01470|binding|INFO|Removing iface tap2fcd845f-86 ovn-installed in OVS
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:58 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.829 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], port_security=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb3:ee08/64 2001:db8::f816:3eff:feb3:ee08/64', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2fcd845f-8646-4afc-923a-1b7570fbdc36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:58 np0005634017 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000089.scope: Deactivated successfully.
Feb 28 05:39:58 np0005634017 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000089.scope: Consumed 15.126s CPU time.
Feb 28 05:39:58 np0005634017 systemd-machined[209480]: Machine qemu-170-instance-00000089 terminated.
Feb 28 05:39:58 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : haproxy version is 2.8.14-c23fe91
Feb 28 05:39:58 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : path to executable is /usr/sbin/haproxy
Feb 28 05:39:58 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [WARNING]  (366223) : Exiting Master process...
Feb 28 05:39:58 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [WARNING]  (366223) : Exiting Master process...
Feb 28 05:39:58 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [ALERT]    (366223) : Current worker (366225) exited with code 143 (Terminated)
Feb 28 05:39:58 np0005634017 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [WARNING]  (366223) : All workers exited. Exiting... (0)
Feb 28 05:39:58 np0005634017 systemd[1]: libpod-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d.scope: Deactivated successfully.
Feb 28 05:39:58 np0005634017 NetworkManager[49805]: <info>  [1772275198.9434] manager: (tap2fcd845f-86): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Feb 28 05:39:58 np0005634017 podman[368621]: 2026-02-28 10:39:58.949601915 +0000 UTC m=+0.050023967 container died 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:39:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.982 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-deleted-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.983 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-unplugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.984 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] No waiting events found dispatching network-vif-unplugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 WARNING nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received unexpected event network-vif-unplugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] No waiting events found dispatching network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 WARNING nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received unexpected event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.989 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-deleted-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.995 243456 INFO nova.virt.libvirt.driver [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance destroyed successfully.#033[00m
Feb 28 05:39:58 np0005634017 nova_compute[243452]: 2026-02-28 10:39:58.996 243456 DEBUG nova.objects.instance [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:39:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d-userdata-shm.mount: Deactivated successfully.
Feb 28 05:39:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-353ad3c61fc41f7332854dd7161b413a55568a38d52cee91efbf7a09e713c9f0-merged.mount: Deactivated successfully.
Feb 28 05:39:59 np0005634017 podman[368621]: 2026-02-28 10:39:59.018958419 +0000 UTC m=+0.119380451 container cleanup 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.021 243456 DEBUG nova.virt.libvirt.vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:53Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.022 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.022 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.023 243456 DEBUG os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.027 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bc174e9-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:59 np0005634017 systemd[1]: libpod-conmon-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d.scope: Deactivated successfully.
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.036 243456 INFO os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e')#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.037 243456 DEBUG nova.virt.libvirt.vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:53Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.038 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.038 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.038 243456 DEBUG os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.039 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fcd845f-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.040 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.042 243456 INFO os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86')#033[00m
Feb 28 05:39:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Feb 28 05:39:59 np0005634017 podman[368671]: 2026-02-28 10:39:59.086127081 +0000 UTC m=+0.048820093 container remove 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.093 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5da477d-9a15-46ac-b72b-15de52169454]: (4, ('Sat Feb 28 10:39:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee (19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d)\n19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d\nSat Feb 28 10:39:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee (19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d)\n19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff21df3-3f75-4f38-9a47-5f3362396704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.096 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 kernel: tap386ad93c-80: left promiscuous mode
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.109 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7cb165-4ec0-4222-8160-5488df5bb13c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.126 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e4f5df-d53c-4665-a019-8b8cc7aaf9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.127 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0dea760-a8e9-4878-8765-2f7dbb73e879]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:39:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070273112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.144 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cac4dedd-331b-466c-a3f8-455e61a39985]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663862, 'reachable_time': 25554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368705, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 systemd[1]: run-netns-ovnmeta\x2d386ad93c\x2d8128\x2d414d\x2dbc97\x2d7c3f009f2aee.mount: Deactivated successfully.
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.149 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.149 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[92c07af0-f69d-4f9f-894c-2481ec714d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.150 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2fcd845f-8646-4afc-923a-1b7570fbdc36 in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.152 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 685f3a92-853c-417a-a00b-ba5c70b02f2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.153 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9ca700-398d-42bb-b6fc-d4ad3d335324]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.154 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d namespace which is not needed anymore#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.174 243456 DEBUG oslo_concurrency.processutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.182 243456 DEBUG nova.compute.provider_tree [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:39:59 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : haproxy version is 2.8.14-c23fe91
Feb 28 05:39:59 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : path to executable is /usr/sbin/haproxy
Feb 28 05:39:59 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [WARNING]  (366373) : Exiting Master process...
Feb 28 05:39:59 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [ALERT]    (366373) : Current worker (366376) exited with code 143 (Terminated)
Feb 28 05:39:59 np0005634017 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [WARNING]  (366373) : All workers exited. Exiting... (0)
Feb 28 05:39:59 np0005634017 systemd[1]: libpod-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003.scope: Deactivated successfully.
Feb 28 05:39:59 np0005634017 podman[368727]: 2026-02-28 10:39:59.295753777 +0000 UTC m=+0.051629893 container died 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 05:39:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003-userdata-shm.mount: Deactivated successfully.
Feb 28 05:39:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-eca0bd47b535c15e2228c3436fbfee03368e983c319a518449256dd29eafc04f-merged.mount: Deactivated successfully.
Feb 28 05:39:59 np0005634017 podman[368727]: 2026-02-28 10:39:59.333791024 +0000 UTC m=+0.089667140 container cleanup 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:39:59 np0005634017 systemd[1]: libpod-conmon-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003.scope: Deactivated successfully.
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.348 243456 INFO nova.virt.libvirt.driver [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deleting instance files /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e_del#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.349 243456 INFO nova.virt.libvirt.driver [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deletion of /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e_del complete#033[00m
Feb 28 05:39:59 np0005634017 podman[368758]: 2026-02-28 10:39:59.395649096 +0000 UTC m=+0.040613682 container remove 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.399 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5534c7-4a81-4e91-95cf-ec6c210573fc]: (4, ('Sat Feb 28 10:39:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d (7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003)\n7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003\nSat Feb 28 10:39:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d (7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003)\n7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.401 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f581f36f-f06e-4dbd-8ae5-860a5507b90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.402 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 kernel: tap685f3a92-80: left promiscuous mode
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.410 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e14b21d-582d-4bbd-89c3-7b5bc5c3a55d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08aa3a8b-f3dc-4325-81bd-c1260f8ae36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.423 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb398a7-5145-4452-a011-73914c640000]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.436 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a30459f-7a0a-4d45-bc6a-dc1d640b3e89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663956, 'reachable_time': 44608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368773, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.438 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:39:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.438 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a65761f2-833d-48ff-b68e-99ada1d41e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG nova.compute.manager [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG nova.compute.manager [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG oslo_concurrency.lockutils [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG oslo_concurrency.lockutils [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.474 243456 DEBUG nova.network.neutron [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.495 243456 DEBUG nova.scheduler.client.report [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.526 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.533 243456 INFO nova.compute.manager [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.534 243456 DEBUG oslo.service.loopingcall [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.534 243456 DEBUG nova.compute.manager [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.534 243456 DEBUG nova.network.neutron [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.568 243456 INFO nova.scheduler.client.report [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.633 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:39:59 np0005634017 nova_compute[243452]: 2026-02-28 10:39:59.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:00 np0005634017 systemd[1]: run-netns-ovnmeta\x2d685f3a92\x2d853c\x2d417a\x2da00b\x2dba5c70b02f2d.mount: Deactivated successfully.
Feb 28 05:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:40:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:40:00 np0005634017 nova_compute[243452]: 2026-02-28 10:40:00.808 243456 DEBUG nova.network.neutron [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated VIF entry in instance network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:40:00 np0005634017 nova_compute[243452]: 2026-02-28 10:40:00.809 243456 DEBUG nova.network.neutron [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:00 np0005634017 nova_compute[243452]: 2026-02-28 10:40:00.830 243456 DEBUG oslo_concurrency.lockutils [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:40:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 293 KiB/s rd, 1.5 MiB/s wr, 127 op/s
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.081 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-unplugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.082 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.082 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.083 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.083 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-unplugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.083 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-unplugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.085 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.085 243456 WARNING nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.085 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-deleted-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.086 243456 INFO nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Neutron deleted interface 2fcd845f-8646-4afc-923a-1b7570fbdc36; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.086 243456 DEBUG nova.network.neutron [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.111 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Detach interface failed, port_id=2fcd845f-8646-4afc-923a-1b7570fbdc36, reason: Instance f6deb920-f186-43f5-9ea0-642f4a6e830e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.157 243456 DEBUG nova.network.neutron [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.172 243456 INFO nova.compute.manager [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 1.64 seconds to deallocate network for instance.#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.220 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.221 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.267 243456 DEBUG oslo_concurrency.processutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-unplugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-unplugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 WARNING nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-unplugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.574 243456 WARNING nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:40:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987712077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.853 243456 DEBUG oslo_concurrency.processutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.860 243456 DEBUG nova.compute.provider_tree [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.875 243456 DEBUG nova.scheduler.client.report [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.894 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:01 np0005634017 nova_compute[243452]: 2026-02-28 10:40:01.928 243456 INFO nova.scheduler.client.report [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance f6deb920-f186-43f5-9ea0-642f4a6e830e#033[00m
Feb 28 05:40:02 np0005634017 nova_compute[243452]: 2026-02-28 10:40:02.018 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.203009678 +0000 UTC m=+0.063191000 container create fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:40:02 np0005634017 systemd[1]: Started libpod-conmon-fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98.scope.
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.177198187 +0000 UTC m=+0.037379579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:40:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.296507615 +0000 UTC m=+0.156688937 container init fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.30444371 +0000 UTC m=+0.164625002 container start fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.309332429 +0000 UTC m=+0.169513721 container attach fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 05:40:02 np0005634017 dreamy_colden[368955]: 167 167
Feb 28 05:40:02 np0005634017 systemd[1]: libpod-fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98.scope: Deactivated successfully.
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.314096243 +0000 UTC m=+0.174277545 container died fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:40:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1276add8fcafdffaa64631a2efa3a41707c252dd2da49707ed6364ddaef0d7a7-merged.mount: Deactivated successfully.
Feb 28 05:40:02 np0005634017 podman[368938]: 2026-02-28 10:40:02.358691856 +0000 UTC m=+0.218873188 container remove fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:40:02 np0005634017 systemd[1]: libpod-conmon-fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98.scope: Deactivated successfully.
Feb 28 05:40:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 05:40:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:40:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:40:02 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:40:02 np0005634017 podman[368978]: 2026-02-28 10:40:02.485425295 +0000 UTC m=+0.038978735 container create fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:40:02 np0005634017 systemd[1]: Started libpod-conmon-fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1.scope.
Feb 28 05:40:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:02 np0005634017 podman[368978]: 2026-02-28 10:40:02.469550595 +0000 UTC m=+0.023104045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:40:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:02 np0005634017 podman[368978]: 2026-02-28 10:40:02.592041484 +0000 UTC m=+0.145595014 container init fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:40:02 np0005634017 podman[368978]: 2026-02-28 10:40:02.599964188 +0000 UTC m=+0.153517628 container start fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:40:02 np0005634017 podman[368978]: 2026-02-28 10:40:02.604317091 +0000 UTC m=+0.157870531 container attach fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:40:03 np0005634017 happy_khayyam[368994]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:40:03 np0005634017 happy_khayyam[368994]: --> All data devices are unavailable
Feb 28 05:40:03 np0005634017 systemd[1]: libpod-fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1.scope: Deactivated successfully.
Feb 28 05:40:03 np0005634017 podman[368978]: 2026-02-28 10:40:03.040136622 +0000 UTC m=+0.593690052 container died fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:40:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 72 KiB/s wr, 91 op/s
Feb 28 05:40:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117-merged.mount: Deactivated successfully.
Feb 28 05:40:03 np0005634017 podman[368978]: 2026-02-28 10:40:03.156981331 +0000 UTC m=+0.710534761 container remove fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:40:03 np0005634017 nova_compute[243452]: 2026-02-28 10:40:03.154 243456 DEBUG nova.compute.manager [req-a2b7722a-143e-4e7a-954a-b4884d2a9304 req-916c271b-eda5-4e70-bb48-9326ea7f0aee 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-deleted-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:03 np0005634017 systemd[1]: libpod-conmon-fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1.scope: Deactivated successfully.
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.589290632 +0000 UTC m=+0.043057520 container create 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:40:03 np0005634017 systemd[1]: Started libpod-conmon-9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a.scope.
Feb 28 05:40:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.660783716 +0000 UTC m=+0.114550574 container init 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.57122208 +0000 UTC m=+0.024988938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.670254245 +0000 UTC m=+0.124021093 container start 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.673713163 +0000 UTC m=+0.127480001 container attach 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:40:03 np0005634017 pedantic_galois[369106]: 167 167
Feb 28 05:40:03 np0005634017 systemd[1]: libpod-9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a.scope: Deactivated successfully.
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.675882404 +0000 UTC m=+0.129649252 container died 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:40:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-26292ed989e625c93f0e7aa5d8b937c11db22b4d063a8b78edb54e8ea35ba78b-merged.mount: Deactivated successfully.
Feb 28 05:40:03 np0005634017 podman[369090]: 2026-02-28 10:40:03.710554876 +0000 UTC m=+0.164321714 container remove 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:40:03 np0005634017 systemd[1]: libpod-conmon-9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a.scope: Deactivated successfully.
Feb 28 05:40:03 np0005634017 podman[369131]: 2026-02-28 10:40:03.861891761 +0000 UTC m=+0.043397190 container create 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:40:03 np0005634017 systemd[1]: Started libpod-conmon-36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df.scope.
Feb 28 05:40:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:03 np0005634017 podman[369131]: 2026-02-28 10:40:03.922622231 +0000 UTC m=+0.104127670 container init 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:40:03 np0005634017 podman[369131]: 2026-02-28 10:40:03.927307513 +0000 UTC m=+0.108812902 container start 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:40:03 np0005634017 podman[369131]: 2026-02-28 10:40:03.930453142 +0000 UTC m=+0.111958561 container attach 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:40:03 np0005634017 podman[369131]: 2026-02-28 10:40:03.842709418 +0000 UTC m=+0.024214807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:40:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:04 np0005634017 nova_compute[243452]: 2026-02-28 10:40:04.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:04 np0005634017 gracious_buck[369147]: {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:    "0": [
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:        {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "devices": [
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "/dev/loop3"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            ],
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_name": "ceph_lv0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_size": "21470642176",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "name": "ceph_lv0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "tags": {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cluster_name": "ceph",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.crush_device_class": "",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.encrypted": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.objectstore": "bluestore",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osd_id": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.type": "block",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.vdo": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.with_tpm": "0"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            },
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "type": "block",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "vg_name": "ceph_vg0"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:        }
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:    ],
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:    "1": [
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:        {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "devices": [
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "/dev/loop4"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            ],
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_name": "ceph_lv1",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_size": "21470642176",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "name": "ceph_lv1",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "tags": {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cluster_name": "ceph",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.crush_device_class": "",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.encrypted": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.objectstore": "bluestore",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osd_id": "1",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.type": "block",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.vdo": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.with_tpm": "0"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            },
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "type": "block",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "vg_name": "ceph_vg1"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:        }
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:    ],
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:    "2": [
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:        {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "devices": [
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "/dev/loop5"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            ],
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_name": "ceph_lv2",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_size": "21470642176",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "name": "ceph_lv2",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "tags": {
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.cluster_name": "ceph",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.crush_device_class": "",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.encrypted": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.objectstore": "bluestore",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osd_id": "2",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.type": "block",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.vdo": "0",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:                "ceph.with_tpm": "0"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            },
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "type": "block",
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:            "vg_name": "ceph_vg2"
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:        }
Feb 28 05:40:04 np0005634017 gracious_buck[369147]:    ]
Feb 28 05:40:04 np0005634017 gracious_buck[369147]: }
Feb 28 05:40:04 np0005634017 systemd[1]: libpod-36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df.scope: Deactivated successfully.
Feb 28 05:40:04 np0005634017 podman[369131]: 2026-02-28 10:40:04.19451475 +0000 UTC m=+0.376020159 container died 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:40:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a-merged.mount: Deactivated successfully.
Feb 28 05:40:04 np0005634017 podman[369131]: 2026-02-28 10:40:04.252998996 +0000 UTC m=+0.434504425 container remove 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:40:04 np0005634017 systemd[1]: libpod-conmon-36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df.scope: Deactivated successfully.
Feb 28 05:40:04 np0005634017 nova_compute[243452]: 2026-02-28 10:40:04.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.780512583 +0000 UTC m=+0.057388906 container create accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:40:04 np0005634017 systemd[1]: Started libpod-conmon-accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477.scope.
Feb 28 05:40:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.759721834 +0000 UTC m=+0.036598127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.856440323 +0000 UTC m=+0.133316636 container init accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.866149608 +0000 UTC m=+0.143025891 container start accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.87012147 +0000 UTC m=+0.146997853 container attach accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:40:04 np0005634017 xenodochial_chaplygin[369248]: 167 167
Feb 28 05:40:04 np0005634017 systemd[1]: libpod-accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477.scope: Deactivated successfully.
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.872910529 +0000 UTC m=+0.149786852 container died accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:40:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b75f804841c73e142a4a23f3425e931fdc1d9e738785159166ddeb316c53b245-merged.mount: Deactivated successfully.
Feb 28 05:40:04 np0005634017 podman[369232]: 2026-02-28 10:40:04.911342377 +0000 UTC m=+0.188218690 container remove accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:40:04 np0005634017 systemd[1]: libpod-conmon-accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477.scope: Deactivated successfully.
Feb 28 05:40:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 31 KiB/s wr, 87 op/s
Feb 28 05:40:05 np0005634017 podman[369272]: 2026-02-28 10:40:05.121294552 +0000 UTC m=+0.066012971 container create cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:40:05 np0005634017 systemd[1]: Started libpod-conmon-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope.
Feb 28 05:40:05 np0005634017 podman[369272]: 2026-02-28 10:40:05.094369359 +0000 UTC m=+0.039087858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:40:05 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:05 np0005634017 podman[369272]: 2026-02-28 10:40:05.237902963 +0000 UTC m=+0.182621482 container init cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:40:05 np0005634017 podman[369272]: 2026-02-28 10:40:05.245705984 +0000 UTC m=+0.190424433 container start cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:40:05 np0005634017 podman[369272]: 2026-02-28 10:40:05.253612178 +0000 UTC m=+0.198330637 container attach cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:40:05 np0005634017 nova_compute[243452]: 2026-02-28 10:40:05.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:05 np0005634017 nova_compute[243452]: 2026-02-28 10:40:05.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:05 np0005634017 lvm[369366]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:40:05 np0005634017 lvm[369368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:40:05 np0005634017 lvm[369368]: VG ceph_vg1 finished
Feb 28 05:40:05 np0005634017 lvm[369366]: VG ceph_vg0 finished
Feb 28 05:40:05 np0005634017 lvm[369370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:40:05 np0005634017 lvm[369370]: VG ceph_vg2 finished
Feb 28 05:40:06 np0005634017 great_feynman[369288]: {}
Feb 28 05:40:06 np0005634017 systemd[1]: libpod-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope: Deactivated successfully.
Feb 28 05:40:06 np0005634017 podman[369272]: 2026-02-28 10:40:06.03380093 +0000 UTC m=+0.978519339 container died cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:40:06 np0005634017 systemd[1]: libpod-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope: Consumed 1.219s CPU time.
Feb 28 05:40:06 np0005634017 systemd[1]: var-lib-containers-storage-overlay-073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f-merged.mount: Deactivated successfully.
Feb 28 05:40:06 np0005634017 podman[369272]: 2026-02-28 10:40:06.072005382 +0000 UTC m=+1.016723781 container remove cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:40:06 np0005634017 systemd[1]: libpod-conmon-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope: Deactivated successfully.
Feb 28 05:40:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:40:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:40:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:40:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:40:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:40:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:40:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 25 KiB/s wr, 85 op/s
Feb 28 05:40:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:09 np0005634017 nova_compute[243452]: 2026-02-28 10:40:09.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 10 KiB/s wr, 83 op/s
Feb 28 05:40:09 np0005634017 nova_compute[243452]: 2026-02-28 10:40:09.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:09 np0005634017 nova_compute[243452]: 2026-02-28 10:40:09.930 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275194.9295805, 69899d22-e5ee-410a-8280-57cc79ffa188 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:09 np0005634017 nova_compute[243452]: 2026-02-28 10:40:09.931 243456 INFO nova.compute.manager [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:40:09 np0005634017 nova_compute[243452]: 2026-02-28 10:40:09.970 243456 DEBUG nova.compute.manager [None req-056af5f4-c19e-4fc7-bb36-936df1a5ae18 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 KiB/s wr, 55 op/s
Feb 28 05:40:12 np0005634017 nova_compute[243452]: 2026-02-28 10:40:12.193 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275197.1926315, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:12 np0005634017 nova_compute[243452]: 2026-02-28 10:40:12.194 243456 INFO nova.compute.manager [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:40:12 np0005634017 nova_compute[243452]: 2026-02-28 10:40:12.213 243456 DEBUG nova.compute.manager [None req-3465c9e4-376c-4f56-ba90-5aa71a01ae72 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 9 op/s
Feb 28 05:40:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:13 np0005634017 nova_compute[243452]: 2026-02-28 10:40:13.990 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275198.9838357, f6deb920-f186-43f5-9ea0-642f4a6e830e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:13 np0005634017 nova_compute[243452]: 2026-02-28 10:40:13.991 243456 INFO nova.compute.manager [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:40:14 np0005634017 nova_compute[243452]: 2026-02-28 10:40:14.027 243456 DEBUG nova.compute.manager [None req-e893f997-e533-4ec5-b280-c26b22d4d9da - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:14 np0005634017 nova_compute[243452]: 2026-02-28 10:40:14.050 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:14 np0005634017 nova_compute[243452]: 2026-02-28 10:40:14.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:40:16 np0005634017 podman[369411]: 2026-02-28 10:40:16.157467859 +0000 UTC m=+0.085091660 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:40:16 np0005634017 podman[369410]: 2026-02-28 10:40:16.213034332 +0000 UTC m=+0.144768030 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 28 05:40:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:40:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.477 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.478 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.500 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.594 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.595 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.607 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.607 243456 INFO nova.compute.claims [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:19 np0005634017 nova_compute[243452]: 2026-02-28 10:40:19.722 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:40:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453011184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.315 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.325 243456 DEBUG nova.compute.provider_tree [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.346 243456 DEBUG nova.scheduler.client.report [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.379 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.381 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.438 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.439 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.468 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.492 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.579 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.581 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.582 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Creating image(s)
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.621 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.655 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.689 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.695 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.728 243456 DEBUG nova.policy [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.780 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.781 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.782 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.782 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.812 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:20 np0005634017 nova_compute[243452]: 2026-02-28 10:40:20.816 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.073 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.160 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.253 243456 DEBUG nova.objects.instance [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e1a8d62-9ac1-417d-8194-58901bb4018e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.269 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.270 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Ensure instance console log exists: /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.270 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.271 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:21 np0005634017 nova_compute[243452]: 2026-02-28 10:40:21.271 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:22 np0005634017 nova_compute[243452]: 2026-02-28 10:40:22.085 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Successfully created port: d3feb971-63a7-4d54-8310-9c6d40c29637 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:40:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 717 KiB/s wr, 1 op/s
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.135 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Successfully updated port: d3feb971-63a7-4d54-8310-9c6d40c29637 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.157 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.157 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.157 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.273 243456 DEBUG nova.compute.manager [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.273 243456 DEBUG nova.compute.manager [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.274 243456 DEBUG oslo_concurrency.lockutils [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:40:23 np0005634017 nova_compute[243452]: 2026-02-28 10:40:23.350 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:40:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.025 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.048 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.049 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance network_info: |[{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.049 243456 DEBUG oslo_concurrency.lockutils [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.050 243456 DEBUG nova.network.neutron [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.059 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start _get_guest_xml network_info=[{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.077 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.089 243456 WARNING nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.096 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.097 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.101 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.102 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.103 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.103 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.104 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.105 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.105 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.106 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.106 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.107 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.107 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.108 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.108 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.109 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.114 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:40:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363183282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.716 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.742 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:24 np0005634017 nova_compute[243452]: 2026-02-28 10:40:24.747 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 190 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Feb 28 05:40:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:40:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618747359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.306 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.309 243456 DEBUG nova.virt.libvirt.vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-806666479',display_name='tempest-TestNetworkBasicOps-server-806666479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-806666479',id=141,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEvYU5ohMTONYZF98enrXff7H/UA6S+83Ft8Ojoxq+P+keZUL46io/3fcohxtuAI5aeVtpG6o1nJ2kDJwbKtvHAweQjJLzp2omWlOmQ8VQJrOL3ujh53baZZlNUH6C9OQ==',key_name='tempest-TestNetworkBasicOps-2075562092',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-12rzj72r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:20Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=5e1a8d62-9ac1-417d-8194-58901bb4018e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.310 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.311 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.313 243456 DEBUG nova.objects.instance [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e1a8d62-9ac1-417d-8194-58901bb4018e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.332 243456 DEBUG nova.network.neutron [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.333 243456 DEBUG nova.network.neutron [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.341 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <uuid>5e1a8d62-9ac1-417d-8194-58901bb4018e</uuid>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <name>instance-0000008d</name>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-806666479</nova:name>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:40:24</nova:creationTime>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <nova:port uuid="d3feb971-63a7-4d54-8310-9c6d40c29637">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <entry name="serial">5e1a8d62-9ac1-417d-8194-58901bb4018e</entry>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <entry name="uuid">5e1a8d62-9ac1-417d-8194-58901bb4018e</entry>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/5e1a8d62-9ac1-417d-8194-58901bb4018e_disk">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7a:70:52"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <target dev="tapd3feb971-63"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/console.log" append="off"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:40:25 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:40:25 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:40:25 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:40:25 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.344 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Preparing to wait for external event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.344 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.344 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.345 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.345 243456 DEBUG nova.virt.libvirt.vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-806666479',display_name='tempest-TestNetworkBasicOps-server-806666479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-806666479',id=141,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEvYU5ohMTONYZF98enrXff7H/UA6S+83Ft8Ojoxq+P+keZUL46io/3fcohxtuAI5aeVtpG6o1nJ2kDJwbKtvHAweQjJLzp2omWlOmQ8VQJrOL3ujh53baZZlNUH6C9OQ==',key_name='tempest-TestNetworkBasicOps-2075562092',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-12rzj72r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:20Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=5e1a8d62-9ac1-417d-8194-58901bb4018e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.346 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.346 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.347 243456 DEBUG os_vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.348 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.348 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.349 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.352 243456 DEBUG oslo_concurrency.lockutils [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.355 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3feb971-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.356 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3feb971-63, col_values=(('external_ids', {'iface-id': 'd3feb971-63a7-4d54-8310-9c6d40c29637', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:70:52', 'vm-uuid': '5e1a8d62-9ac1-417d-8194-58901bb4018e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:25 np0005634017 NetworkManager[49805]: <info>  [1772275225.3599] manager: (tapd3feb971-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.363 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.369 243456 INFO os_vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63')#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.434 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.435 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.435 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:7a:70:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.437 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Using config drive#033[00m
Feb 28 05:40:25 np0005634017 nova_compute[243452]: 2026-02-28 10:40:25.477 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.063 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Creating config drive at /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.070 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_uic1oqr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.217 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_uic1oqr" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.258 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.265 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.424 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.426 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deleting local config drive /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config because it was imported into RBD.#033[00m
Feb 28 05:40:26 np0005634017 kernel: tapd3feb971-63: entered promiscuous mode
Feb 28 05:40:26 np0005634017 NetworkManager[49805]: <info>  [1772275226.4923] manager: (tapd3feb971-63): new Tun device (/org/freedesktop/NetworkManager/Devices/610)
Feb 28 05:40:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:26Z|01471|binding|INFO|Claiming lport d3feb971-63a7-4d54-8310-9c6d40c29637 for this chassis.
Feb 28 05:40:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:26Z|01472|binding|INFO|d3feb971-63a7-4d54-8310-9c6d40c29637: Claiming fa:16:3e:7a:70:52 10.100.0.7
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.518 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:70:52 10.100.0.7'], port_security=['fa:16:3e:7a:70:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5e1a8d62-9ac1-417d-8194-58901bb4018e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44c3724e-fd4e-435a-91b1-2ee7cbaa561d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d3feb971-63a7-4d54-8310-9c6d40c29637) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.520 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d3feb971-63a7-4d54-8310-9c6d40c29637 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 bound to our chassis#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.522 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569#033[00m
Feb 28 05:40:26 np0005634017 systemd-udevd[369779]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:40:26 np0005634017 systemd-machined[209480]: New machine qemu-174-instance-0000008d.
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d06516be-ae27-4079-988e-a19bc25b62a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.536 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap11e06da5-b1 in ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:40:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:26Z|01473|binding|INFO|Setting lport d3feb971-63a7-4d54-8310-9c6d40c29637 ovn-installed in OVS
Feb 28 05:40:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:26Z|01474|binding|INFO|Setting lport d3feb971-63a7-4d54-8310-9c6d40c29637 up in Southbound
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.538 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap11e06da5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.538 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6dff3e6c-1abf-46cb-bf5d-98963833ef01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.540 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe245ca-c146-461d-b877-878746647f0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 systemd[1]: Started Virtual Machine qemu-174-instance-0000008d.
Feb 28 05:40:26 np0005634017 NetworkManager[49805]: <info>  [1772275226.5515] device (tapd3feb971-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:40:26 np0005634017 NetworkManager[49805]: <info>  [1772275226.5521] device (tapd3feb971-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.560 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c74c47f2-76d9-4dd6-9005-0658b62f0711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[501bf585-f5fc-4bd9-be68-c3c8d2c0ea3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.624 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[62737ff3-3581-4be4-8943-89bac49ca8d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.631 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e98e91ac-733f-4f7b-8b51-462693f2cb25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 NetworkManager[49805]: <info>  [1772275226.6328] manager: (tap11e06da5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/611)
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[05d810b1-1e46-44f8-a2ff-239b4c08bfd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.672 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[945386ba-7224-4c85-9e2a-54d4fb604654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 NetworkManager[49805]: <info>  [1772275226.7073] device (tap11e06da5-b0): carrier: link connected
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.713 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[afa2cbfd-618b-4a94-aa3d-d9fd3417a5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.736 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41e90ce2-9284-4858-890d-4e8d8618f389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369812, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0327682f-71cc-4516-9f0e-764a3ccbcec6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:1773'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673399, 'tstamp': 673399}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369813, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.770 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3410bb-2542-4dea-9471-1fa64673026a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369814, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f394638b-45da-4b4b-b08c-6c795ec782e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.876 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2677d4-1829-4703-9c97-b41abc21bd9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.879 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.879 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.879 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11e06da5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 NetworkManager[49805]: <info>  [1772275226.8825] manager: (tap11e06da5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Feb 28 05:40:26 np0005634017 kernel: tap11e06da5-b0: entered promiscuous mode
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.884 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11e06da5-b0, col_values=(('external_ids', {'iface-id': 'ea6114a2-28c6-4510-bbd6-16e4d9cb4f71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:26Z|01475|binding|INFO|Releasing lport ea6114a2-28c6-4510-bbd6-16e4d9cb4f71 from this chassis (sb_readonly=0)
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.886 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/11e06da5-bfc5-4a1a-9148-ff3afccf9569.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/11e06da5-bfc5-4a1a-9148-ff3afccf9569.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.888 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b32851-e9ea-4422-be81-921ea8d8b94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.888 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/11e06da5-bfc5-4a1a-9148-ff3afccf9569.pid.haproxy
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:40:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.889 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'env', 'PROCESS_TAG=haproxy-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/11e06da5-bfc5-4a1a-9148-ff3afccf9569.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.977 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275226.9766095, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.977 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.995 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.997 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275226.9776947, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:26 np0005634017 nova_compute[243452]: 2026-02-28 10:40:26.998 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:40:27 np0005634017 nova_compute[243452]: 2026-02-28 10:40:27.015 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:27 np0005634017 nova_compute[243452]: 2026-02-28 10:40:27.018 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:40:27 np0005634017 nova_compute[243452]: 2026-02-28 10:40:27.033 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:40:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:40:27 np0005634017 podman[369888]: 2026-02-28 10:40:27.223110844 +0000 UTC m=+0.058061495 container create 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:40:27 np0005634017 systemd[1]: Started libpod-conmon-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope.
Feb 28 05:40:27 np0005634017 podman[369888]: 2026-02-28 10:40:27.188134324 +0000 UTC m=+0.023084835 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:40:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ac42405f247e62e492745590376a4dfe553da32d8a80cb4173cd7e7d5ce14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:27 np0005634017 podman[369888]: 2026-02-28 10:40:27.31584062 +0000 UTC m=+0.150791051 container init 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:40:27 np0005634017 podman[369888]: 2026-02-28 10:40:27.32008561 +0000 UTC m=+0.155036051 container start 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 05:40:27 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : New worker (369909) forked
Feb 28 05:40:27 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : Loading success.
Feb 28 05:40:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:40:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:40:29
Feb 28 05:40:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:40:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:40:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Feb 28 05:40:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:40:29 np0005634017 nova_compute[243452]: 2026-02-28 10:40:29.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:30 np0005634017 nova_compute[243452]: 2026-02-28 10:40:30.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:40:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:40:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.855 243456 DEBUG nova.compute.manager [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.856 243456 DEBUG oslo_concurrency.lockutils [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.856 243456 DEBUG oslo_concurrency.lockutils [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.857 243456 DEBUG oslo_concurrency.lockutils [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.857 243456 DEBUG nova.compute.manager [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Processing event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.859 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.864 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275231.8640964, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.865 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.869 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.876 243456 INFO nova.virt.libvirt.driver [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance spawned successfully.#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.877 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.897 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.908 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.913 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.913 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.914 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.914 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.915 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.915 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.941 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.976 243456 INFO nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 11.40 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:40:31 np0005634017 nova_compute[243452]: 2026-02-28 10:40:31.977 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:32 np0005634017 nova_compute[243452]: 2026-02-28 10:40:32.067 243456 INFO nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 12.51 seconds to build instance.#033[00m
Feb 28 05:40:32 np0005634017 nova_compute[243452]: 2026-02-28 10:40:32.094 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Feb 28 05:40:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.177 243456 DEBUG nova.compute.manager [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG oslo_concurrency.lockutils [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG oslo_concurrency.lockutils [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG oslo_concurrency.lockutils [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG nova.compute.manager [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.179 243456 WARNING nova.compute.manager [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:40:34 np0005634017 nova_compute[243452]: 2026-02-28 10:40:34.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 967 KiB/s rd, 1.1 MiB/s wr, 66 op/s
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.499 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.499 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.517 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.662 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.663 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.674 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.674 243456 INFO nova.compute.claims [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:40:35 np0005634017 nova_compute[243452]: 2026-02-28 10:40:35.825 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:40:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922049186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.388 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:36 np0005634017 NetworkManager[49805]: <info>  [1772275236.4083] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Feb 28 05:40:36 np0005634017 NetworkManager[49805]: <info>  [1772275236.4089] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.413 243456 DEBUG nova.compute.provider_tree [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.432 243456 DEBUG nova.scheduler.client.report [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.461 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.463 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:36Z|01476|binding|INFO|Releasing lport ea6114a2-28c6-4510-bbd6-16e4d9cb4f71 from this chassis (sb_readonly=0)
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.522 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.523 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.546 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.572 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.688 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.690 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.691 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Creating image(s)
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.719 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.752 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.785 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.790 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.829 243456 DEBUG nova.policy [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.836 243456 DEBUG nova.compute.manager [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.836 243456 DEBUG nova.compute.manager [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.836 243456 DEBUG oslo_concurrency.lockutils [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.837 243456 DEBUG oslo_concurrency.lockutils [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:40:36 np0005634017 nova_compute[243452]: 2026-02-28 10:40:36.837 243456 DEBUG nova.network.neutron [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.050 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.051 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.051 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.052 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 434 KiB/s wr, 66 op/s
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.075 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.080 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b081668-1653-448a-957e-da1ead7ecd21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.351 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b081668-1653-448a-957e-da1ead7ecd21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.439 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.538 243456 DEBUG nova.objects.instance [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.554 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.554 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Ensure instance console log exists: /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.555 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.555 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:37 np0005634017 nova_compute[243452]: 2026-02-28 10:40:37.556 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:38 np0005634017 nova_compute[243452]: 2026-02-28 10:40:38.249 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully created port: 5952cc57-b25c-40a2-b208-47e2104b88ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:40:38 np0005634017 nova_compute[243452]: 2026-02-28 10:40:38.726 243456 DEBUG nova.network.neutron [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 05:40:38 np0005634017 nova_compute[243452]: 2026-02-28 10:40:38.726 243456 DEBUG nova.network.neutron [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:40:38 np0005634017 nova_compute[243452]: 2026-02-28 10:40:38.746 243456 DEBUG oslo_concurrency.lockutils [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:40:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 218 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Feb 28 05:40:39 np0005634017 nova_compute[243452]: 2026-02-28 10:40:39.104 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully created port: 6914940c-920a-4dc2-982a-9ae63584aee2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 05:40:39 np0005634017 nova_compute[243452]: 2026-02-28 10:40:39.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.340 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully updated port: 5952cc57-b25c-40a2-b208-47e2104b88ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.363 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG nova.compute.manager [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG nova.compute.manager [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG oslo_concurrency.lockutils [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG oslo_concurrency.lockutils [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.595 243456 DEBUG nova.network.neutron [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:40:40 np0005634017 nova_compute[243452]: 2026-02-28 10:40:40.797 243456 DEBUG nova.network.neutron [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.130 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully updated port: 6914940c-920a-4dc2-982a-9ae63584aee2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.138 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.145 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.174 243456 DEBUG nova.network.neutron [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.190 243456 DEBUG oslo_concurrency.lockutils [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.191 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.191 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007084495598833457 of space, bias 1.0, pg target 0.21253486796500373 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939279828894733 of space, bias 1.0, pg target 0.748178394866842 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.298182577967931e-07 of space, bias 4.0, pg target 0.0008757819093561517 quantized to 16 (current 16)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:40:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.377 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.954 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.955 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:41 np0005634017 nova_compute[243452]: 2026-02-28 10:40:41.981 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.060 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.060 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.069 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.070 243456 INFO nova.compute.claims [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Claim successful on node compute-0.ctlplane.example.com
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.240 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.668 243456 DEBUG nova.compute.manager [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.669 243456 DEBUG nova.compute.manager [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-6914940c-920a-4dc2-982a-9ae63584aee2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.669 243456 DEBUG oslo_concurrency.lockutils [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:40:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:40:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/269186968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.866 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.871 243456 DEBUG nova.compute.provider_tree [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.886 243456 DEBUG nova.scheduler.client.report [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.909 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.910 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.959 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.960 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:40:42 np0005634017 nova_compute[243452]: 2026-02-28 10:40:42.985 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.003 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:40:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 256 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 99 op/s
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.100 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.101 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.102 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Creating image(s)
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.123 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.147 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.182 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.189 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.227 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.231 243456 DEBUG nova.policy [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.260 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.261 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.262 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.263 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.292 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.296 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.325 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.326 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.327 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance network_info: |[{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.328 243456 DEBUG oslo_concurrency.lockutils [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.329 243456 DEBUG nova.network.neutron [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 6914940c-920a-4dc2-982a-9ae63584aee2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.336 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start _get_guest_xml network_info=[{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.343 243456 WARNING nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.351 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.352 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.366 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.367 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.368 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.368 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.369 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.369 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.370 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.370 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.370 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.371 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.371 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.372 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.372 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.373 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.377 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.513 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.610 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.739 243456 DEBUG nova.objects.instance [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid edd7bb04-60e7-4998-afe7-73fa36c25f5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.759 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.760 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Ensure instance console log exists: /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.762 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.762 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:43 np0005634017 nova_compute[243452]: 2026-02-28 10:40:43.763 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:43.990678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275243990756, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1615, "num_deletes": 250, "total_data_size": 2590305, "memory_usage": 2622600, "flush_reason": "Manual Compaction"}
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275243999144, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1504286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47402, "largest_seqno": 49016, "table_properties": {"data_size": 1498814, "index_size": 2612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14541, "raw_average_key_size": 20, "raw_value_size": 1486757, "raw_average_value_size": 2126, "num_data_blocks": 119, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275077, "oldest_key_time": 1772275077, "file_creation_time": 1772275243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 8545 microseconds, and 4369 cpu microseconds.
Feb 28 05:40:43 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:43.999240) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1504286 bytes OK
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:43.999282) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.000616) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.000635) EVENT_LOG_v1 {"time_micros": 1772275244000628, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.000661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2583286, prev total WAL file size 2583286, number of live WAL files 2.
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.001861) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1469KB)], [110(9852KB)]
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244001929, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11593316, "oldest_snapshot_seqno": -1}
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362230952' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7091 keys, 9281075 bytes, temperature: kUnknown
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244037967, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9281075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9235407, "index_size": 26868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183688, "raw_average_key_size": 25, "raw_value_size": 9110521, "raw_average_value_size": 1284, "num_data_blocks": 1053, "num_entries": 7091, "num_filter_entries": 7091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.038313) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9281075 bytes
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.040269) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 320.5 rd, 256.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(13.9) write-amplify(6.2) OK, records in: 7525, records dropped: 434 output_compression: NoCompression
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.040309) EVENT_LOG_v1 {"time_micros": 1772275244040297, "job": 66, "event": "compaction_finished", "compaction_time_micros": 36170, "compaction_time_cpu_micros": 18744, "output_level": 6, "num_output_files": 1, "total_output_size": 9281075, "num_input_records": 7525, "num_output_records": 7091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244040622, "job": 66, "event": "table_file_deletion", "file_number": 112}
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244042189, "job": 66, "event": "table_file_deletion", "file_number": 110}
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.001784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.051 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.072 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.076 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:44Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:70:52 10.100.0.7
Feb 28 05:40:44 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:44Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:70:52 10.100.0.7
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.252 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Successfully created port: fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:40:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647071877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.618 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.621 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.622 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.623 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.623 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.624 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.624 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.625 243456 DEBUG nova.objects.instance [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.727 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <uuid>9b081668-1653-448a-957e-da1ead7ecd21</uuid>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <name>instance-0000008e</name>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1426595857</nova:name>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:40:43</nova:creationTime>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:port uuid="5952cc57-b25c-40a2-b208-47e2104b88ad">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <nova:port uuid="6914940c-920a-4dc2-982a-9ae63584aee2">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fefb:c897" ipVersion="6"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <entry name="serial">9b081668-1653-448a-957e-da1ead7ecd21</entry>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <entry name="uuid">9b081668-1653-448a-957e-da1ead7ecd21</entry>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9b081668-1653-448a-957e-da1ead7ecd21_disk">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9b081668-1653-448a-957e-da1ead7ecd21_disk.config">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:49:9b:bb"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <target dev="tap5952cc57-b2"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:fb:c8:97"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <target dev="tap6914940c-92"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/console.log" append="off"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:40:44 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:40:44 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:40:44 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:40:44 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.728 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Preparing to wait for external event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.728 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.728 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Preparing to wait for external event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.730 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.730 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.731 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.731 243456 DEBUG os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.732 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.732 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.735 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5952cc57-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.736 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5952cc57-b2, col_values=(('external_ids', {'iface-id': '5952cc57-b25c-40a2-b208-47e2104b88ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:9b:bb', 'vm-uuid': '9b081668-1653-448a-957e-da1ead7ecd21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 NetworkManager[49805]: <info>  [1772275244.7383] manager: (tap5952cc57-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.746 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.747 243456 INFO os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2')#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.748 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.748 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.749 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.749 243456 DEBUG os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.750 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.750 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.750 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.754 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6914940c-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.754 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6914940c-92, col_values=(('external_ids', {'iface-id': '6914940c-920a-4dc2-982a-9ae63584aee2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:c8:97', 'vm-uuid': '9b081668-1653-448a-957e-da1ead7ecd21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:44 np0005634017 NetworkManager[49805]: <info>  [1772275244.7567] manager: (tap6914940c-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.762 243456 INFO os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92')#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.832 243456 DEBUG nova.network.neutron [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updated VIF entry in instance network info cache for port 6914940c-920a-4dc2-982a-9ae63584aee2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.833 243456 DEBUG nova.network.neutron [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.838 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.838 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.839 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:49:9b:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.839 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:fb:c8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.840 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Using config drive#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.874 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:44 np0005634017 nova_compute[243452]: 2026-02-28 10:40:44.886 243456 DEBUG oslo_concurrency.lockutils [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:40:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 137 op/s
Feb 28 05:40:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:40:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869735481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:40:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:40:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869735481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.531 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Creating config drive at /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.535 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8wl40i0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.569 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Successfully updated port: fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.676 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8wl40i0o" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.705 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.709 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config 9b081668-1653-448a-957e-da1ead7ecd21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.746 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.747 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.747 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.858 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config 9b081668-1653-448a-957e-da1ead7ecd21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.859 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deleting local config drive /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config because it was imported into RBD.#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.902 243456 DEBUG nova.compute.manager [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.903 243456 DEBUG nova.compute.manager [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing instance network info cache due to event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.903 243456 DEBUG oslo_concurrency.lockutils [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:40:46 np0005634017 kernel: tap5952cc57-b2: entered promiscuous mode
Feb 28 05:40:46 np0005634017 NetworkManager[49805]: <info>  [1772275246.9183] manager: (tap5952cc57-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01477|binding|INFO|Claiming lport 5952cc57-b25c-40a2-b208-47e2104b88ad for this chassis.
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01478|binding|INFO|5952cc57-b25c-40a2-b208-47e2104b88ad: Claiming fa:16:3e:49:9b:bb 10.100.0.13
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01479|binding|INFO|Setting lport 5952cc57-b25c-40a2-b208-47e2104b88ad ovn-installed in OVS
Feb 28 05:40:46 np0005634017 NetworkManager[49805]: <info>  [1772275246.9388] manager: (tap6914940c-92): new Tun device (/org/freedesktop/NetworkManager/Devices/618)
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:46 np0005634017 kernel: tap6914940c-92: entered promiscuous mode
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01480|if_status|INFO|Dropped 6 log messages in last 81 seconds (most recently, 81 seconds ago) due to excessive rate
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01481|if_status|INFO|Not updating pb chassis for 6914940c-920a-4dc2-982a-9ae63584aee2 now as sb is readonly
Feb 28 05:40:46 np0005634017 systemd-machined[209480]: New machine qemu-175-instance-0000008e.
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.969 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9b:bb 10.100.0.13'], port_security=['fa:16:3e:49:9b:bb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5952cc57-b25c-40a2-b208-47e2104b88ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:40:46 np0005634017 systemd-udevd[370451]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.970 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5952cc57-b25c-40a2-b208-47e2104b88ad in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 bound to our chassis#033[00m
Feb 28 05:40:46 np0005634017 systemd-udevd[370452]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.972 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9#033[00m
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01482|binding|INFO|Claiming lport 6914940c-920a-4dc2-982a-9ae63584aee2 for this chassis.
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01483|binding|INFO|6914940c-920a-4dc2-982a-9ae63584aee2: Claiming fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01484|binding|INFO|Setting lport 5952cc57-b25c-40a2-b208-47e2104b88ad up in Southbound
Feb 28 05:40:46 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01485|binding|INFO|Setting lport 6914940c-920a-4dc2-982a-9ae63584aee2 ovn-installed in OVS
Feb 28 05:40:46 np0005634017 systemd[1]: Started Virtual Machine qemu-175-instance-0000008e.
Feb 28 05:40:46 np0005634017 nova_compute[243452]: 2026-02-28 10:40:46.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:46 np0005634017 NetworkManager[49805]: <info>  [1772275246.9897] device (tap6914940c-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:40:46 np0005634017 NetworkManager[49805]: <info>  [1772275246.9902] device (tap5952cc57-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:40:46 np0005634017 NetworkManager[49805]: <info>  [1772275246.9905] device (tap6914940c-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:40:46 np0005634017 NetworkManager[49805]: <info>  [1772275246.9909] device (tap5952cc57-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.990 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5f84d3-7b5c-4d37-adfb-11ddc6388f3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.991 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0ad6cbe-81 in ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.993 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0ad6cbe-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94311a69-63bf-4ed3-99f3-183abc0d5cd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:46 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.994 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf2e56d-a1d2-497d-809c-4a9c84d0132e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.999 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], port_security=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefb:c897/64', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6914940c-920a-4dc2-982a-9ae63584aee2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:40:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:46Z|01486|binding|INFO|Setting lport 6914940c-920a-4dc2-982a-9ae63584aee2 up in Southbound
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.008 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[822cf5d8-800c-4acc-8746-e2688f41077a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93eba847-1922-4558-bcbe-182ca82a48e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.030 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:40:47 np0005634017 podman[370435]: 2026-02-28 10:40:47.046532863 +0000 UTC m=+0.089247538 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 28 05:40:47 np0005634017 podman[370431]: 2026-02-28 10:40:47.073775404 +0000 UTC m=+0.118530117 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:40:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 144 op/s
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.182 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[143707ee-285a-40c0-b039-b6f3a148c404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bad9b90-9e7a-42dd-b0c2-e2cb066f4874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 NetworkManager[49805]: <info>  [1772275247.1908] manager: (tapb0ad6cbe-80): new Veth device (/org/freedesktop/NetworkManager/Devices/619)
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.237 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0abf67ea-96c2-4372-bf68-5d80cf706466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.241 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bbc4f7-ad8e-466f-be0f-51efb83a392c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 NetworkManager[49805]: <info>  [1772275247.2679] device (tapb0ad6cbe-80): carrier: link connected
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.278 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[26c93401-feb3-4137-99b0-50ed24ffb51d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.298 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69ce7c17-90b4-4418-8a0e-6ce17e312877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370514, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.314 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc350d06-caca-4b4c-87f5-9fb76cdaa9db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f67e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675455, 'tstamp': 675455}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370515, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.337 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1306692-c8ce-4737-9ae3-afd86b248fb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370516, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f505da8d-1a07-4a7f-8955-a5527c470a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.449 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02e45937-a405-4524-b626-1e026fa46e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.450 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.451 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ad6cbe-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:47 np0005634017 NetworkManager[49805]: <info>  [1772275247.4936] manager: (tapb0ad6cbe-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Feb 28 05:40:47 np0005634017 kernel: tapb0ad6cbe-80: entered promiscuous mode
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.499 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0ad6cbe-80, col_values=(('external_ids', {'iface-id': '7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:47 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:47Z|01487|binding|INFO|Releasing lport 7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb from this chassis (sb_readonly=0)
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.507 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4c27ad-939d-4f74-976a-5da28027ac99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.510 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.pid.haproxy
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:40:47 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.510 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'env', 'PROCESS_TAG=haproxy-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.580 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275247.5801299, 9b081668-1653-448a-957e-da1ead7ecd21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.581 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Started (Lifecycle Event)#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.744 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.752 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275247.5808573, 9b081668-1653-448a-957e-da1ead7ecd21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.753 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.801 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.806 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:40:47 np0005634017 nova_compute[243452]: 2026-02-28 10:40:47.883 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:40:47 np0005634017 podman[370591]: 2026-02-28 10:40:47.887472795 +0000 UTC m=+0.064674612 container create 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 05:40:47 np0005634017 systemd[1]: Started libpod-conmon-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9.scope.
Feb 28 05:40:47 np0005634017 podman[370591]: 2026-02-28 10:40:47.859186934 +0000 UTC m=+0.036388811 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:40:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b49f6fee2a5624b0785481ede3d870b1d9e7cdf89c50a35e547fbbaa11922ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:47 np0005634017 podman[370591]: 2026-02-28 10:40:47.988285809 +0000 UTC m=+0.165487686 container init 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 05:40:47 np0005634017 podman[370591]: 2026-02-28 10:40:47.993143096 +0000 UTC m=+0.170344933 container start 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:40:48 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : New worker (370613) forked
Feb 28 05:40:48 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : Loading success.
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.042 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6914940c-920a-4dc2-982a-9ae63584aee2 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.044 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network caa8646e-5c97-4eb8-add7-69ea9ee54379#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.051 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2220c127-ab23-4e04-bfee-d400c537a2be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.052 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcaa8646e-51 in ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.054 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcaa8646e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.054 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61106858-6846-44ec-8c21-b1bc6d3f0745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.055 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[60ccf8a0-d8e3-4f67-b0ea-b0d9ac9f9fc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.065 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[959a8aa8-3acc-46f4-977b-d45749a07f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.088 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca92b2f8-7ee0-4953-9fb4-fd5a04eef275]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.115 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[072db84a-74b7-4bdd-b2f5-a7145c9e79e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 NetworkManager[49805]: <info>  [1772275248.1242] manager: (tapcaa8646e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/621)
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebba587-31f4-473a-b218-a77ff43c84bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 systemd-udevd[370499]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.161 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[442d269b-4166-4a9b-ba0a-5cd1d3b445bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.165 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e441089c-44a2-4458-bd53-fa58e3d8a0b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 NetworkManager[49805]: <info>  [1772275248.1873] device (tapcaa8646e-50): carrier: link connected
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.194 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3df221-1d52-4796-9b58-823f425c5c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.214 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc79356c-d892-4c33-ba2c-f347b8c00990]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370632, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.236 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99c8e9c5-d927-49b8-8ca2-f75d20019d10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:7bce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675547, 'tstamp': 675547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370633, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c18a53ee-3af9-44ab-9b5c-7b657b3ebfa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370634, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba729ab-76c7-429b-886b-b7f7cb1fdba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.345 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc8de9e-403c-4cbc-884b-bd088b5d2c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.348 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.349 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.350 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa8646e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:48 np0005634017 NetworkManager[49805]: <info>  [1772275248.3540] manager: (tapcaa8646e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Feb 28 05:40:48 np0005634017 kernel: tapcaa8646e-50: entered promiscuous mode
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.357 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.359 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcaa8646e-50, col_values=(('external_ids', {'iface-id': 'e3227d18-ed73-459d-b1a1-aecd179beb21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:48Z|01488|binding|INFO|Releasing lport e3227d18-ed73-459d-b1a1-aecd179beb21 from this chassis (sb_readonly=0)
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.373 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/caa8646e-5c97-4eb8-add7-69ea9ee54379.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/caa8646e-5c97-4eb8-add7-69ea9ee54379.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.375 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99aabbd0-d62b-4fd6-8e48-3561df0869f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.376 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/caa8646e-5c97-4eb8-add7-69ea9ee54379.pid.haproxy
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:40:48 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.377 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'env', 'PROCESS_TAG=haproxy-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/caa8646e-5c97-4eb8-add7-69ea9ee54379.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.741 243456 DEBUG nova.compute.manager [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.742 243456 DEBUG oslo_concurrency.lockutils [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.744 243456 DEBUG oslo_concurrency.lockutils [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.745 243456 DEBUG oslo_concurrency.lockutils [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:48 np0005634017 nova_compute[243452]: 2026-02-28 10:40:48.746 243456 DEBUG nova.compute.manager [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Processing event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:40:48 np0005634017 podman[370664]: 2026-02-28 10:40:48.781605683 +0000 UTC m=+0.064454116 container create c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 05:40:48 np0005634017 systemd[1]: Started libpod-conmon-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28.scope.
Feb 28 05:40:48 np0005634017 podman[370664]: 2026-02-28 10:40:48.747526578 +0000 UTC m=+0.030375071 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:40:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:40:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb10f203ca6c1c169e1b23413d123cb3b37254372fe81642560a63587f0c639/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:40:48 np0005634017 podman[370664]: 2026-02-28 10:40:48.87192564 +0000 UTC m=+0.154774073 container init c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 05:40:48 np0005634017 podman[370664]: 2026-02-28 10:40:48.880103082 +0000 UTC m=+0.162951505 container start c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:40:48 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : New worker (370686) forked
Feb 28 05:40:48 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : Loading success.
Feb 28 05:40:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.034 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.035 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.035 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.035 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Processing event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.037 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.037 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.037 243456 WARNING nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.038 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.041 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275249.0412693, 9b081668-1653-448a-957e-da1ead7ecd21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.042 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.044 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.047 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance spawned successfully.#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.048 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:40:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 143 op/s
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.083 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.089 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.094 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.094 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.095 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.095 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.096 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.096 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.108 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.126 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.192 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.193 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance network_info: |[{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.193 243456 DEBUG oslo_concurrency.lockutils [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.194 243456 DEBUG nova.network.neutron [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.198 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start _get_guest_xml network_info=[{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.203 243456 WARNING nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.210 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.210 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.215 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.216 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.216 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.216 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.219 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.222 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.269 243456 INFO nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 12.58 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.270 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.333 243456 INFO nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 13.70 seconds to build instance.#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.352 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:40:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767841553' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.812 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.850 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:49 np0005634017 nova_compute[243452]: 2026-02-28 10:40:49.856 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.381 243456 DEBUG nova.network.neutron [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updated VIF entry in instance network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.383 243456 DEBUG nova.network.neutron [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.413 243456 DEBUG oslo_concurrency.lockutils [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:40:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:40:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174489619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.455 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.458 243456 DEBUG nova.virt.libvirt.vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1953289712',display_name='tempest-TestNetworkBasicOps-server-1953289712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1953289712',id=143,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUh8LuUCOXWWDa6D8UF8euvZqjOZJ54pO6ZFrTQ6XYCJT4oCgqwtyXIMREsD2M2EJrCVoUUgoERLi0Re4iBqJPnVad9u2jkDO9MLRXngT9Ks7jWQAvBhSsQWV2xvZyyw==',key_name='tempest-TestNetworkBasicOps-1629892919',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-plwthb3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:43Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=edd7bb04-60e7-4998-afe7-73fa36c25f5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.459 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.461 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.464 243456 DEBUG nova.objects.instance [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid edd7bb04-60e7-4998-afe7-73fa36c25f5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.486 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <uuid>edd7bb04-60e7-4998-afe7-73fa36c25f5d</uuid>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <name>instance-0000008f</name>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1953289712</nova:name>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:40:49</nova:creationTime>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <nova:port uuid="fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <entry name="serial">edd7bb04-60e7-4998-afe7-73fa36c25f5d</entry>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <entry name="uuid">edd7bb04-60e7-4998-afe7-73fa36c25f5d</entry>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:11:97:49"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <target dev="tapfdcbaa6a-ae"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/console.log" append="off"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:40:50 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:40:50 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:40:50 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:40:50 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.498 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Preparing to wait for external event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.499 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.499 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.500 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.501 243456 DEBUG nova.virt.libvirt.vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1953289712',display_name='tempest-TestNetworkBasicOps-server-1953289712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1953289712',id=143,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUh8LuUCOXWWDa6D8UF8euvZqjOZJ54pO6ZFrTQ6XYCJT4oCgqwtyXIMREsD2M2EJrCVoUUgoERLi0Re4iBqJPnVad9u2jkDO9MLRXngT9Ks7jWQAvBhSsQWV2xvZyyw==',key_name='tempest-TestNetworkBasicOps-1629892919',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-plwthb3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:43Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=edd7bb04-60e7-4998-afe7-73fa36c25f5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.503 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.504 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.505 243456 DEBUG os_vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.514 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.514 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdcbaa6a-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.516 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdcbaa6a-ae, col_values=(('external_ids', {'iface-id': 'fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:97:49', 'vm-uuid': 'edd7bb04-60e7-4998-afe7-73fa36c25f5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:50 np0005634017 NetworkManager[49805]: <info>  [1772275250.5194] manager: (tapfdcbaa6a-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.528 243456 INFO os_vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae')#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.686 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.687 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.688 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:11:97:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.689 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Using config drive#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.726 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.953 243456 DEBUG nova.compute.manager [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.955 243456 DEBUG oslo_concurrency.lockutils [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.956 243456 DEBUG oslo_concurrency.lockutils [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.956 243456 DEBUG oslo_concurrency.lockutils [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.957 243456 DEBUG nova.compute.manager [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:50 np0005634017 nova_compute[243452]: 2026-02-28 10:40:50.958 243456 WARNING nova.compute.manager [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad for instance with vm_state active and task_state None.#033[00m
Feb 28 05:40:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 448 KiB/s rd, 4.6 MiB/s wr, 119 op/s
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.223 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Creating config drive at /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.228 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp25gxqpo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.377 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp25gxqpo" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.409 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.414 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.550 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.552 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.553 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deleting local config drive /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config because it was imported into RBD.#033[00m
Feb 28 05:40:51 np0005634017 NetworkManager[49805]: <info>  [1772275251.6080] manager: (tapfdcbaa6a-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Feb 28 05:40:51 np0005634017 kernel: tapfdcbaa6a-ae: entered promiscuous mode
Feb 28 05:40:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:51Z|01489|binding|INFO|Claiming lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for this chassis.
Feb 28 05:40:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:51Z|01490|binding|INFO|fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565: Claiming fa:16:3e:11:97:49 10.100.0.4
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:51Z|01491|binding|INFO|Setting lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 ovn-installed in OVS
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.625 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.629 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:51 np0005634017 systemd-udevd[370828]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:40:51 np0005634017 NetworkManager[49805]: <info>  [1772275251.6487] device (tapfdcbaa6a-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:40:51 np0005634017 NetworkManager[49805]: <info>  [1772275251.6497] device (tapfdcbaa6a-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:40:51 np0005634017 systemd-machined[209480]: New machine qemu-176-instance-0000008f.
Feb 28 05:40:51 np0005634017 systemd[1]: Started Virtual Machine qemu-176-instance-0000008f.
Feb 28 05:40:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:40:51Z|01492|binding|INFO|Setting lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 up in Southbound
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.676 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:97:49 10.100.0.4'], port_security=['fa:16:3e:11:97:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'edd7bb04-60e7-4998-afe7-73fa36c25f5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9965c3d-ca45-4c8c-963d-46eedcf9cfa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.679 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 bound to our chassis#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.681 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.702 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54a43c0c-ebda-4ff3-8f27-fad022d91308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.732 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[86dc68ed-b919-420a-9b3a-89639eae9726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34253bd2-b2d9-47f4-9168-41498efe7dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.772 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[62754743-3066-4fd6-be9d-74dd6bcefc5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f85ff6-9ad6-4da3-bec2-7ffb813aa235]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370845, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.813 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1637f329-54a8-4c9f-8a45-f8bf5f56cca1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673413, 'tstamp': 673413}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370846, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673415, 'tstamp': 673415}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370846, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11e06da5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.820 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11e06da5-b0, col_values=(('external_ids', {'iface-id': 'ea6114a2-28c6-4510-bbd6-16e4d9cb4f71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.986 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275251.9852517, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:51 np0005634017 nova_compute[243452]: 2026-02-28 10:40:51.987 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Started (Lifecycle Event)#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.044 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.050 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275251.9866974, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.051 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.072 243456 DEBUG nova.compute.manager [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.073 243456 DEBUG oslo_concurrency.lockutils [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.074 243456 DEBUG oslo_concurrency.lockutils [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.074 243456 DEBUG oslo_concurrency.lockutils [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.075 243456 DEBUG nova.compute.manager [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Processing event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.076 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.084 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.090 243456 INFO nova.virt.libvirt.driver [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance spawned successfully.#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.090 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.096 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.100 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275252.0794888, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.101 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.285 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.294 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.301 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.301 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.302 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.303 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.303 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.304 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.637 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.782 243456 INFO nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 9.68 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.783 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:40:52 np0005634017 nova_compute[243452]: 2026-02-28 10:40:52.915 243456 INFO nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 10.88 seconds to build instance.#033[00m
Feb 28 05:40:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.098 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.464 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.464 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.465 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.465 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.465 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.724 243456 DEBUG nova.compute.manager [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.724 243456 DEBUG nova.compute.manager [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.725 243456 DEBUG oslo_concurrency.lockutils [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.725 243456 DEBUG oslo_concurrency.lockutils [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:40:53 np0005634017 nova_compute[243452]: 2026-02-28 10:40:53.725 243456 DEBUG nova.network.neutron [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:40:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:40:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670201687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:40:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.002 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.116 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.117 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.121 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.121 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.125 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.126 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.152 243456 DEBUG nova.compute.manager [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG oslo_concurrency.lockutils [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG oslo_concurrency.lockutils [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG oslo_concurrency.lockutils [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG nova.compute.manager [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] No waiting events found dispatching network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.154 243456 WARNING nova.compute.manager [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received unexpected event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.315 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.316 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3140MB free_disk=59.90025749709457GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 5e1a8d62-9ac1-417d-8194-58901bb4018e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9b081668-1653-448a-957e-da1ead7ecd21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance edd7bb04-60e7-4998-afe7-73fa36c25f5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.402 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.402 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.491 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:40:54 np0005634017 nova_compute[243452]: 2026-02-28 10:40:54.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:40:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374508833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.040 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.051 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.072 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:40:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 199 op/s
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.100 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.101 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.281 243456 DEBUG nova.network.neutron [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updated VIF entry in instance network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.283 243456 DEBUG nova.network.neutron [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.318 243456 DEBUG oslo_concurrency.lockutils [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:40:55 np0005634017 nova_compute[243452]: 2026-02-28 10:40:55.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:56.263 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:40:56 np0005634017 nova_compute[243452]: 2026-02-28 10:40:56.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:40:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:56.294 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:40:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 163 op/s
Feb 28 05:40:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:57.879 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:40:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:57.880 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:40:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:57.880 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:40:58 np0005634017 nova_compute[243452]: 2026-02-28 10:40:58.699 243456 DEBUG nova.compute.manager [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:40:58 np0005634017 nova_compute[243452]: 2026-02-28 10:40:58.701 243456 DEBUG nova.compute.manager [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing instance network info cache due to event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:40:58 np0005634017 nova_compute[243452]: 2026-02-28 10:40:58.701 243456 DEBUG oslo_concurrency.lockutils [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:40:58 np0005634017 nova_compute[243452]: 2026-02-28 10:40:58.702 243456 DEBUG oslo_concurrency.lockutils [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:40:58 np0005634017 nova_compute[243452]: 2026-02-28 10:40:58.702 243456 DEBUG nova.network.neutron [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:40:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:40:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 787 KiB/s wr, 155 op/s
Feb 28 05:40:59 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:40:59.298 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:40:59 np0005634017 nova_compute[243452]: 2026-02-28 10:40:59.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:00 np0005634017 nova_compute[243452]: 2026-02-28 10:41:00.048 243456 DEBUG nova.network.neutron [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updated VIF entry in instance network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:00 np0005634017 nova_compute[243452]: 2026-02-28 10:41:00.048 243456 DEBUG nova.network.neutron [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:00 np0005634017 nova_compute[243452]: 2026-02-28 10:41:00.068 243456 DEBUG oslo_concurrency.lockutils [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:41:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:41:00 np0005634017 nova_compute[243452]: 2026-02-28 10:41:00.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:00Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:9b:bb 10.100.0.13
Feb 28 05:41:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:00Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:9b:bb 10.100.0.13
Feb 28 05:41:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 257 KiB/s wr, 147 op/s
Feb 28 05:41:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Feb 28 05:41:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:03Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:97:49 10.100.0.4
Feb 28 05:41:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:03Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:97:49 10.100.0.4
Feb 28 05:41:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:04 np0005634017 nova_compute[243452]: 2026-02-28 10:41:04.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Feb 28 05:41:05 np0005634017 nova_compute[243452]: 2026-02-28 10:41:05.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:06 np0005634017 podman[371028]: 2026-02-28 10:41:06.865198925 +0000 UTC m=+0.071643190 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:41:07 np0005634017 podman[371028]: 2026-02-28 10:41:07.027225693 +0000 UTC m=+0.233669878 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:41:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 390 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 153 op/s
Feb 28 05:41:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:41:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:41:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:41:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.088060457 +0000 UTC m=+0.066651568 container create d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:41:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.052643044 +0000 UTC m=+0.031234225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:41:09 np0005634017 systemd[1]: Started libpod-conmon-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope.
Feb 28 05:41:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:41:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:09 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:41:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.201506379 +0000 UTC m=+0.180097500 container init d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.208834937 +0000 UTC m=+0.187426058 container start d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.212927093 +0000 UTC m=+0.191518184 container attach d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:41:09 np0005634017 systemd[1]: libpod-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope: Deactivated successfully.
Feb 28 05:41:09 np0005634017 jolly_jones[371378]: 167 167
Feb 28 05:41:09 np0005634017 conmon[371378]: conmon d3954b8d1bae0910f412 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope/container/memory.events
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.217441551 +0000 UTC m=+0.196032672 container died d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 05:41:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-79b1fb3273cd653a81ea424462aa73b0a2aa3505ede94f3ce2b98b09957552b2-merged.mount: Deactivated successfully.
Feb 28 05:41:09 np0005634017 podman[371362]: 2026-02-28 10:41:09.266306714 +0000 UTC m=+0.244897825 container remove d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:41:09 np0005634017 systemd[1]: libpod-conmon-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope: Deactivated successfully.
Feb 28 05:41:09 np0005634017 podman[371401]: 2026-02-28 10:41:09.47525565 +0000 UTC m=+0.061231684 container create 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:41:09 np0005634017 systemd[1]: Started libpod-conmon-135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786.scope.
Feb 28 05:41:09 np0005634017 podman[371401]: 2026-02-28 10:41:09.448995526 +0000 UTC m=+0.034971560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:41:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:09 np0005634017 podman[371401]: 2026-02-28 10:41:09.588207098 +0000 UTC m=+0.174183172 container init 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:41:09 np0005634017 podman[371401]: 2026-02-28 10:41:09.598210502 +0000 UTC m=+0.184186506 container start 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:41:09 np0005634017 podman[371401]: 2026-02-28 10:41:09.602249316 +0000 UTC m=+0.188225390 container attach 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:41:09 np0005634017 nova_compute[243452]: 2026-02-28 10:41:09.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:10 np0005634017 interesting_hypatia[371418]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:41:10 np0005634017 interesting_hypatia[371418]: --> All data devices are unavailable
Feb 28 05:41:10 np0005634017 systemd[1]: libpod-135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786.scope: Deactivated successfully.
Feb 28 05:41:10 np0005634017 podman[371401]: 2026-02-28 10:41:10.129616499 +0000 UTC m=+0.715592543 container died 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:41:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26-merged.mount: Deactivated successfully.
Feb 28 05:41:10 np0005634017 podman[371401]: 2026-02-28 10:41:10.177839355 +0000 UTC m=+0.763815339 container remove 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:41:10 np0005634017 systemd[1]: libpod-conmon-135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786.scope: Deactivated successfully.
Feb 28 05:41:10 np0005634017 nova_compute[243452]: 2026-02-28 10:41:10.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.671455362 +0000 UTC m=+0.048754962 container create 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:41:10 np0005634017 systemd[1]: Started libpod-conmon-9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136.scope.
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.645710093 +0000 UTC m=+0.023009743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:41:10 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.774670115 +0000 UTC m=+0.151969745 container init 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.784455262 +0000 UTC m=+0.161754892 container start 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.789017201 +0000 UTC m=+0.166316831 container attach 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:41:10 np0005634017 modest_zhukovsky[371529]: 167 167
Feb 28 05:41:10 np0005634017 systemd[1]: libpod-9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136.scope: Deactivated successfully.
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.790593715 +0000 UTC m=+0.167893345 container died 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:41:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b532511749929baa8358acc45e4428316161154651ac61cad504c1725f5b29a7-merged.mount: Deactivated successfully.
Feb 28 05:41:10 np0005634017 podman[371512]: 2026-02-28 10:41:10.838293936 +0000 UTC m=+0.215593566 container remove 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:41:10 np0005634017 systemd[1]: libpod-conmon-9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136.scope: Deactivated successfully.
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.029382787 +0000 UTC m=+0.042343530 container create dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 28 05:41:11 np0005634017 systemd[1]: Started libpod-conmon-dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15.scope.
Feb 28 05:41:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Feb 28 05:41:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.010457351 +0000 UTC m=+0.023418094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.136340876 +0000 UTC m=+0.149301689 container init dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.146435541 +0000 UTC m=+0.159396284 container start dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.150698392 +0000 UTC m=+0.163659165 container attach dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]: {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:    "0": [
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:        {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "devices": [
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "/dev/loop3"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            ],
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_name": "ceph_lv0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_size": "21470642176",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "name": "ceph_lv0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "tags": {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cluster_name": "ceph",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.crush_device_class": "",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.encrypted": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.objectstore": "bluestore",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osd_id": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.type": "block",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.vdo": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.with_tpm": "0"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            },
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "type": "block",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "vg_name": "ceph_vg0"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:        }
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:    ],
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:    "1": [
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:        {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "devices": [
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "/dev/loop4"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            ],
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_name": "ceph_lv1",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_size": "21470642176",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "name": "ceph_lv1",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "tags": {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cluster_name": "ceph",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.crush_device_class": "",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.encrypted": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.objectstore": "bluestore",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osd_id": "1",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.type": "block",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.vdo": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.with_tpm": "0"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            },
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "type": "block",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "vg_name": "ceph_vg1"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:        }
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:    ],
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:    "2": [
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:        {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "devices": [
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "/dev/loop5"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            ],
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_name": "ceph_lv2",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_size": "21470642176",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "name": "ceph_lv2",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "tags": {
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.cluster_name": "ceph",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.crush_device_class": "",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.encrypted": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.objectstore": "bluestore",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osd_id": "2",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.type": "block",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.vdo": "0",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:                "ceph.with_tpm": "0"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            },
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "type": "block",
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:            "vg_name": "ceph_vg2"
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:        }
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]:    ]
Feb 28 05:41:11 np0005634017 wizardly_goldberg[371570]: }
Feb 28 05:41:11 np0005634017 systemd[1]: libpod-dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15.scope: Deactivated successfully.
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.433246953 +0000 UTC m=+0.446207746 container died dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:41:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5-merged.mount: Deactivated successfully.
Feb 28 05:41:11 np0005634017 podman[371553]: 2026-02-28 10:41:11.487867409 +0000 UTC m=+0.500828182 container remove dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:41:11 np0005634017 systemd[1]: libpod-conmon-dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15.scope: Deactivated successfully.
Feb 28 05:41:11 np0005634017 podman[371654]: 2026-02-28 10:41:11.93782288 +0000 UTC m=+0.041624019 container create 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:41:11 np0005634017 systemd[1]: Started libpod-conmon-3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5.scope.
Feb 28 05:41:12 np0005634017 podman[371654]: 2026-02-28 10:41:11.920978413 +0000 UTC m=+0.024779582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:41:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:12 np0005634017 podman[371654]: 2026-02-28 10:41:12.038273385 +0000 UTC m=+0.142074544 container init 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:41:12 np0005634017 podman[371654]: 2026-02-28 10:41:12.047681031 +0000 UTC m=+0.151482160 container start 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 05:41:12 np0005634017 podman[371654]: 2026-02-28 10:41:12.051981163 +0000 UTC m=+0.155782472 container attach 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 05:41:12 np0005634017 charming_hamilton[371670]: 167 167
Feb 28 05:41:12 np0005634017 systemd[1]: libpod-3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5.scope: Deactivated successfully.
Feb 28 05:41:12 np0005634017 podman[371654]: 2026-02-28 10:41:12.054954697 +0000 UTC m=+0.158755826 container died 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:41:12 np0005634017 systemd[1]: var-lib-containers-storage-overlay-df0153440a59ffc30239947511fc5ef40dcadad74239ff267f02cf0d67e8aae6-merged.mount: Deactivated successfully.
Feb 28 05:41:12 np0005634017 podman[371654]: 2026-02-28 10:41:12.094676102 +0000 UTC m=+0.198477271 container remove 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:41:12 np0005634017 systemd[1]: libpod-conmon-3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5.scope: Deactivated successfully.
Feb 28 05:41:12 np0005634017 podman[371694]: 2026-02-28 10:41:12.285375242 +0000 UTC m=+0.044811160 container create 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:41:12 np0005634017 systemd[1]: Started libpod-conmon-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope.
Feb 28 05:41:12 np0005634017 podman[371694]: 2026-02-28 10:41:12.26272867 +0000 UTC m=+0.022164348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:41:12 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:12 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:12 np0005634017 podman[371694]: 2026-02-28 10:41:12.392199507 +0000 UTC m=+0.151635255 container init 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:41:12 np0005634017 podman[371694]: 2026-02-28 10:41:12.4039761 +0000 UTC m=+0.163411788 container start 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 05:41:12 np0005634017 podman[371694]: 2026-02-28 10:41:12.409773814 +0000 UTC m=+0.169209572 container attach 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.414 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.416 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.433 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.509 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.509 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.520 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.520 243456 INFO nova.compute.claims [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:41:12 np0005634017 nova_compute[243452]: 2026-02-28 10:41:12.698 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 4.1 MiB/s wr, 127 op/s
Feb 28 05:41:13 np0005634017 lvm[371811]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:41:13 np0005634017 lvm[371811]: VG ceph_vg1 finished
Feb 28 05:41:13 np0005634017 lvm[371810]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:41:13 np0005634017 lvm[371810]: VG ceph_vg0 finished
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2193481640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:13 np0005634017 lvm[371813]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:41:13 np0005634017 lvm[371813]: VG ceph_vg2 finished
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.251 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.262 243456 DEBUG nova.compute.provider_tree [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.285 243456 DEBUG nova.scheduler.client.report [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.309 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.310 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:41:13 np0005634017 dreamy_greider[371711]: {}
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.354 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.355 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:41:13 np0005634017 systemd[1]: libpod-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope: Deactivated successfully.
Feb 28 05:41:13 np0005634017 systemd[1]: libpod-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope: Consumed 1.337s CPU time.
Feb 28 05:41:13 np0005634017 podman[371694]: 2026-02-28 10:41:13.374887261 +0000 UTC m=+1.134322949 container died 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.375 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.390 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:41:13 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada-merged.mount: Deactivated successfully.
Feb 28 05:41:13 np0005634017 podman[371694]: 2026-02-28 10:41:13.432262796 +0000 UTC m=+1.191698494 container remove 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:41:13 np0005634017 systemd[1]: libpod-conmon-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope: Deactivated successfully.
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.479 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.480 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.481 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Creating image(s)#033[00m
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.521 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.551 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.587 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.595 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.649 243456 DEBUG nova.policy [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.710 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.711 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.712 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.713 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.743 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.748 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:13 np0005634017 nova_compute[243452]: 2026-02-28 10:41:13.993 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.064 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.143 243456 DEBUG nova.objects.instance [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 47d618f6-612e-4944-8a4d-a3509d6e3d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.159 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.159 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Ensure instance console log exists: /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.160 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.160 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.160 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.374 243456 INFO nova.compute.manager [None req-03318e18-c967-45dc-bf0c-ba2092f5c42b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Get console output#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.384 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully created port: b06d794d-14a7-4fd8-bd85-2e37bff5e446 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.384 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.746 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:14 np0005634017 nova_compute[243452]: 2026-02-28 10:41:14.954 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully created port: f107fdf1-b771-447b-adb6-bd9bc1e35b53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:41:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 401 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 3.4 MiB/s wr, 110 op/s
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.279 243456 DEBUG nova.compute.manager [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.279 243456 DEBUG nova.compute.manager [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.280 243456 DEBUG oslo_concurrency.lockutils [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.280 243456 DEBUG oslo_concurrency.lockutils [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.280 243456 DEBUG nova.network.neutron [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:15 np0005634017 nova_compute[243452]: 2026-02-28 10:41:15.770 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully updated port: b06d794d-14a7-4fd8-bd85-2e37bff5e446 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.345 243456 INFO nova.compute.manager [None req-755334bc-63bd-4ce8-8fdf-63c60a5ee753 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Get console output#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.351 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.409 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully updated port: f107fdf1-b771-447b-adb6-bd9bc1e35b53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.427 243456 DEBUG nova.network.neutron [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.428 243456 DEBUG nova.network.neutron [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.431 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.431 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.431 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.448 243456 DEBUG oslo_concurrency.lockutils [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:16 np0005634017 nova_compute[243452]: 2026-02-28 10:41:16.615 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:41:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 421 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.370 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.370 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.371 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.371 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.372 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.372 243456 WARNING nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.372 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.374 243456 WARNING nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.374 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.374 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:17 np0005634017 nova_compute[243452]: 2026-02-28 10:41:17.375 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.075 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.087 243456 INFO nova.compute.manager [None req-ff45accb-2c0b-40dd-a2cf-3a52bea86823 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Get console output#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.100 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.101 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance network_info: |[{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.097 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.103 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.103 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.112 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start _get_guest_xml network_info=[{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.126 243456 WARNING nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.143 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.144 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.149 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.150 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.150 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.151 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.151 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.151 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.153 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.153 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.155 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:18 np0005634017 podman[372024]: 2026-02-28 10:41:18.177460521 +0000 UTC m=+0.099908780 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 05:41:18 np0005634017 podman[372023]: 2026-02-28 10:41:18.186044524 +0000 UTC m=+0.109449700 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 28 05:41:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:41:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372369374' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.732 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.772 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:18 np0005634017 nova_compute[243452]: 2026-02-28 10:41:18.778 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.039 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.040 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.040 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.041 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.041 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.043 243456 INFO nova.compute.manager [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Terminating instance#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.045 243456 DEBUG nova.compute.manager [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:41:19 np0005634017 kernel: tapfdcbaa6a-ae (unregistering): left promiscuous mode
Feb 28 05:41:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Feb 28 05:41:19 np0005634017 NetworkManager[49805]: <info>  [1772275279.1003] device (tapfdcbaa6a-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:19Z|01493|binding|INFO|Releasing lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 from this chassis (sb_readonly=0)
Feb 28 05:41:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:19Z|01494|binding|INFO|Setting lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 down in Southbound
Feb 28 05:41:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:19Z|01495|binding|INFO|Removing iface tapfdcbaa6a-ae ovn-installed in OVS
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.138 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:97:49 10.100.0.4'], port_security=['fa:16:3e:11:97:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'edd7bb04-60e7-4998-afe7-73fa36c25f5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9965c3d-ca45-4c8c-963d-46eedcf9cfa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.142 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 unbound from our chassis#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.146 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569#033[00m
Feb 28 05:41:19 np0005634017 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Feb 28 05:41:19 np0005634017 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Consumed 12.512s CPU time.
Feb 28 05:41:19 np0005634017 systemd-machined[209480]: Machine qemu-176-instance-0000008f terminated.
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.166 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63eeb7b9-8fff-4156-a983-c7aa74c3fa3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.199 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e23a6084-41d6-4986-a207-ce13b58ccbd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.203 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9bb8a1-7417-4d76-9ba1-3df3c8bc3ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.236 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf2acc5-bcac-473c-b699-53c7382c362a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.264 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f11b8638-386b-43fa-a0c3-2abc85d00053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372141, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.285 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[813e15dc-b943-4fb8-82b1-e8ffaed3c746]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673413, 'tstamp': 673413}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372145, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673415, 'tstamp': 673415}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372145, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.288 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.289 243456 INFO nova.virt.libvirt.driver [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance destroyed successfully.#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.291 243456 DEBUG nova.objects.instance [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid edd7bb04-60e7-4998-afe7-73fa36c25f5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.307 243456 DEBUG nova.virt.libvirt.vif [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1953289712',display_name='tempest-TestNetworkBasicOps-server-1953289712',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1953289712',id=143,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUh8LuUCOXWWDa6D8UF8euvZqjOZJ54pO6ZFrTQ6XYCJT4oCgqwtyXIMREsD2M2EJrCVoUUgoERLi0Re4iBqJPnVad9u2jkDO9MLRXngT9Ks7jWQAvBhSsQWV2xvZyyw==',key_name='tempest-TestNetworkBasicOps-1629892919',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-plwthb3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=edd7bb04-60e7-4998-afe7-73fa36c25f5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.308 243456 DEBUG nova.network.os_vif_util [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.310 243456 DEBUG nova.network.os_vif_util [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.311 243456 DEBUG os_vif [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.315 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdcbaa6a-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.340 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.343 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11e06da5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.343 243456 INFO os_vif [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae')#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.343 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.344 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11e06da5-b0, col_values=(('external_ids', {'iface-id': 'ea6114a2-28c6-4510-bbd6-16e4d9cb4f71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.345 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:41:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1800979560' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.389 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.391 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.391 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.392 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.394 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.395 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.396 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.398 243456 DEBUG nova.objects.instance [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47d618f6-612e-4944-8a4d-a3509d6e3d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.416 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <uuid>47d618f6-612e-4944-8a4d-a3509d6e3d35</uuid>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <name>instance-00000090</name>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-740899339</nova:name>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:41:18</nova:creationTime>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:port uuid="b06d794d-14a7-4fd8-bd85-2e37bff5e446">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <nova:port uuid="f107fdf1-b771-447b-adb6-bd9bc1e35b53">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe01:f09a" ipVersion="6"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <entry name="serial">47d618f6-612e-4944-8a4d-a3509d6e3d35</entry>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <entry name="uuid">47d618f6-612e-4944-8a4d-a3509d6e3d35</entry>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/47d618f6-612e-4944-8a4d-a3509d6e3d35_disk">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:02:0a:a1"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <target dev="tapb06d794d-14"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:01:f0:9a"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <target dev="tapf107fdf1-b7"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/console.log" append="off"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:41:19 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:41:19 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:41:19 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:41:19 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.417 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Preparing to wait for external event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.417 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.418 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.418 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.419 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Preparing to wait for external event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.419 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.419 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.420 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.421 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.421 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.422 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.423 243456 DEBUG os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.424 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.430 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb06d794d-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.431 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb06d794d-14, col_values=(('external_ids', {'iface-id': 'b06d794d-14a7-4fd8-bd85-2e37bff5e446', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:0a:a1', 'vm-uuid': '47d618f6-612e-4944-8a4d-a3509d6e3d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 NetworkManager[49805]: <info>  [1772275279.4340] manager: (tapb06d794d-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.443 243456 INFO os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14')#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.444 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.445 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.446 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.446 243456 DEBUG os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.447 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.447 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.452 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf107fdf1-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.453 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf107fdf1-b7, col_values=(('external_ids', {'iface-id': 'f107fdf1-b771-447b-adb6-bd9bc1e35b53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:f0:9a', 'vm-uuid': '47d618f6-612e-4944-8a4d-a3509d6e3d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 NetworkManager[49805]: <info>  [1772275279.4561] manager: (tapf107fdf1-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.458 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.464 243456 INFO os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7')#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.468 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.530 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.530 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.531 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:02:0a:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.531 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:01:f0:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.532 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Using config drive#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.562 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.599 243456 INFO nova.virt.libvirt.driver [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deleting instance files /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d_del#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.600 243456 INFO nova.virt.libvirt.driver [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deletion of /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d_del complete#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.655 243456 INFO nova.compute.manager [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.656 243456 DEBUG oslo.service.loopingcall [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.656 243456 DEBUG nova.compute.manager [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.656 243456 DEBUG nova.network.neutron [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.925 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.925 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.943 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.943 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.943 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-f107fdf1-b771-447b-adb6-bd9bc1e35b53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.944 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.944 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:19 np0005634017 nova_compute[243452]: 2026-02-28 10:41:19.944 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port f107fdf1-b771-447b-adb6-bd9bc1e35b53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.011 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Creating config drive at /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.015 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1webwknp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.158 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1webwknp" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.190 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.195 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.368 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.369 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deleting local config drive /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config because it was imported into RBD.#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.372 243456 DEBUG nova.network.neutron [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.399 243456 INFO nova.compute.manager [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 0.74 seconds to deallocate network for instance.#033[00m
Feb 28 05:41:20 np0005634017 kernel: tapb06d794d-14: entered promiscuous mode
Feb 28 05:41:20 np0005634017 systemd-udevd[372133]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:41:20 np0005634017 NetworkManager[49805]: <info>  [1772275280.4171] manager: (tapb06d794d-14): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Feb 28 05:41:20 np0005634017 NetworkManager[49805]: <info>  [1772275280.4728] device (tapb06d794d-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:41:20 np0005634017 NetworkManager[49805]: <info>  [1772275280.4757] device (tapb06d794d-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01496|binding|INFO|Claiming lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 for this chassis.
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01497|binding|INFO|b06d794d-14a7-4fd8-bd85-2e37bff5e446: Claiming fa:16:3e:02:0a:a1 10.100.0.5
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.476 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.478 243456 DEBUG nova.compute.manager [req-1a0a0735-d085-4980-9686-5415fae56f95 req-23969133-0392-4d7d-9705-9e402e51e579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-deleted-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:20 np0005634017 NetworkManager[49805]: <info>  [1772275280.4855] manager: (tapf107fdf1-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.485 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:0a:a1 10.100.0.5'], port_security=['fa:16:3e:02:0a:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b06d794d-14a7-4fd8-bd85-2e37bff5e446) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.487 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b06d794d-14a7-4fd8-bd85-2e37bff5e446 in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 bound to our chassis#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.488 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9#033[00m
Feb 28 05:41:20 np0005634017 kernel: tapf107fdf1-b7: entered promiscuous mode
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01498|binding|INFO|Setting lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 ovn-installed in OVS
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01499|binding|INFO|Setting lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 up in Southbound
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01500|binding|INFO|Claiming lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 for this chassis.
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01501|binding|INFO|f107fdf1-b771-447b-adb6-bd9bc1e35b53: Claiming fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a
Feb 28 05:41:20 np0005634017 NetworkManager[49805]: <info>  [1772275280.5040] device (tapf107fdf1-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:41:20 np0005634017 NetworkManager[49805]: <info>  [1772275280.5051] device (tapf107fdf1-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.511 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], port_security=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe01:f09a/64', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f107fdf1-b771-447b-adb6-bd9bc1e35b53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc911d56-a175-4ac0-93fd-185675954ead]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01502|binding|INFO|Setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 ovn-installed in OVS
Feb 28 05:41:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:20Z|01503|binding|INFO|Setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 up in Southbound
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:20 np0005634017 systemd-machined[209480]: New machine qemu-177-instance-00000090.
Feb 28 05:41:20 np0005634017 systemd[1]: Started Virtual Machine qemu-177-instance-00000090.
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.562 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af9afa1d-2f0e-43a2-b8e6-973658eeb028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f49ca0-6913-485f-b2b1-ee877fb52f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.594 243456 DEBUG oslo_concurrency.processutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.605 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9f160d-ddde-42cf-8aef-32b7f0b876f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7209cd86-9ecf-41b8-968a-da667e55c9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372266, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.646 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3939f68-4561-4f54-a105-4db9e276d305]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675468, 'tstamp': 675468}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372269, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675472, 'tstamp': 675472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372269, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.650 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.654 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ad6cbe-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.655 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.656 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0ad6cbe-80, col_values=(('external_ids', {'iface-id': '7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.657 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.660 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f107fdf1-b771-447b-adb6-bd9bc1e35b53 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.663 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network caa8646e-5c97-4eb8-add7-69ea9ee54379#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.679 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a721200-d4fb-44e1-80a5-96f2deac7605]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.713 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2e79a1e7-3ab3-4336-9d14-8504eb8d16c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.718 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd13343d-4e3d-4c05-8b30-a35c6c1eb539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.747 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ef361ac1-a8f1-4f45-b450-e71a4c10e365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f871863-5fe8-4ab9-aa66-efc3d49211bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 5, 'rx_bytes': 1502, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 5, 'rx_bytes': 1502, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372309, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.765 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.766 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf68a074-4c86-4849-b5c2-eaf288261705]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcaa8646e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675562, 'tstamp': 675562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372311, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.788 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.791 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa8646e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.792 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.792 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcaa8646e-50, col_values=(('external_ids', {'iface-id': 'e3227d18-ed73-459d-b1a1-aecd179beb21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.793 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.795 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.795 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.796 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.797 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.797 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.797 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.798 243456 WARNING nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.798 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.799 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.799 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.800 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.801 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.802 243456 WARNING nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.802 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.803 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing instance network info cache due to event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.803 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.804 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.804 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.926 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275280.9259956, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.927 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Started (Lifecycle Event)#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.945 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.949 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275280.9261737, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.950 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.961 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.968 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.973 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:41:20 np0005634017 nova_compute[243452]: 2026-02-28 10:41:20.993 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:41:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 379 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 28 05:41:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003126640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.195 243456 DEBUG oslo_concurrency.processutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.201 243456 DEBUG nova.compute.provider_tree [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.221 243456 DEBUG nova.scheduler.client.report [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.243 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.269 243456 INFO nova.scheduler.client.report [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance edd7bb04-60e7-4998-afe7-73fa36c25f5d#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.278 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.304 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.314 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port f107fdf1-b771-447b-adb6-bd9bc1e35b53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.315 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.341 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.346 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.572 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-unplugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.572 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] No waiting events found dispatching network-vif-unplugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received unexpected event network-vif-unplugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] No waiting events found dispatching network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received unexpected event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Processing event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No event matching network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 in dict_keys([('network-vif-plugged', 'f107fdf1-b771-447b-adb6-bd9bc1e35b53')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Processing event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.578 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.582 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275281.5822911, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.582 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.585 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.589 243456 INFO nova.virt.libvirt.driver [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance spawned successfully.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.590 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.609 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.616 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.619 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.620 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.620 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.620 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.621 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.621 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.654 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.695 243456 INFO nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 8.22 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.695 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.764 243456 INFO nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 9.29 seconds to build instance.#033[00m
Feb 28 05:41:21 np0005634017 nova_compute[243452]: 2026-02-28 10:41:21.781 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Feb 28 05:41:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.094 243456 DEBUG nova.compute.manager [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.094 243456 DEBUG nova.compute.manager [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.095 243456 DEBUG oslo_concurrency.lockutils [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.095 243456 DEBUG oslo_concurrency.lockutils [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.095 243456 DEBUG nova.network.neutron [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.388 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.388 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.390 243456 INFO nova.compute.manager [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Terminating instance#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.391 243456 DEBUG nova.compute.manager [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:41:24 np0005634017 kernel: tapd3feb971-63 (unregistering): left promiscuous mode
Feb 28 05:41:24 np0005634017 NetworkManager[49805]: <info>  [1772275284.4448] device (tapd3feb971-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:41:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:24Z|01504|binding|INFO|Releasing lport d3feb971-63a7-4d54-8310-9c6d40c29637 from this chassis (sb_readonly=0)
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:24Z|01505|binding|INFO|Setting lport d3feb971-63a7-4d54-8310-9c6d40c29637 down in Southbound
Feb 28 05:41:24 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:24Z|01506|binding|INFO|Removing iface tapd3feb971-63 ovn-installed in OVS
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.467 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:70:52 10.100.0.7'], port_security=['fa:16:3e:7a:70:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5e1a8d62-9ac1-417d-8194-58901bb4018e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '8', 'neutron:security_group_ids': '44c3724e-fd4e-435a-91b1-2ee7cbaa561d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d3feb971-63a7-4d54-8310-9c6d40c29637) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.470 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d3feb971-63a7-4d54-8310-9c6d40c29637 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 unbound from our chassis#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.473 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.476 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e7381d-a167-43f6-adf8-a3da29223132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 namespace which is not needed anymore#033[00m
Feb 28 05:41:24 np0005634017 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Feb 28 05:41:24 np0005634017 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Consumed 13.767s CPU time.
Feb 28 05:41:24 np0005634017 systemd-machined[209480]: Machine qemu-174-instance-0000008d terminated.
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.631 243456 INFO nova.virt.libvirt.driver [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance destroyed successfully.#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.632 243456 DEBUG nova.objects.instance [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 5e1a8d62-9ac1-417d-8194-58901bb4018e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:24 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : haproxy version is 2.8.14-c23fe91
Feb 28 05:41:24 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : path to executable is /usr/sbin/haproxy
Feb 28 05:41:24 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [WARNING]  (369907) : Exiting Master process...
Feb 28 05:41:24 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [ALERT]    (369907) : Current worker (369909) exited with code 143 (Terminated)
Feb 28 05:41:24 np0005634017 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [WARNING]  (369907) : All workers exited. Exiting... (0)
Feb 28 05:41:24 np0005634017 systemd[1]: libpod-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope: Deactivated successfully.
Feb 28 05:41:24 np0005634017 conmon[369903]: conmon 1ff0507443255768e763 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope/container/memory.events
Feb 28 05:41:24 np0005634017 podman[372365]: 2026-02-28 10:41:24.652293931 +0000 UTC m=+0.060245497 container stop 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.653 243456 DEBUG nova.virt.libvirt.vif [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-806666479',display_name='tempest-TestNetworkBasicOps-server-806666479',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-806666479',id=141,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEvYU5ohMTONYZF98enrXff7H/UA6S+83Ft8Ojoxq+P+keZUL46io/3fcohxtuAI5aeVtpG6o1nJ2kDJwbKtvHAweQjJLzp2omWlOmQ8VQJrOL3ujh53baZZlNUH6C9OQ==',key_name='tempest-TestNetworkBasicOps-2075562092',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-12rzj72r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:32Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=5e1a8d62-9ac1-417d-8194-58901bb4018e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.653 243456 DEBUG nova.network.os_vif_util [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.654 243456 DEBUG nova.network.os_vif_util [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.654 243456 DEBUG os_vif [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.659 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3feb971-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.666 243456 INFO os_vif [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63')#033[00m
Feb 28 05:41:24 np0005634017 podman[372365]: 2026-02-28 10:41:24.683300079 +0000 UTC m=+0.091251695 container died 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:41:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1-userdata-shm.mount: Deactivated successfully.
Feb 28 05:41:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ec7ac42405f247e62e492745590376a4dfe553da32d8a80cb4173cd7e7d5ce14-merged.mount: Deactivated successfully.
Feb 28 05:41:24 np0005634017 podman[372365]: 2026-02-28 10:41:24.742834925 +0000 UTC m=+0.150786511 container cleanup 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 systemd[1]: libpod-conmon-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope: Deactivated successfully.
Feb 28 05:41:24 np0005634017 podman[372420]: 2026-02-28 10:41:24.844128313 +0000 UTC m=+0.079922564 container remove 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97a35ffb-900b-4983-af49-d527fad724f1]: (4, ('Sat Feb 28 10:41:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 (1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1)\n1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1\nSat Feb 28 10:41:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 (1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1)\n1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.854 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5911f52f-08f3-405e-9a71-10170d550e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.855 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:24 np0005634017 kernel: tap11e06da5-b0: left promiscuous mode
Feb 28 05:41:24 np0005634017 nova_compute[243452]: 2026-02-28 10:41:24.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.868 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2fe6c4-660a-4483-8e80-d1e3140cf5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.885 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c244f67-0700-4169-8b0c-263326f92ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.888 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ade82b7b-e713-4003-868f-e9a061b73a30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5970d7ec-f5db-4646-8f7d-f709f235de62]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673390, 'reachable_time': 22571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372436, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:24 np0005634017 systemd[1]: run-netns-ovnmeta\x2d11e06da5\x2dbfc5\x2d4a1a\x2d9148\x2dff3afccf9569.mount: Deactivated successfully.
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.912 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:41:24 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.912 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1df218c5-8d6c-4539-968f-1a211b98f9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.017 243456 INFO nova.virt.libvirt.driver [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deleting instance files /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e_del#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.018 243456 INFO nova.virt.libvirt.driver [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deletion of /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e_del complete#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.090 243456 INFO nova.compute.manager [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.090 243456 DEBUG oslo.service.loopingcall [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.091 243456 DEBUG nova.compute.manager [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.091 243456 DEBUG nova.network.neutron [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:41:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 957 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.656 243456 DEBUG nova.network.neutron [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.684 243456 INFO nova.compute.manager [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 0.59 seconds to deallocate network for instance.#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.695 243456 DEBUG nova.network.neutron [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.695 243456 DEBUG nova.network.neutron [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.741 243456 DEBUG oslo_concurrency.lockutils [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.747 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.748 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.782 243456 DEBUG nova.compute.manager [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG nova.compute.manager [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG oslo_concurrency.lockutils [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG oslo_concurrency.lockutils [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG nova.network.neutron [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:25 np0005634017 nova_compute[243452]: 2026-02-28 10:41:25.824 243456 DEBUG oslo_concurrency.processutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.202 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.203 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.204 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.204 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.205 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.205 243456 WARNING nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.206 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.206 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.207 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.208 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.208 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.209 243456 WARNING nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.210 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-deleted-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.210 243456 INFO nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Neutron deleted interface d3feb971-63a7-4d54-8310-9c6d40c29637; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.211 243456 DEBUG nova.network.neutron [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.254 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Detach interface failed, port_id=d3feb971-63a7-4d54-8310-9c6d40c29637, reason: Instance 5e1a8d62-9ac1-417d-8194-58901bb4018e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:41:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289016923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.466 243456 DEBUG oslo_concurrency.processutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.477 243456 DEBUG nova.compute.provider_tree [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.497 243456 DEBUG nova.scheduler.client.report [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.532 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.560 243456 INFO nova.scheduler.client.report [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 5e1a8d62-9ac1-417d-8194-58901bb4018e#033[00m
Feb 28 05:41:26 np0005634017 nova_compute[243452]: 2026-02-28 10:41:26.658 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 343 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Feb 28 05:41:27 np0005634017 nova_compute[243452]: 2026-02-28 10:41:27.111 243456 DEBUG nova.network.neutron [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:27 np0005634017 nova_compute[243452]: 2026-02-28 10:41:27.112 243456 DEBUG nova.network.neutron [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:27 np0005634017 nova_compute[243452]: 2026-02-28 10:41:27.139 243456 DEBUG oslo_concurrency.lockutils [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:28 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 306 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 576 KiB/s wr, 121 op/s
Feb 28 05:41:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:41:29
Feb 28 05:41:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:41:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:41:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', '.rgw.root']
Feb 28 05:41:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:41:29 np0005634017 nova_compute[243452]: 2026-02-28 10:41:29.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:29 np0005634017 nova_compute[243452]: 2026-02-28 10:41:29.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:41:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:30Z|01507|binding|INFO|Releasing lport e3227d18-ed73-459d-b1a1-aecd179beb21 from this chassis (sb_readonly=0)
Feb 28 05:41:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:30Z|01508|binding|INFO|Releasing lport 7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb from this chassis (sb_readonly=0)
Feb 28 05:41:30 np0005634017 nova_compute[243452]: 2026-02-28 10:41:30.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:41:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:41:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 129 op/s
Feb 28 05:41:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 280 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 392 KiB/s wr, 111 op/s
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.732272) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293733028, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 840054, "memory_usage": 854136, "flush_reason": "Manual Compaction"}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293739818, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 820390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49017, "largest_seqno": 49716, "table_properties": {"data_size": 816768, "index_size": 1466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8444, "raw_average_key_size": 19, "raw_value_size": 809436, "raw_average_value_size": 1869, "num_data_blocks": 65, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275244, "oldest_key_time": 1772275244, "file_creation_time": 1772275293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 7607 microseconds, and 2900 cpu microseconds.
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.739887) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 820390 bytes OK
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.739912) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.741546) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.741566) EVENT_LOG_v1 {"time_micros": 1772275293741557, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.741588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 836391, prev total WAL file size 836391, number of live WAL files 2.
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.742271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(801KB)], [113(9063KB)]
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293742321, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10101465, "oldest_snapshot_seqno": -1}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7011 keys, 8369619 bytes, temperature: kUnknown
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293776512, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8369619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8325368, "index_size": 25643, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182729, "raw_average_key_size": 26, "raw_value_size": 8202819, "raw_average_value_size": 1169, "num_data_blocks": 994, "num_entries": 7011, "num_filter_entries": 7011, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.776749) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8369619 bytes
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.777782) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 294.8 rd, 244.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(22.5) write-amplify(10.2) OK, records in: 7524, records dropped: 513 output_compression: NoCompression
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.777802) EVENT_LOG_v1 {"time_micros": 1772275293777792, "job": 68, "event": "compaction_finished", "compaction_time_micros": 34268, "compaction_time_cpu_micros": 17902, "output_level": 6, "num_output_files": 1, "total_output_size": 8369619, "num_input_records": 7524, "num_output_records": 7011, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293777982, "job": 68, "event": "table_file_deletion", "file_number": 115}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293778693, "job": 68, "event": "table_file_deletion", "file_number": 113}
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.742178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:41:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:34 np0005634017 nova_compute[243452]: 2026-02-28 10:41:34.286 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275279.2850125, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:34 np0005634017 nova_compute[243452]: 2026-02-28 10:41:34.287 243456 INFO nova.compute.manager [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:41:34 np0005634017 nova_compute[243452]: 2026-02-28 10:41:34.312 243456 DEBUG nova.compute.manager [None req-7bac265c-80cc-4a06-894b-a35ba8569eda - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:34 np0005634017 nova_compute[243452]: 2026-02-28 10:41:34.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:34Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:0a:a1 10.100.0.5
Feb 28 05:41:34 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:34Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:0a:a1 10.100.0.5
Feb 28 05:41:34 np0005634017 nova_compute[243452]: 2026-02-28 10:41:34.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 290 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 114 op/s
Feb 28 05:41:36 np0005634017 nova_compute[243452]: 2026-02-28 10:41:36.286 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 305 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Feb 28 05:41:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 05:41:39 np0005634017 nova_compute[243452]: 2026-02-28 10:41:39.629 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275284.6277022, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:39 np0005634017 nova_compute[243452]: 2026-02-28 10:41:39.630 243456 INFO nova.compute.manager [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:41:39 np0005634017 nova_compute[243452]: 2026-02-28 10:41:39.654 243456 DEBUG nova.compute.manager [None req-f5135037-bfaf-46bb-9c1c-3d854d5ccd28 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:39 np0005634017 nova_compute[243452]: 2026-02-28 10:41:39.669 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:39 np0005634017 nova_compute[243452]: 2026-02-28 10:41:39.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:40 np0005634017 nova_compute[243452]: 2026-02-28 10:41:40.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015326591723209138 of space, bias 1.0, pg target 0.4597977516962742 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939305755770236 of space, bias 1.0, pg target 0.7481791726731071 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.298182577967931e-07 of space, bias 4.0, pg target 0.0008757819093561517 quantized to 16 (current 16)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:41:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:41:43 np0005634017 nova_compute[243452]: 2026-02-28 10:41:43.103 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:41:43 np0005634017 nova_compute[243452]: 2026-02-28 10:41:43.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:44 np0005634017 nova_compute[243452]: 2026-02-28 10:41:44.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:44 np0005634017 nova_compute[243452]: 2026-02-28 10:41:44.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.357 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.358 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.358 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.359 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.359 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.361 243456 INFO nova.compute.manager [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Terminating instance#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.363 243456 DEBUG nova.compute.manager [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:41:45 np0005634017 kernel: tapb06d794d-14 (unregistering): left promiscuous mode
Feb 28 05:41:45 np0005634017 NetworkManager[49805]: <info>  [1772275305.4256] device (tapb06d794d-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01509|binding|INFO|Releasing lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 from this chassis (sb_readonly=0)
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01510|binding|INFO|Setting lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 down in Southbound
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01511|binding|INFO|Removing iface tapb06d794d-14 ovn-installed in OVS
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 kernel: tapf107fdf1-b7 (unregistering): left promiscuous mode
Feb 28 05:41:45 np0005634017 NetworkManager[49805]: <info>  [1772275305.4440] device (tapf107fdf1-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01512|binding|INFO|Releasing lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 from this chassis (sb_readonly=1)
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01513|binding|INFO|Removing iface tapf107fdf1-b7 ovn-installed in OVS
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01514|if_status|INFO|Dropped 1 log messages in last 1231 seconds (most recently, 1231 seconds ago) due to excessive rate
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01515|if_status|INFO|Not setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 down as sb is readonly
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.459 243456 DEBUG nova.compute.manager [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.460 243456 DEBUG nova.compute.manager [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.460 243456 DEBUG oslo_concurrency.lockutils [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.461 243456 DEBUG oslo_concurrency.lockutils [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.461 243456 DEBUG nova.network.neutron [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:45Z|01516|binding|INFO|Setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 down in Southbound
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.490 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:0a:a1 10.100.0.5'], port_security=['fa:16:3e:02:0a:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b06d794d-14a7-4fd8-bd85-2e37bff5e446) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.491 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b06d794d-14a7-4fd8-bd85-2e37bff5e446 in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 unbound from our chassis#033[00m
Feb 28 05:41:45 np0005634017 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Deactivated successfully.
Feb 28 05:41:45 np0005634017 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Consumed 13.581s CPU time.
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.493 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9#033[00m
Feb 28 05:41:45 np0005634017 systemd-machined[209480]: Machine qemu-177-instance-00000090 terminated.
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.506 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdb2244-7389-464b-ba25-544648de1f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.540 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5608660d-f281-44b3-a85d-d3c96a67b9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.543 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[004df7a6-feeb-42e9-adfb-b27960749019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.570 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[74fbbb07-23f8-4a9b-a693-4cc66f4f103a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcfd7b9-93d3-4f57-b16e-3540f42bfe66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372476, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 NetworkManager[49805]: <info>  [1772275305.5939] manager: (tapf107fdf1-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468049051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:41:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:41:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468049051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e376bd01-84e2-4759-a674-907a2389db00]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675468, 'tstamp': 675468}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372489, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675472, 'tstamp': 675472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372489, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.611 243456 INFO nova.virt.libvirt.driver [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance destroyed successfully.#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.612 243456 DEBUG nova.objects.instance [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 47d618f6-612e-4944-8a4d-a3509d6e3d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.621 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ad6cbe-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.622 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.622 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0ad6cbe-80, col_values=(('external_ids', {'iface-id': '7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.623 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.647 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], port_security=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe01:f09a/64', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f107fdf1-b771-447b-adb6-bd9bc1e35b53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.649 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f107fdf1-b771-447b-adb6-bd9bc1e35b53 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.650 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network caa8646e-5c97-4eb8-add7-69ea9ee54379#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.669 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1caf53f7-a8b9-451b-aad6-b69dcb2b6116]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.701 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[35d59c8a-8303-4dac-b81d-278a528ae6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.706 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4a0d3d-8f0c-4e8b-8a7e-f4d987cbfdcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.734 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c854aca3-0df3-4285-b530-6b0a0d001dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63de763c-1a7b-43f1-8f82-483e94dd1b68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 2472, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 2472, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372504, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.772 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e85014e-2b1d-4609-a65f-b7a625d606e4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcaa8646e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675562, 'tstamp': 675562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372505, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.774 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.784 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa8646e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.784 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.785 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcaa8646e-50, col_values=(('external_ids', {'iface-id': 'e3227d18-ed73-459d-b1a1-aecd179beb21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.785 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.803 243456 DEBUG nova.virt.libvirt.vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:41:21Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.804 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.805 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.806 243456 DEBUG os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb06d794d-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.816 243456 INFO os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14')#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.817 243456 DEBUG nova.virt.libvirt.vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:41:21Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.817 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.818 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.818 243456 DEBUG os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.820 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf107fdf1-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:45 np0005634017 nova_compute[243452]: 2026-02-28 10:41:45.825 243456 INFO os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7')#033[00m
Feb 28 05:41:46 np0005634017 nova_compute[243452]: 2026-02-28 10:41:46.198 243456 INFO nova.virt.libvirt.driver [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deleting instance files /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35_del#033[00m
Feb 28 05:41:46 np0005634017 nova_compute[243452]: 2026-02-28 10:41:46.200 243456 INFO nova.virt.libvirt.driver [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deletion of /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35_del complete#033[00m
Feb 28 05:41:46 np0005634017 nova_compute[243452]: 2026-02-28 10:41:46.272 243456 INFO nova.compute.manager [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:41:46 np0005634017 nova_compute[243452]: 2026-02-28 10:41:46.273 243456 DEBUG oslo.service.loopingcall [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:41:46 np0005634017 nova_compute[243452]: 2026-02-28 10:41:46.273 243456 DEBUG nova.compute.manager [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:41:46 np0005634017 nova_compute[243452]: 2026-02-28 10:41:46.274 243456 DEBUG nova.network.neutron [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:41:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.149 243456 DEBUG nova.compute.manager [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-deleted-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.149 243456 INFO nova.compute.manager [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Neutron deleted interface b06d794d-14a7-4fd8-bd85-2e37bff5e446; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.150 243456 DEBUG nova.network.neutron [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.180 243456 DEBUG nova.compute.manager [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Detach interface failed, port_id=b06d794d-14a7-4fd8-bd85-2e37bff5e446, reason: Instance 47d618f6-612e-4944-8a4d-a3509d6e3d35 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.241 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.241 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.262 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.343 243456 DEBUG nova.network.neutron [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.344 243456 DEBUG nova.network.neutron [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.355 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.356 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.360 243456 DEBUG oslo_concurrency.lockutils [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.365 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.366 243456 INFO nova.compute.claims [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.448 243456 DEBUG nova.network.neutron [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.472 243456 INFO nova.compute.manager [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 1.20 seconds to deallocate network for instance.#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.501 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.550 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.557 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-unplugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.557 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.558 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.558 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.558 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-unplugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.559 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-unplugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.559 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.560 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.560 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.560 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.561 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.561 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.562 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-unplugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.562 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.562 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.563 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.563 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-unplugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.564 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-unplugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.564 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.564 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.565 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.565 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.565 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:47 np0005634017 nova_compute[243452]: 2026-02-28 10:41:47.566 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/130683762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.114 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.122 243456 DEBUG nova.compute.provider_tree [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.138 243456 DEBUG nova.scheduler.client.report [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.161 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.163 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.168 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.218 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.218 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.237 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.257 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.363 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.365 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.366 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Creating image(s)#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.400 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.437 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.477 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.481 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.524 243456 DEBUG nova.policy [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.553 243456 DEBUG oslo_concurrency.processutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.592 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.593 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.593 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.594 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.619 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.623 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.889 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:48 np0005634017 nova_compute[243452]: 2026-02-28 10:41:48.969 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:41:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.081 243456 DEBUG nova.objects.instance [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.097 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.097 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Ensure instance console log exists: /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.098 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.098 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.098 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 66 KiB/s wr, 27 op/s
Feb 28 05:41:49 np0005634017 podman[372735]: 2026-02-28 10:41:49.136318829 +0000 UTC m=+0.069304263 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 05:41:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633658776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.162 243456 DEBUG oslo_concurrency.processutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:49 np0005634017 podman[372718]: 2026-02-28 10:41:49.169255212 +0000 UTC m=+0.103993716 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.169 243456 DEBUG nova.compute.provider_tree [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.185 243456 DEBUG nova.scheduler.client.report [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.206 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.225 243456 INFO nova.scheduler.client.report [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 47d618f6-612e-4944-8a4d-a3509d6e3d35#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.237 243456 DEBUG nova.compute.manager [req-6f4fce3b-020c-4c6a-8018-e85dd6261155 req-96a7cf5d-ed00-4ad7-8389-6a6b0ae3b991 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-deleted-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.305 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.543 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Successfully created port: 85b213ac-7186-4120-8f8e-043293c9de8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:41:49 np0005634017 nova_compute[243452]: 2026-02-28 10:41:49.761 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.380 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Successfully updated port: 85b213ac-7186-4120-8f8e-043293c9de8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.396 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.396 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.397 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.560 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:41:50 np0005634017 nova_compute[243452]: 2026-02-28 10:41:50.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.328 243456 DEBUG nova.compute.manager [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.329 243456 DEBUG nova.compute.manager [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing instance network info cache due to event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.329 243456 DEBUG oslo_concurrency.lockutils [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.340 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.455 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.455 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.457 243456 INFO nova.compute.manager [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Terminating instance#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.458 243456 DEBUG nova.compute.manager [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.494 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.495 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.495 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.495 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:51 np0005634017 kernel: tap5952cc57-b2 (unregistering): left promiscuous mode
Feb 28 05:41:51 np0005634017 NetworkManager[49805]: <info>  [1772275311.5200] device (tap5952cc57-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:51Z|01517|binding|INFO|Releasing lport 5952cc57-b25c-40a2-b208-47e2104b88ad from this chassis (sb_readonly=0)
Feb 28 05:41:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:51Z|01518|binding|INFO|Setting lport 5952cc57-b25c-40a2-b208-47e2104b88ad down in Southbound
Feb 28 05:41:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:51Z|01519|binding|INFO|Removing iface tap5952cc57-b2 ovn-installed in OVS
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.533 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.538 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9b:bb 10.100.0.13'], port_security=['fa:16:3e:49:9b:bb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5952cc57-b25c-40a2-b208-47e2104b88ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.540 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5952cc57-b25c-40a2-b208-47e2104b88ad in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 unbound from our chassis#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.541 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:41:51 np0005634017 kernel: tap6914940c-92 (unregistering): left promiscuous mode
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.542 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0644aca6-ad8c-43d3-974d-51f9632839e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.544 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 namespace which is not needed anymore#033[00m
Feb 28 05:41:51 np0005634017 NetworkManager[49805]: <info>  [1772275311.5450] device (tap6914940c-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.551 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.551 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance network_info: |[{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.552 243456 DEBUG oslo_concurrency.lockutils [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.552 243456 DEBUG nova.network.neutron [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.554 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start _get_guest_xml network_info=[{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:41:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:51Z|01520|binding|INFO|Releasing lport 6914940c-920a-4dc2-982a-9ae63584aee2 from this chassis (sb_readonly=0)
Feb 28 05:41:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:51Z|01521|binding|INFO|Setting lport 6914940c-920a-4dc2-982a-9ae63584aee2 down in Southbound
Feb 28 05:41:51 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:51Z|01522|binding|INFO|Removing iface tap6914940c-92 ovn-installed in OVS
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.563 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], port_security=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefb:c897/64', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6914940c-920a-4dc2-982a-9ae63584aee2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.576 243456 WARNING nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.583 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.583 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.586 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.587 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.587 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.587 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:41:51 np0005634017 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 28 05:41:51 np0005634017 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Consumed 14.766s CPU time.
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.593 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:51 np0005634017 systemd-machined[209480]: Machine qemu-175-instance-0000008e terminated.
Feb 28 05:41:51 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : haproxy version is 2.8.14-c23fe91
Feb 28 05:41:51 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : path to executable is /usr/sbin/haproxy
Feb 28 05:41:51 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [WARNING]  (370611) : Exiting Master process...
Feb 28 05:41:51 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [ALERT]    (370611) : Current worker (370613) exited with code 143 (Terminated)
Feb 28 05:41:51 np0005634017 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [WARNING]  (370611) : All workers exited. Exiting... (0)
Feb 28 05:41:51 np0005634017 systemd[1]: libpod-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9.scope: Deactivated successfully.
Feb 28 05:41:51 np0005634017 podman[372813]: 2026-02-28 10:41:51.686408027 +0000 UTC m=+0.044722568 container died 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:41:51 np0005634017 NetworkManager[49805]: <info>  [1772275311.6927] manager: (tap6914940c-92): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.721 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance destroyed successfully.#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.724 243456 DEBUG nova.objects.instance [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9-userdata-shm.mount: Deactivated successfully.
Feb 28 05:41:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8b49f6fee2a5624b0785481ede3d870b1d9e7cdf89c50a35e547fbbaa11922ea-merged.mount: Deactivated successfully.
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.760 243456 DEBUG nova.virt.libvirt.vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:49Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.761 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.762 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.763 243456 DEBUG os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.765 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5952cc57-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.776 243456 INFO os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2')#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.777 243456 DEBUG nova.virt.libvirt.vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:49Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.777 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.778 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.778 243456 DEBUG os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.780 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6914940c-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.785 243456 INFO os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92')#033[00m
Feb 28 05:41:51 np0005634017 podman[372813]: 2026-02-28 10:41:51.793383116 +0000 UTC m=+0.151697647 container cleanup 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 05:41:51 np0005634017 systemd[1]: libpod-conmon-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9.scope: Deactivated successfully.
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.858 243456 DEBUG nova.compute.manager [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.859 243456 DEBUG oslo_concurrency.lockutils [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.859 243456 DEBUG oslo_concurrency.lockutils [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.860 243456 DEBUG oslo_concurrency.lockutils [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.860 243456 DEBUG nova.compute.manager [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-unplugged-6914940c-920a-4dc2-982a-9ae63584aee2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.860 243456 DEBUG nova.compute.manager [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-6914940c-920a-4dc2-982a-9ae63584aee2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:41:51 np0005634017 podman[372892]: 2026-02-28 10:41:51.907922459 +0000 UTC m=+0.093971482 container remove 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.915 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[610fbf0d-9ecf-4ffe-86fb-bb14912edfd2]: (4, ('Sat Feb 28 10:41:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 (3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9)\n3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9\nSat Feb 28 10:41:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 (3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9)\n3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.917 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[40dea92b-a547-44e2-93b3-0faec6fbc91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.918 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 kernel: tapb0ad6cbe-80: left promiscuous mode
Feb 28 05:41:51 np0005634017 nova_compute[243452]: 2026-02-28 10:41:51.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.932 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c53ccc48-f83b-4e24-bf0a-b1411e2d80b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.953 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26864579-f530-4598-81e3-8b3bd2e3df16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e8be0169-14d2-4597-8e52-e6ab46c69d21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.973 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5733ea82-6f54-43d8-939b-b6fe06d391e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675446, 'reachable_time': 22616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372916, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 systemd[1]: run-netns-ovnmeta\x2db0ad6cbe\x2d89fe\x2d42da\x2da806\x2dcec0a8b2c3c9.mount: Deactivated successfully.
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.978 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.978 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7a42e337-23b2-473b-bfe5-9a23943a0610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.980 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6914940c-920a-4dc2-982a-9ae63584aee2 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.981 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network caa8646e-5c97-4eb8-add7-69ea9ee54379, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.983 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2cffd0-1166-404d-9337-8ddae9b82841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:51 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.983 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 namespace which is not needed anymore#033[00m
Feb 28 05:41:52 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : haproxy version is 2.8.14-c23fe91
Feb 28 05:41:52 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : path to executable is /usr/sbin/haproxy
Feb 28 05:41:52 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [WARNING]  (370684) : Exiting Master process...
Feb 28 05:41:52 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [ALERT]    (370684) : Current worker (370686) exited with code 143 (Terminated)
Feb 28 05:41:52 np0005634017 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [WARNING]  (370684) : All workers exited. Exiting... (0)
Feb 28 05:41:52 np0005634017 systemd[1]: libpod-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28.scope: Deactivated successfully.
Feb 28 05:41:52 np0005634017 podman[372935]: 2026-02-28 10:41:52.117221946 +0000 UTC m=+0.045694635 container died c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:41:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:41:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4269545208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.165 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28-userdata-shm.mount: Deactivated successfully.
Feb 28 05:41:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9eb10f203ca6c1c169e1b23413d123cb3b37254372fe81642560a63587f0c639-merged.mount: Deactivated successfully.
Feb 28 05:41:52 np0005634017 podman[372935]: 2026-02-28 10:41:52.201859212 +0000 UTC m=+0.130331901 container cleanup c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:41:52 np0005634017 systemd[1]: libpod-conmon-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28.scope: Deactivated successfully.
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.223 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.228 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.281 243456 INFO nova.virt.libvirt.driver [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deleting instance files /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21_del#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.282 243456 INFO nova.virt.libvirt.driver [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deletion of /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21_del complete#033[00m
Feb 28 05:41:52 np0005634017 podman[372983]: 2026-02-28 10:41:52.31266667 +0000 UTC m=+0.086115039 container remove c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e50c991f-754e-4d7b-b9f1-fba56830b78d]: (4, ('Sat Feb 28 10:41:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 (c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28)\nc74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28\nSat Feb 28 10:41:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 (c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28)\nc74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.320 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac435c-1146-4bf4-bc38-a5d7a5028054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.321 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:52 np0005634017 kernel: tapcaa8646e-50: left promiscuous mode
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.332 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16a25821-7169-4945-bbb5-bd64e6947e08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.339 243456 INFO nova.compute.manager [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.340 243456 DEBUG oslo.service.loopingcall [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.340 243456 DEBUG nova.compute.manager [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.340 243456 DEBUG nova.network.neutron [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[715066c2-7f1e-4ca5-a96a-8f1c85bf020a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f53b7f7c-f899-45b5-a62c-8ab6a9679f0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba037da1-db70-4dbe-b074-ceda39fb2143]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675539, 'reachable_time': 37825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373003, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.373 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:41:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.373 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3e73faa6-8819-48a9-aaf1-ec7c3382bef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:52 np0005634017 systemd[1]: run-netns-ovnmeta\x2dcaa8646e\x2d5c97\x2d4eb8\x2dadd7\x2d69ea9ee54379.mount: Deactivated successfully.
Feb 28 05:41:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:41:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004734532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.859 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.861 243456 DEBUG nova.virt.libvirt.vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1524442108',display_name='tempest-TestNetworkBasicOps-server-1524442108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1524442108',id=145,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG91PlskTqdY3f/2ETDaAtm5odqY3Pm73UeMDdBa5KYy9HWAlLs8njv3PvKqtqCgQvXgaXApoQMAIbbB474iNBXEqM7bw3A67rDTDaE0oBVKxg+86+w9ZDJ22AtUTnVnBg==',key_name='tempest-TestNetworkBasicOps-1470921471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lqpz0euc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:48Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.861 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.862 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.863 243456 DEBUG nova.objects.instance [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.882 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <uuid>af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9</uuid>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <name>instance-00000091</name>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestNetworkBasicOps-server-1524442108</nova:name>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:41:51</nova:creationTime>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <nova:port uuid="85b213ac-7186-4120-8f8e-043293c9de8b">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <entry name="serial">af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9</entry>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <entry name="uuid">af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9</entry>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:4e:23:13"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <target dev="tap85b213ac-71"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/console.log" append="off"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:41:52 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:41:52 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:41:52 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:41:52 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.882 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Preparing to wait for external event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.883 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.883 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.883 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.884 243456 DEBUG nova.virt.libvirt.vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1524442108',display_name='tempest-TestNetworkBasicOps-server-1524442108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1524442108',id=145,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG91PlskTqdY3f/2ETDaAtm5odqY3Pm73UeMDdBa5KYy9HWAlLs8njv3PvKqtqCgQvXgaXApoQMAIbbB474iNBXEqM7bw3A67rDTDaE0oBVKxg+86+w9ZDJ22AtUTnVnBg==',key_name='tempest-TestNetworkBasicOps-1470921471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lqpz0euc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:48Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.884 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.884 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.885 243456 DEBUG os_vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.885 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.886 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.888 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b213ac-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.888 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85b213ac-71, col_values=(('external_ids', {'iface-id': '85b213ac-7186-4120-8f8e-043293c9de8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:23:13', 'vm-uuid': 'af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:52 np0005634017 NetworkManager[49805]: <info>  [1772275312.8909] manager: (tap85b213ac-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.895 243456 INFO os_vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71')#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.947 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.948 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.948 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:4e:23:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.949 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Using config drive#033[00m
Feb 28 05:41:52 np0005634017 nova_compute[243452]: 2026-02-28 10:41:52.977 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.345 243456 DEBUG nova.network.neutron [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updated VIF entry in instance network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.346 243456 DEBUG nova.network.neutron [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.367 243456 DEBUG oslo_concurrency.lockutils [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.368 243456 DEBUG nova.network.neutron [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.390 243456 INFO nova.compute.manager [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 1.05 seconds to deallocate network for instance.#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.417 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.417 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.418 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.440 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.441 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.481 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Creating config drive at /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.486 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprao_i94p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.540 243456 DEBUG oslo_concurrency.processutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.629 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprao_i94p" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.665 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.671 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.822 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.823 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deleting local config drive /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config because it was imported into RBD.#033[00m
Feb 28 05:41:53 np0005634017 kernel: tap85b213ac-71: entered promiscuous mode
Feb 28 05:41:53 np0005634017 systemd-udevd[372795]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:41:53 np0005634017 NetworkManager[49805]: <info>  [1772275313.8796] manager: (tap85b213ac-71): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Feb 28 05:41:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:53Z|01523|binding|INFO|Claiming lport 85b213ac-7186-4120-8f8e-043293c9de8b for this chassis.
Feb 28 05:41:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:53Z|01524|binding|INFO|85b213ac-7186-4120-8f8e-043293c9de8b: Claiming fa:16:3e:4e:23:13 10.100.0.4
Feb 28 05:41:53 np0005634017 NetworkManager[49805]: <info>  [1772275313.9112] device (tap85b213ac-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:53 np0005634017 NetworkManager[49805]: <info>  [1772275313.9119] device (tap85b213ac-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.916 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:23:13 10.100.0.4'], port_security=['fa:16:3e:4e:23:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db7c5780-813a-4d88-b76c-0ab09180b373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5ca6d69-e30b-431e-a08f-47f084fea79e, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=85b213ac-7186-4120-8f8e-043293c9de8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.917 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 85b213ac-7186-4120-8f8e-043293c9de8b in datapath 60f40e8c-30be-4a73-8b0b-ca447dd19ffc bound to our chassis#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.918 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60f40e8c-30be-4a73-8b0b-ca447dd19ffc#033[00m
Feb 28 05:41:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:53Z|01525|binding|INFO|Setting lport 85b213ac-7186-4120-8f8e-043293c9de8b up in Southbound
Feb 28 05:41:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:53Z|01526|binding|INFO|Setting lport 85b213ac-7186-4120-8f8e-043293c9de8b ovn-installed in OVS
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f719369-bc24-439e-922e-e24de8e024f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.932 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60f40e8c-31 in ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.933 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.933 243456 DEBUG oslo_concurrency.lockutils [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.933 243456 DEBUG oslo_concurrency.lockutils [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG oslo_concurrency.lockutils [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 WARNING nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-deleted-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-deleted-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:53 np0005634017 nova_compute[243452]: 2026-02-28 10:41:53.935 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.935 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60f40e8c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85bdc040-dbac-4c2c-89a9-9e482e91f59b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.937 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e18a0bd-74ea-4590-a318-0e3ba2c8c3ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:53 np0005634017 systemd-machined[209480]: New machine qemu-178-instance-00000091.
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.952 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd383d9-6643-4855-9e02-af4e51723ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:53 np0005634017 systemd[1]: Started Virtual Machine qemu-178-instance-00000091.
Feb 28 05:41:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.975 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1d87fa-d42e-4cea-9a88-513459f990fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.003 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[250b9f17-0a50-423a-a9b8-15389de3f4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0304de0f-2568-42dd-9f73-a0a055dff1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 NetworkManager[49805]: <info>  [1772275314.0110] manager: (tap60f40e8c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/633)
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.043 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0b8afb-1422-46b8-a182-ffd9c9247e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.046 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[40eed87a-9cc6-4b64-9354-30e0d0e295a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 NetworkManager[49805]: <info>  [1772275314.0651] device (tap60f40e8c-30): carrier: link connected
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.067 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d125d1-2fd6-490c-bc44-77dc40482f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.080 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbd35ac-8498-4934-ab04-703c68d3f00b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f40e8c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:4d:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 452], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682135, 'reachable_time': 29337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373147, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0134b980-11dc-4478-aa47-15a72a4b3215]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:4df4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682135, 'tstamp': 682135}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373148, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.115 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a750fbf7-777c-45c8-a2a3-2eec1e370b27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f40e8c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:4d:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 452], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682135, 'reachable_time': 29337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373149, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.140 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8dec70f5-d9bf-44c2-8973-ab9cb21db199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.169 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822945097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.190 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.190 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.191 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.191 243456 DEBUG nova.network.neutron [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.214 243456 DEBUG oslo_concurrency.processutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.214 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71391e2b-a1fc-4141-a9d0-b020ca85b6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.217 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f40e8c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.217 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60f40e8c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.222 243456 DEBUG nova.compute.provider_tree [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:54 np0005634017 NetworkManager[49805]: <info>  [1772275314.2235] manager: (tap60f40e8c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:54 np0005634017 kernel: tap60f40e8c-30: entered promiscuous mode
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.232 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60f40e8c-30, col_values=(('external_ids', {'iface-id': 'ee36debe-a46a-457c-a873-10b09fca4736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.234 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:54Z|01527|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.236 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[968a3e9d-cd51-48ab-8ed5-f9e079b24ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.238 243456 DEBUG nova.scheduler.client.report [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.238 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-60f40e8c-30be-4a73-8b0b-ca447dd19ffc
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.pid.haproxy
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 60f40e8c-30be-4a73-8b0b-ca447dd19ffc
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:41:54 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.239 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'env', 'PROCESS_TAG=haproxy-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.242 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.259 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.287 243456 INFO nova.scheduler.client.report [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 9b081668-1653-448a-957e-da1ead7ecd21#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.326 243456 INFO nova.network.neutron [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Port 5952cc57-b25c-40a2-b208-47e2104b88ad from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.326 243456 DEBUG nova.network.neutron [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.344 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.345 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.346 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.346 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.346 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-unplugged-5952cc57-b25c-40a2-b208-47e2104b88ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-5952cc57-b25c-40a2-b208-47e2104b88ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.349 243456 WARNING nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:41:54 np0005634017 podman[373197]: 2026-02-28 10:41:54.621160606 +0000 UTC m=+0.058161657 container create 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:41:54 np0005634017 systemd[1]: Started libpod-conmon-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41.scope.
Feb 28 05:41:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:41:54 np0005634017 podman[373197]: 2026-02-28 10:41:54.593094452 +0000 UTC m=+0.030095553 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:41:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c688fbac57555e23608447219d1d8709e3953d040412dffacb23a65f86321c8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:41:54 np0005634017 podman[373197]: 2026-02-28 10:41:54.706210785 +0000 UTC m=+0.143211866 container init 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:41:54 np0005634017 podman[373197]: 2026-02-28 10:41:54.715976401 +0000 UTC m=+0.152977452 container start 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.724 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275314.7238557, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.725 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Started (Lifecycle Event)#033[00m
Feb 28 05:41:54 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : New worker (373246) forked
Feb 28 05:41:54 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : Loading success.
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.751 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.757 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275314.7251217, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.757 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.776 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.780 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:41:54 np0005634017 nova_compute[243452]: 2026-02-28 10:41:54.799 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:41:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 208 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Feb 28 05:41:55 np0005634017 nova_compute[243452]: 2026-02-28 10:41:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:41:55 np0005634017 nova_compute[243452]: 2026-02-28 10:41:55.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:55 np0005634017 nova_compute[243452]: 2026-02-28 10:41:55.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:55 np0005634017 nova_compute[243452]: 2026-02-28 10:41:55.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:55 np0005634017 nova_compute[243452]: 2026-02-28 10:41:55.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:41:55 np0005634017 nova_compute[243452]: 2026-02-28 10:41:55.354 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939127106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.006 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.007 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.008 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.008 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.009 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Processing event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.009 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.010 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.010 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.011 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.011 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] No waiting events found dispatching network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.012 243456 WARNING nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received unexpected event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b for instance with vm_state building and task_state spawning.#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.014 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.027 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.030 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275316.0285456, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.031 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.035 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.045 243456 INFO nova.virt.libvirt.driver [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance spawned successfully.#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.047 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.057 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.069 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.085 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.086 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.086 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.087 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.087 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.087 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.148 243456 INFO nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 7.78 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.149 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.160 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.160 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.245 243456 INFO nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 8.93 seconds to build instance.#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.262 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.339 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.340 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3495MB free_disk=59.96363378781825GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.406 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.408 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:41:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:56.412 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:41:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:56.413 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:56 np0005634017 nova_compute[243452]: 2026-02-28 10:41:56.478 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:41:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:41:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290257330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:41:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 28 05:41:57 np0005634017 nova_compute[243452]: 2026-02-28 10:41:57.128 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:41:57 np0005634017 nova_compute[243452]: 2026-02-28 10:41:57.137 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:41:57 np0005634017 nova_compute[243452]: 2026-02-28 10:41:57.156 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:41:57 np0005634017 nova_compute[243452]: 2026-02-28 10:41:57.467 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:41:57 np0005634017 nova_compute[243452]: 2026-02-28 10:41:57.467 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:57.880 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:41:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:57.881 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:41:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:41:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:41:57 np0005634017 nova_compute[243452]: 2026-02-28 10:41:57.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:58Z|01528|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:41:58 np0005634017 nova_compute[243452]: 2026-02-28 10:41:58.292 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:58 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:58Z|01529|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:41:58 np0005634017 nova_compute[243452]: 2026-02-28 10:41:58.328 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:41:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Feb 28 05:41:59 np0005634017 nova_compute[243452]: 2026-02-28 10:41:59.765 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:59Z|01530|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:41:59 np0005634017 nova_compute[243452]: 2026-02-28 10:41:59.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:59 np0005634017 NetworkManager[49805]: <info>  [1772275319.9581] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Feb 28 05:41:59 np0005634017 NetworkManager[49805]: <info>  [1772275319.9587] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Feb 28 05:41:59 np0005634017 ovn_controller[146846]: 2026-02-28T10:41:59Z|01531|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:41:59 np0005634017 nova_compute[243452]: 2026-02-28 10:41:59.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:41:59 np0005634017 nova_compute[243452]: 2026-02-28 10:41:59.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:42:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.405 243456 DEBUG nova.compute.manager [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.406 243456 DEBUG nova.compute.manager [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing instance network info cache due to event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.407 243456 DEBUG oslo_concurrency.lockutils [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.407 243456 DEBUG oslo_concurrency.lockutils [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.408 243456 DEBUG nova.network.neutron [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.610 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275305.6076045, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.611 243456 INFO nova.compute.manager [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:42:00 np0005634017 nova_compute[243452]: 2026-02-28 10:42:00.638 243456 DEBUG nova.compute.manager [None req-f8c07176-b44e-4962-9935-03572d873649 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 28 05:42:02 np0005634017 nova_compute[243452]: 2026-02-28 10:42:02.298 243456 DEBUG nova.network.neutron [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updated VIF entry in instance network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:42:02 np0005634017 nova_compute[243452]: 2026-02-28 10:42:02.299 243456 DEBUG nova.network.neutron [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:02 np0005634017 nova_compute[243452]: 2026-02-28 10:42:02.323 243456 DEBUG oslo_concurrency.lockutils [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:42:02 np0005634017 nova_compute[243452]: 2026-02-28 10:42:02.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 104 op/s
Feb 28 05:42:03 np0005634017 nova_compute[243452]: 2026-02-28 10:42:03.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:03.415 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:04 np0005634017 nova_compute[243452]: 2026-02-28 10:42:04.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Feb 28 05:42:06 np0005634017 nova_compute[243452]: 2026-02-28 10:42:06.708 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275311.7067533, 9b081668-1653-448a-957e-da1ead7ecd21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:42:06 np0005634017 nova_compute[243452]: 2026-02-28 10:42:06.708 243456 INFO nova.compute.manager [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:42:06 np0005634017 nova_compute[243452]: 2026-02-28 10:42:06.725 243456 DEBUG nova.compute.manager [None req-37e45e02-7a29-4477-8cdd-bfc416ffca6f - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Feb 28 05:42:07 np0005634017 nova_compute[243452]: 2026-02-28 10:42:07.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:08 np0005634017 nova_compute[243452]: 2026-02-28 10:42:08.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:08Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:23:13 10.100.0.4
Feb 28 05:42:08 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:08Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:23:13 10.100.0.4
Feb 28 05:42:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 212 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 972 KiB/s wr, 84 op/s
Feb 28 05:42:09 np0005634017 nova_compute[243452]: 2026-02-28 10:42:09.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:10 np0005634017 nova_compute[243452]: 2026-02-28 10:42:10.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:10 np0005634017 nova_compute[243452]: 2026-02-28 10:42:10.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:42:10 np0005634017 nova_compute[243452]: 2026-02-28 10:42:10.353 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:42:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Feb 28 05:42:11 np0005634017 nova_compute[243452]: 2026-02-28 10:42:11.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:12 np0005634017 nova_compute[243452]: 2026-02-28 10:42:12.896 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:42:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.240 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2 2001:db8::f816:3eff:feb9:796c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:796c/64', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9495031a-3350-4b5d-b9e3-f7a6b929d37e) old=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:42:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.241 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9495031a-3350-4b5d-b9e3-f7a6b929d37e in datapath f5ccb81b-dba1-47db-8a77-320af312ccad updated#033[00m
Feb 28 05:42:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.241 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5ccb81b-dba1-47db-8a77-320af312ccad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:42:13 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.243 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b418ac21-03cc-4373-a0f9-89c1b0b76218]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:14 np0005634017 nova_compute[243452]: 2026-02-28 10:42:14.271 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:42:14 np0005634017 nova_compute[243452]: 2026-02-28 10:42:14.305 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 28 05:42:14 np0005634017 nova_compute[243452]: 2026-02-28 10:42:14.306 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:14 np0005634017 nova_compute[243452]: 2026-02-28 10:42:14.306 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:14 np0005634017 nova_compute[243452]: 2026-02-28 10:42:14.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:42:14 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.740012319 +0000 UTC m=+0.052781056 container create af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb 28 05:42:14 np0005634017 nova_compute[243452]: 2026-02-28 10:42:14.772 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:14 np0005634017 systemd[1]: Started libpod-conmon-af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c.scope.
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.7089978 +0000 UTC m=+0.021766587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:42:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.826483707 +0000 UTC m=+0.139252434 container init af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.833481995 +0000 UTC m=+0.146250702 container start af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.83964049 +0000 UTC m=+0.152409207 container attach af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:42:14 np0005634017 romantic_tesla[373463]: 167 167
Feb 28 05:42:14 np0005634017 systemd[1]: libpod-af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c.scope: Deactivated successfully.
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.842037887 +0000 UTC m=+0.154806634 container died af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:42:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0b2f2a2fa0801f4f761dbe82efeec91e0ecc0c2dd8b5e6dd24c9bd0c45e0b180-merged.mount: Deactivated successfully.
Feb 28 05:42:14 np0005634017 podman[373446]: 2026-02-28 10:42:14.898255909 +0000 UTC m=+0.211024636 container remove af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:42:14 np0005634017 systemd[1]: libpod-conmon-af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c.scope: Deactivated successfully.
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.045033926 +0000 UTC m=+0.049870474 container create bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:42:15 np0005634017 systemd[1]: Started libpod-conmon-bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8.scope.
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.022344903 +0000 UTC m=+0.027181451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:42:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:42:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.149252977 +0000 UTC m=+0.154089515 container init bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.157784178 +0000 UTC m=+0.162620726 container start bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.167237866 +0000 UTC m=+0.172074424 container attach bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 05:42:15 np0005634017 nova_compute[243452]: 2026-02-28 10:42:15.279 243456 INFO nova.compute.manager [None req-34ba6e3e-c34d-4060-8835-20e824cc17ea ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Get console output#033[00m
Feb 28 05:42:15 np0005634017 nova_compute[243452]: 2026-02-28 10:42:15.290 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:42:15 np0005634017 kind_mahavira[373503]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:42:15 np0005634017 kind_mahavira[373503]: --> All data devices are unavailable
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.673809119 +0000 UTC m=+0.678645657 container died bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:42:15 np0005634017 systemd[1]: libpod-bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8.scope: Deactivated successfully.
Feb 28 05:42:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8-merged.mount: Deactivated successfully.
Feb 28 05:42:15 np0005634017 podman[373486]: 2026-02-28 10:42:15.856782871 +0000 UTC m=+0.861619429 container remove bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:42:15 np0005634017 systemd[1]: libpod-conmon-bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8.scope: Deactivated successfully.
Feb 28 05:42:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:15Z|01532|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:42:15 np0005634017 nova_compute[243452]: 2026-02-28 10:42:15.900 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:15Z|01533|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:42:15 np0005634017 nova_compute[243452]: 2026-02-28 10:42:15.935 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.352955411 +0000 UTC m=+0.056335177 container create f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:42:16 np0005634017 systemd[1]: Started libpod-conmon-f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350.scope.
Feb 28 05:42:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.333586592 +0000 UTC m=+0.036966318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.437596957 +0000 UTC m=+0.140976743 container init f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.447358834 +0000 UTC m=+0.150738560 container start f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.451369067 +0000 UTC m=+0.154748893 container attach f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:42:16 np0005634017 zealous_haibt[373614]: 167 167
Feb 28 05:42:16 np0005634017 systemd[1]: libpod-f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350.scope: Deactivated successfully.
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.455928806 +0000 UTC m=+0.159308542 container died f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 05:42:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7f87d9347988b2ebb9fb058487d99c817e7f2dae02f56d70f2c2a08ff7cfce02-merged.mount: Deactivated successfully.
Feb 28 05:42:16 np0005634017 podman[373598]: 2026-02-28 10:42:16.492861562 +0000 UTC m=+0.196241298 container remove f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:42:16 np0005634017 systemd[1]: libpod-conmon-f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350.scope: Deactivated successfully.
Feb 28 05:42:16 np0005634017 podman[373640]: 2026-02-28 10:42:16.70043259 +0000 UTC m=+0.064815077 container create e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:42:16 np0005634017 systemd[1]: Started libpod-conmon-e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967.scope.
Feb 28 05:42:16 np0005634017 podman[373640]: 2026-02-28 10:42:16.67395539 +0000 UTC m=+0.038337937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:42:16 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:16 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:16 np0005634017 podman[373640]: 2026-02-28 10:42:16.800700499 +0000 UTC m=+0.165083006 container init e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:42:16 np0005634017 podman[373640]: 2026-02-28 10:42:16.8074661 +0000 UTC m=+0.171848547 container start e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:42:16 np0005634017 podman[373640]: 2026-02-28 10:42:16.810917198 +0000 UTC m=+0.175299645 container attach e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:42:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.949 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2 2001:db8:0:1:f816:3eff:feb9:796c 2001:db8::f816:3eff:feb9:796c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:feb9:796c/64 2001:db8::f816:3eff:feb9:796c/64', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9495031a-3350-4b5d-b9e3-f7a6b929d37e) old=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2 2001:db8::f816:3eff:feb9:796c'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:796c/64', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:42:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.951 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9495031a-3350-4b5d-b9e3-f7a6b929d37e in datapath f5ccb81b-dba1-47db-8a77-320af312ccad updated#033[00m
Feb 28 05:42:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.954 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5ccb81b-dba1-47db-8a77-320af312ccad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:42:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71ff1383-5103-4cf1-bf5d-5aa7c4e0ca39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]: {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:    "0": [
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:        {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "devices": [
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "/dev/loop3"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            ],
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_name": "ceph_lv0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_size": "21470642176",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "name": "ceph_lv0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "tags": {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cluster_name": "ceph",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.crush_device_class": "",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.encrypted": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.objectstore": "bluestore",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osd_id": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.type": "block",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.vdo": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.with_tpm": "0"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            },
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "type": "block",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "vg_name": "ceph_vg0"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:        }
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:    ],
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:    "1": [
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:        {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "devices": [
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "/dev/loop4"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            ],
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_name": "ceph_lv1",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_size": "21470642176",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "name": "ceph_lv1",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "tags": {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cluster_name": "ceph",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.crush_device_class": "",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.encrypted": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.objectstore": "bluestore",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osd_id": "1",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.type": "block",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.vdo": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.with_tpm": "0"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            },
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "type": "block",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "vg_name": "ceph_vg1"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:        }
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:    ],
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:    "2": [
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:        {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "devices": [
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "/dev/loop5"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            ],
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_name": "ceph_lv2",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_size": "21470642176",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "name": "ceph_lv2",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "tags": {
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.cluster_name": "ceph",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.crush_device_class": "",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.encrypted": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.objectstore": "bluestore",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osd_id": "2",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.type": "block",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.vdo": "0",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:                "ceph.with_tpm": "0"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            },
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "type": "block",
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:            "vg_name": "ceph_vg2"
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:        }
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]:    ]
Feb 28 05:42:17 np0005634017 thirsty_margulis[373657]: }
Feb 28 05:42:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:42:17 np0005634017 systemd[1]: libpod-e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967.scope: Deactivated successfully.
Feb 28 05:42:17 np0005634017 podman[373640]: 2026-02-28 10:42:17.149006272 +0000 UTC m=+0.513388749 container died e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:42:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab-merged.mount: Deactivated successfully.
Feb 28 05:42:17 np0005634017 podman[373640]: 2026-02-28 10:42:17.191019381 +0000 UTC m=+0.555401848 container remove e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:42:17 np0005634017 systemd[1]: libpod-conmon-e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967.scope: Deactivated successfully.
Feb 28 05:42:17 np0005634017 nova_compute[243452]: 2026-02-28 10:42:17.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:17 np0005634017 nova_compute[243452]: 2026-02-28 10:42:17.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:42:17 np0005634017 nova_compute[243452]: 2026-02-28 10:42:17.493 243456 INFO nova.compute.manager [None req-c3a0c0e0-7076-495b-9581-5ecc53f70879 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Get console output#033[00m
Feb 28 05:42:17 np0005634017 nova_compute[243452]: 2026-02-28 10:42:17.501 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.603174411 +0000 UTC m=+0.038872042 container create c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:42:17 np0005634017 systemd[1]: Started libpod-conmon-c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb.scope.
Feb 28 05:42:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.584976485 +0000 UTC m=+0.020674096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.690964387 +0000 UTC m=+0.126661998 container init c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.698408597 +0000 UTC m=+0.134106178 container start c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.701548606 +0000 UTC m=+0.137246197 container attach c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:42:17 np0005634017 affectionate_feynman[373758]: 167 167
Feb 28 05:42:17 np0005634017 systemd[1]: libpod-c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb.scope: Deactivated successfully.
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.704359706 +0000 UTC m=+0.140057297 container died c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:42:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fcf10f1421ef0613d30ecef5c72d0c7c49c23a9e6845ae49346d7ca04029a0f8-merged.mount: Deactivated successfully.
Feb 28 05:42:17 np0005634017 podman[373742]: 2026-02-28 10:42:17.739555562 +0000 UTC m=+0.175253153 container remove c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:42:17 np0005634017 systemd[1]: libpod-conmon-c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb.scope: Deactivated successfully.
Feb 28 05:42:17 np0005634017 podman[373782]: 2026-02-28 10:42:17.8768489 +0000 UTC m=+0.032401798 container create ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:42:17 np0005634017 nova_compute[243452]: 2026-02-28 10:42:17.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:17 np0005634017 systemd[1]: Started libpod-conmon-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope.
Feb 28 05:42:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:17 np0005634017 podman[373782]: 2026-02-28 10:42:17.954872929 +0000 UTC m=+0.110425897 container init ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:42:17 np0005634017 podman[373782]: 2026-02-28 10:42:17.863393919 +0000 UTC m=+0.018946837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:42:17 np0005634017 podman[373782]: 2026-02-28 10:42:17.963271047 +0000 UTC m=+0.118823945 container start ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:42:17 np0005634017 podman[373782]: 2026-02-28 10:42:17.966754616 +0000 UTC m=+0.122307504 container attach ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:42:18 np0005634017 lvm[373875]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:42:18 np0005634017 lvm[373875]: VG ceph_vg0 finished
Feb 28 05:42:18 np0005634017 lvm[373878]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:42:18 np0005634017 lvm[373878]: VG ceph_vg1 finished
Feb 28 05:42:18 np0005634017 lvm[373880]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:42:18 np0005634017 lvm[373880]: VG ceph_vg2 finished
Feb 28 05:42:18 np0005634017 dreamy_keller[373798]: {}
Feb 28 05:42:18 np0005634017 systemd[1]: libpod-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope: Deactivated successfully.
Feb 28 05:42:18 np0005634017 systemd[1]: libpod-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope: Consumed 1.286s CPU time.
Feb 28 05:42:18 np0005634017 podman[373782]: 2026-02-28 10:42:18.834353673 +0000 UTC m=+0.989906601 container died ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:42:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba-merged.mount: Deactivated successfully.
Feb 28 05:42:18 np0005634017 podman[373782]: 2026-02-28 10:42:18.885480091 +0000 UTC m=+1.041032989 container remove ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:42:18 np0005634017 systemd[1]: libpod-conmon-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope: Deactivated successfully.
Feb 28 05:42:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:42:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:42:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:42:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:42:18 np0005634017 NetworkManager[49805]: <info>  [1772275338.9924] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Feb 28 05:42:18 np0005634017 NetworkManager[49805]: <info>  [1772275338.9938] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Feb 28 05:42:18 np0005634017 nova_compute[243452]: 2026-02-28 10:42:18.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:19 np0005634017 nova_compute[243452]: 2026-02-28 10:42:19.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:19 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:19Z|01534|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 05:42:19 np0005634017 nova_compute[243452]: 2026-02-28 10:42:19.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:42:19 np0005634017 nova_compute[243452]: 2026-02-28 10:42:19.386 243456 INFO nova.compute.manager [None req-9396e5df-4ec8-4712-ac22-e7799c041980 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Get console output#033[00m
Feb 28 05:42:19 np0005634017 nova_compute[243452]: 2026-02-28 10:42:19.395 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 28 05:42:19 np0005634017 nova_compute[243452]: 2026-02-28 10:42:19.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:42:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.106 243456 DEBUG nova.compute.manager [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.107 243456 DEBUG nova.compute.manager [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing instance network info cache due to event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.107 243456 DEBUG oslo_concurrency.lockutils [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.108 243456 DEBUG oslo_concurrency.lockutils [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.108 243456 DEBUG nova.network.neutron [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:42:20 np0005634017 podman[373923]: 2026-02-28 10:42:20.147124956 +0000 UTC m=+0.084523584 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 05:42:20 np0005634017 podman[373924]: 2026-02-28 10:42:20.147560038 +0000 UTC m=+0.084704469 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.170 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.173 243456 INFO nova.compute.manager [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Terminating instance#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.174 243456 DEBUG nova.compute.manager [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:42:20 np0005634017 kernel: tap85b213ac-71 (unregistering): left promiscuous mode
Feb 28 05:42:20 np0005634017 NetworkManager[49805]: <info>  [1772275340.2291] device (tap85b213ac-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:42:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:20Z|01535|binding|INFO|Releasing lport 85b213ac-7186-4120-8f8e-043293c9de8b from this chassis (sb_readonly=0)
Feb 28 05:42:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:20Z|01536|binding|INFO|Setting lport 85b213ac-7186-4120-8f8e-043293c9de8b down in Southbound
Feb 28 05:42:20 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:20Z|01537|binding|INFO|Removing iface tap85b213ac-71 ovn-installed in OVS
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.245 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:23:13 10.100.0.4'], port_security=['fa:16:3e:4e:23:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db7c5780-813a-4d88-b76c-0ab09180b373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5ca6d69-e30b-431e-a08f-47f084fea79e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=85b213ac-7186-4120-8f8e-043293c9de8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.246 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 85b213ac-7186-4120-8f8e-043293c9de8b in datapath 60f40e8c-30be-4a73-8b0b-ca447dd19ffc unbound from our chassis#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.247 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60f40e8c-30be-4a73-8b0b-ca447dd19ffc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f065283-c658-4b46-a024-386fcfbb852c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.249 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc namespace which is not needed anymore#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Deactivated successfully.
Feb 28 05:42:20 np0005634017 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Consumed 12.738s CPU time.
Feb 28 05:42:20 np0005634017 systemd-machined[209480]: Machine qemu-178-instance-00000091 terminated.
Feb 28 05:42:20 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : haproxy version is 2.8.14-c23fe91
Feb 28 05:42:20 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : path to executable is /usr/sbin/haproxy
Feb 28 05:42:20 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [WARNING]  (373244) : Exiting Master process...
Feb 28 05:42:20 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [ALERT]    (373244) : Current worker (373246) exited with code 143 (Terminated)
Feb 28 05:42:20 np0005634017 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [WARNING]  (373244) : All workers exited. Exiting... (0)
Feb 28 05:42:20 np0005634017 systemd[1]: libpod-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41.scope: Deactivated successfully.
Feb 28 05:42:20 np0005634017 podman[373988]: 2026-02-28 10:42:20.40014528 +0000 UTC m=+0.053213467 container died 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.420 243456 INFO nova.virt.libvirt.driver [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance destroyed successfully.#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.421 243456 DEBUG nova.objects.instance [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.435 243456 DEBUG nova.virt.libvirt.vif [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1524442108',display_name='tempest-TestNetworkBasicOps-server-1524442108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1524442108',id=145,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG91PlskTqdY3f/2ETDaAtm5odqY3Pm73UeMDdBa5KYy9HWAlLs8njv3PvKqtqCgQvXgaXApoQMAIbbB474iNBXEqM7bw3A67rDTDaE0oBVKxg+86+w9ZDJ22AtUTnVnBg==',key_name='tempest-TestNetworkBasicOps-1470921471',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:41:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lqpz0euc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:41:56Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.436 243456 DEBUG nova.network.os_vif_util [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.438 243456 DEBUG nova.network.os_vif_util [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.439 243456 DEBUG os_vif [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.445 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b213ac-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.453 243456 INFO os_vif [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71')#033[00m
Feb 28 05:42:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41-userdata-shm.mount: Deactivated successfully.
Feb 28 05:42:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c688fbac57555e23608447219d1d8709e3953d040412dffacb23a65f86321c8e-merged.mount: Deactivated successfully.
Feb 28 05:42:20 np0005634017 podman[373988]: 2026-02-28 10:42:20.526968202 +0000 UTC m=+0.180036389 container cleanup 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 05:42:20 np0005634017 systemd[1]: libpod-conmon-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41.scope: Deactivated successfully.
Feb 28 05:42:20 np0005634017 podman[374045]: 2026-02-28 10:42:20.700777293 +0000 UTC m=+0.141540749 container remove 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99ab1cd3-9d13-4234-98f6-a4ba24e8e121]: (4, ('Sat Feb 28 10:42:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc (1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41)\n1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41\nSat Feb 28 10:42:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc (1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41)\n1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.709 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b407a6-d569-4ae3-924f-39c720ab91f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.711 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f40e8c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:20 np0005634017 kernel: tap60f40e8c-30: left promiscuous mode
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 nova_compute[243452]: 2026-02-28 10:42:20.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e905dd6f-f8f0-4f77-ac6d-83551830bf49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf37fb6-3ec7-4909-bed6-66f726df571e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e79a42d-1d45-487f-b763-27bce101199f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2c07f9-1ff7-4113-92ad-b07c5d4b5243]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682128, 'reachable_time': 43453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374061, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:20 np0005634017 systemd[1]: run-netns-ovnmeta\x2d60f40e8c\x2d30be\x2d4a73\x2d8b0b\x2dca447dd19ffc.mount: Deactivated successfully.
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.768 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:42:20 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.768 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[36d2e593-b127-45a8-b7fa-860763752eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.2 MiB/s wr, 47 op/s
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.140 243456 INFO nova.virt.libvirt.driver [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deleting instance files /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_del#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.142 243456 INFO nova.virt.libvirt.driver [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deletion of /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_del complete#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.244 243456 INFO nova.compute.manager [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.245 243456 DEBUG oslo.service.loopingcall [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.245 243456 DEBUG nova.compute.manager [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.245 243456 DEBUG nova.network.neutron [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.889 243456 DEBUG nova.compute.manager [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-unplugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.890 243456 DEBUG oslo_concurrency.lockutils [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.891 243456 DEBUG oslo_concurrency.lockutils [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.891 243456 DEBUG oslo_concurrency.lockutils [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.892 243456 DEBUG nova.compute.manager [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] No waiting events found dispatching network-vif-unplugged-85b213ac-7186-4120-8f8e-043293c9de8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:42:21 np0005634017 nova_compute[243452]: 2026-02-28 10:42:21.892 243456 DEBUG nova.compute.manager [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-unplugged-85b213ac-7186-4120-8f8e-043293c9de8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.220 243456 DEBUG nova.network.neutron [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.246 243456 INFO nova.compute.manager [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 1.00 seconds to deallocate network for instance.#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.286 243456 DEBUG nova.compute.manager [req-d6b7af44-825d-4d56-9096-504afd821702 req-afe9b870-23e9-45ba-b56a-8c5680eb5ce3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-deleted-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.301 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.302 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.357 243456 DEBUG oslo_concurrency.processutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:22 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:42:22 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683179128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.895 243456 DEBUG oslo_concurrency.processutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.900 243456 DEBUG nova.compute.provider_tree [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.920 243456 DEBUG nova.scheduler.client.report [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.944 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:22 np0005634017 nova_compute[243452]: 2026-02-28 10:42:22.971 243456 INFO nova.scheduler.client.report [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9#033[00m
Feb 28 05:42:23 np0005634017 nova_compute[243452]: 2026-02-28 10:42:23.050 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:23 np0005634017 nova_compute[243452]: 2026-02-28 10:42:23.110 243456 DEBUG nova.network.neutron [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updated VIF entry in instance network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:42:23 np0005634017 nova_compute[243452]: 2026-02-28 10:42:23.111 243456 DEBUG nova.network.neutron [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 6 op/s
Feb 28 05:42:23 np0005634017 nova_compute[243452]: 2026-02-28 10:42:23.133 243456 DEBUG oslo_concurrency.lockutils [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:42:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.009 243456 DEBUG nova.compute.manager [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.010 243456 DEBUG oslo_concurrency.lockutils [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.011 243456 DEBUG oslo_concurrency.lockutils [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.011 243456 DEBUG oslo_concurrency.lockutils [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.012 243456 DEBUG nova.compute.manager [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] No waiting events found dispatching network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.012 243456 WARNING nova.compute.manager [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received unexpected event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:42:24 np0005634017 nova_compute[243452]: 2026-02-28 10:42:24.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 27 op/s
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.670 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.671 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.691 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.779 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.780 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.787 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.788 243456 INFO nova.compute.claims [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:42:25 np0005634017 nova_compute[243452]: 2026-02-28 10:42:25.909 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:26 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:42:26 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227240430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.452 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.461 243456 DEBUG nova.compute.provider_tree [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.479 243456 DEBUG nova.scheduler.client.report [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.501 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.502 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.549 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.549 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.578 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.596 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.696 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.698 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.698 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Creating image(s)#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.725 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.754 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.779 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.783 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.817 243456 DEBUG nova.policy [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.860 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.860 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.861 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.861 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.882 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:26 np0005634017 nova_compute[243452]: 2026-02-28 10:42:26.886 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9a93fbef-9a9c-4d32-b200-626428537bfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 28 op/s
Feb 28 05:42:27 np0005634017 nova_compute[243452]: 2026-02-28 10:42:27.748 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Successfully created port: 5b4f91cf-9f52-4422-873f-f11cf0d49dde _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:42:27 np0005634017 nova_compute[243452]: 2026-02-28 10:42:27.946 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:27 np0005634017 nova_compute[243452]: 2026-02-28 10:42:27.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:28 np0005634017 nova_compute[243452]: 2026-02-28 10:42:28.326 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9a93fbef-9a9c-4d32-b200-626428537bfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:28 np0005634017 nova_compute[243452]: 2026-02-28 10:42:28.422 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:42:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 98 KiB/s wr, 40 op/s
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.160 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Successfully updated port: 5b4f91cf-9f52-4422-873f-f11cf0d49dde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.174 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.174 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.175 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:42:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:42:29
Feb 28 05:42:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:42:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:42:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.rgw.root', '.mgr']
Feb 28 05:42:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.277 243456 DEBUG nova.objects.instance [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.286 243456 DEBUG nova.compute.manager [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.287 243456 DEBUG nova.compute.manager [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing instance network info cache due to event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.287 243456 DEBUG oslo_concurrency.lockutils [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.306 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.306 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Ensure instance console log exists: /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.307 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.307 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.307 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.653 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:42:29 np0005634017 nova_compute[243452]: 2026-02-28 10:42:29.779 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:42:30 np0005634017 nova_compute[243452]: 2026-02-28 10:42:30.450 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:42:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:42:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Feb 28 05:42:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 05:42:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:34 np0005634017 nova_compute[243452]: 2026-02-28 10:42:34.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Feb 28 05:42:35 np0005634017 nova_compute[243452]: 2026-02-28 10:42:35.417 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275340.415453, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:42:35 np0005634017 nova_compute[243452]: 2026-02-28 10:42:35.417 243456 INFO nova.compute.manager [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:42:35 np0005634017 nova_compute[243452]: 2026-02-28 10:42:35.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:35 np0005634017 nova_compute[243452]: 2026-02-28 10:42:35.535 243456 DEBUG nova.compute.manager [None req-2b7a500c-2e80-48f5-9ddf-ccf5d2c6917a - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:36 np0005634017 nova_compute[243452]: 2026-02-28 10:42:36.168 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.351 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.352 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance network_info: |[{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.353 243456 DEBUG oslo_concurrency.lockutils [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.354 243456 DEBUG nova.network.neutron [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.361 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start _get_guest_xml network_info=[{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.368 243456 WARNING nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.380 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.381 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.391 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.392 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.392 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.393 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.394 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.394 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.394 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.395 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.395 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.396 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.396 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.397 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.397 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.398 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.403 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:42:37 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526933911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.941 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.966 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:37 np0005634017 nova_compute[243452]: 2026-02-28 10:42:37.973 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:38 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:42:38 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358285007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.549 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.551 243456 DEBUG nova.virt.libvirt.vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-344642459',display_name='tempest-TestGettingAddress-server-344642459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-344642459',id=146,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-r4xtm6sn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:42:26Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9a93fbef-9a9c-4d32-b200-626428537bfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.552 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.554 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.555 243456 DEBUG nova.objects.instance [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.631 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <uuid>9a93fbef-9a9c-4d32-b200-626428537bfa</uuid>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <name>instance-00000092</name>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-344642459</nova:name>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:42:37</nova:creationTime>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <nova:port uuid="5b4f91cf-9f52-4422-873f-f11cf0d49dde">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe54:61bf" ipVersion="6"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe54:61bf" ipVersion="6"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <entry name="serial">9a93fbef-9a9c-4d32-b200-626428537bfa</entry>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <entry name="uuid">9a93fbef-9a9c-4d32-b200-626428537bfa</entry>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9a93fbef-9a9c-4d32-b200-626428537bfa_disk">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:54:61:bf"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <target dev="tap5b4f91cf-9f"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/console.log" append="off"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:42:38 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:42:38 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:42:38 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:42:38 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.633 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Preparing to wait for external event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.633 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.634 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.634 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.635 243456 DEBUG nova.virt.libvirt.vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-344642459',display_name='tempest-TestGettingAddress-server-344642459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-344642459',id=146,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-r4xtm6sn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:42:26Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9a93fbef-9a9c-4d32-b200-626428537bfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.636 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.638 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.638 243456 DEBUG os_vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.640 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.641 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.645 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b4f91cf-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.646 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b4f91cf-9f, col_values=(('external_ids', {'iface-id': '5b4f91cf-9f52-4422-873f-f11cf0d49dde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:61:bf', 'vm-uuid': '9a93fbef-9a9c-4d32-b200-626428537bfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:38 np0005634017 NetworkManager[49805]: <info>  [1772275358.6780] manager: (tap5b4f91cf-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.685 243456 INFO os_vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f')#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.878 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.878 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.879 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:54:61:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.880 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Using config drive#033[00m
Feb 28 05:42:38 np0005634017 nova_compute[243452]: 2026-02-28 10:42:38.917 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:42:39 np0005634017 nova_compute[243452]: 2026-02-28 10:42:39.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:39 np0005634017 nova_compute[243452]: 2026-02-28 10:42:39.898 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Creating config drive at /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config#033[00m
Feb 28 05:42:39 np0005634017 nova_compute[243452]: 2026-02-28 10:42:39.904 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmhackzgn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.063 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmhackzgn" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.103 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.108 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.270 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.271 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deleting local config drive /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config because it was imported into RBD.#033[00m
Feb 28 05:42:40 np0005634017 kernel: tap5b4f91cf-9f: entered promiscuous mode
Feb 28 05:42:40 np0005634017 NetworkManager[49805]: <info>  [1772275360.3528] manager: (tap5b4f91cf-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Feb 28 05:42:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:40Z|01538|binding|INFO|Claiming lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde for this chassis.
Feb 28 05:42:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:40Z|01539|binding|INFO|5b4f91cf-9f52-4422-873f-f11cf0d49dde: Claiming fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.358 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.372 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], port_security=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe54:61bf/64 2001:db8::f816:3eff:fe54:61bf/64', 'neutron:device_id': '9a93fbef-9a9c-4d32-b200-626428537bfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5b4f91cf-9f52-4422-873f-f11cf0d49dde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.374 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5b4f91cf-9f52-4422-873f-f11cf0d49dde in datapath f5ccb81b-dba1-47db-8a77-320af312ccad bound to our chassis#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.375 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5ccb81b-dba1-47db-8a77-320af312ccad#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 systemd-machined[209480]: New machine qemu-179-instance-00000092.
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6a42ee-8f4c-4b63-89de-5db7dfcf831f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.391 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5ccb81b-d1 in ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:42:40 np0005634017 systemd[1]: Started Virtual Machine qemu-179-instance-00000092.
Feb 28 05:42:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:40Z|01540|binding|INFO|Setting lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde ovn-installed in OVS
Feb 28 05:42:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:40Z|01541|binding|INFO|Setting lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde up in Southbound
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.396 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5ccb81b-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[104941a9-01f1-4739-bb5c-87d090218bda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.397 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f785cf-d7ff-4954-87ae-c144459f844d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 systemd-udevd[374415]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.409 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[91f0d243-fcb6-492d-928c-017a5ac5482d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 NetworkManager[49805]: <info>  [1772275360.4191] device (tap5b4f91cf-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:42:40 np0005634017 NetworkManager[49805]: <info>  [1772275360.4214] device (tap5b4f91cf-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.440 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c688dc-ac83-4823-92c4-343f3316e827]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.476 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae54af0d-5aec-4ad1-873c-a739990880c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 systemd-udevd[374420]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:42:40 np0005634017 NetworkManager[49805]: <info>  [1772275360.4833] manager: (tapf5ccb81b-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/641)
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.483 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[178be08e-f6e6-4ab5-899a-eae623263f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.519 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b04910-c8a5-4256-927f-684227716512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.523 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ed73522b-7a4b-41ba-b6ca-9b12c705d3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 NetworkManager[49805]: <info>  [1772275360.5526] device (tapf5ccb81b-d0): carrier: link connected
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.557 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9a157e12-176a-4aaf-9f41-60eae296116f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.573 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5b990e-d169-4d7a-b71e-ff6345e1de71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374476, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.585 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1797e53-1a34-434a-860a-4a94faaa707c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:796c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686783, 'tstamp': 686783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374484, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf806ede-e0a9-4ddd-93cf-4b341017fa18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374486, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5d44aa-a649-4c15-a241-c8078c269443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.667 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bdf3d1-619c-4553-96fb-a84737d6f006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.668 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.668 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.669 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5ccb81b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 NetworkManager[49805]: <info>  [1772275360.6713] manager: (tapf5ccb81b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Feb 28 05:42:40 np0005634017 kernel: tapf5ccb81b-d0: entered promiscuous mode
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5ccb81b-d0, col_values=(('external_ids', {'iface-id': '9495031a-3350-4b5d-b9e3-f7a6b929d37e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:40Z|01542|binding|INFO|Releasing lport 9495031a-3350-4b5d-b9e3-f7a6b929d37e from this chassis (sb_readonly=0)
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.677 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5ccb81b-dba1-47db-8a77-320af312ccad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5ccb81b-dba1-47db-8a77-320af312ccad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.677 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bee7a3f0-5f57-455c-8299-4cbd0b67c3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.678 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/f5ccb81b-dba1-47db-8a77-320af312ccad.pid.haproxy
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:42:40 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.679 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'env', 'PROCESS_TAG=haproxy-f5ccb81b-dba1-47db-8a77-320af312ccad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5ccb81b-dba1-47db-8a77-320af312ccad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.683 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275360.6829443, 9a93fbef-9a9c-4d32-b200-626428537bfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.684 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Started (Lifecycle Event)#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.724 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.729 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275360.683456, 9a93fbef-9a9c-4d32-b200-626428537bfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.729 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.764 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.769 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:42:40 np0005634017 nova_compute[243452]: 2026-02-28 10:42:40.799 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:42:40 np0005634017 podman[374523]: 2026-02-28 10:42:40.990678559 +0000 UTC m=+0.060856334 container create 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:42:41 np0005634017 systemd[1]: Started libpod-conmon-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e.scope.
Feb 28 05:42:41 np0005634017 podman[374523]: 2026-02-28 10:42:40.951706196 +0000 UTC m=+0.021884011 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:42:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:42:41 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3621c3eb848faba7f94d263f84c1b37e270520b2c10993c5bb339bb0de8e7c9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:42:41 np0005634017 podman[374523]: 2026-02-28 10:42:41.083771795 +0000 UTC m=+0.153949550 container init 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:42:41 np0005634017 podman[374523]: 2026-02-28 10:42:41.087935743 +0000 UTC m=+0.158113478 container start 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:42:41 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : New worker (374545) forked
Feb 28 05:42:41 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : Loading success.
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003626374843771541 of space, bias 1.0, pg target 0.10879124531314624 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493924862349368 of space, bias 1.0, pg target 0.7481774587048103 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.286538771604307e-07 of space, bias 4.0, pg target 0.0008743846525925168 quantized to 16 (current 16)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:42:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.940 243456 DEBUG nova.compute.manager [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.941 243456 DEBUG oslo_concurrency.lockutils [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.942 243456 DEBUG oslo_concurrency.lockutils [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.942 243456 DEBUG oslo_concurrency.lockutils [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.942 243456 DEBUG nova.compute.manager [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Processing event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.943 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.949 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275361.9487586, 9a93fbef-9a9c-4d32-b200-626428537bfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.949 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.951 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.954 243456 INFO nova.virt.libvirt.driver [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance spawned successfully.#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.954 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.959 243456 DEBUG nova.network.neutron [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated VIF entry in instance network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.960 243456 DEBUG nova.network.neutron [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:41 np0005634017 nova_compute[243452]: 2026-02-28 10:42:41.981 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.174 243456 DEBUG oslo_concurrency.lockutils [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.177 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.183 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.183 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.184 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.185 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.186 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.187 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.353 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.418 243456 INFO nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 15.72 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.419 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.697 243456 INFO nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 16.95 seconds to build instance.#033[00m
Feb 28 05:42:42 np0005634017 nova_compute[243452]: 2026-02-28 10:42:42.777 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 582 KiB/s wr, 36 op/s
Feb 28 05:42:43 np0005634017 nova_compute[243452]: 2026-02-28 10:42:43.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.136 243456 DEBUG nova.compute.manager [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.138 243456 DEBUG oslo_concurrency.lockutils [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 DEBUG oslo_concurrency.lockutils [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 DEBUG oslo_concurrency.lockutils [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 DEBUG nova.compute.manager [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] No waiting events found dispatching network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 WARNING nova.compute.manager [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received unexpected event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde for instance with vm_state active and task_state None.#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:44 np0005634017 nova_compute[243452]: 2026-02-28 10:42:44.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 51 op/s
Feb 28 05:42:45 np0005634017 nova_compute[243452]: 2026-02-28 10:42:45.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:45 np0005634017 nova_compute[243452]: 2026-02-28 10:42:45.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:42:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:42:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961663572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:42:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:42:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961663572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:42:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:42:47 np0005634017 nova_compute[243452]: 2026-02-28 10:42:47.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:48Z|01543|binding|INFO|Releasing lport 9495031a-3350-4b5d-b9e3-f7a6b929d37e from this chassis (sb_readonly=0)
Feb 28 05:42:48 np0005634017 NetworkManager[49805]: <info>  [1772275368.4953] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Feb 28 05:42:48 np0005634017 NetworkManager[49805]: <info>  [1772275368.4970] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:48Z|01544|binding|INFO|Releasing lport 9495031a-3350-4b5d-b9e3-f7a6b929d37e from this chassis (sb_readonly=0)
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.510 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.770 243456 DEBUG nova.compute.manager [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.770 243456 DEBUG nova.compute.manager [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing instance network info cache due to event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.771 243456 DEBUG oslo_concurrency.lockutils [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.771 243456 DEBUG oslo_concurrency.lockutils [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:42:48 np0005634017 nova_compute[243452]: 2026-02-28 10:42:48.772 243456 DEBUG nova.network.neutron [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:42:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:42:49 np0005634017 nova_compute[243452]: 2026-02-28 10:42:49.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:50 np0005634017 nova_compute[243452]: 2026-02-28 10:42:50.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:50 np0005634017 nova_compute[243452]: 2026-02-28 10:42:50.647 243456 DEBUG nova.network.neutron [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated VIF entry in instance network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:42:50 np0005634017 nova_compute[243452]: 2026-02-28 10:42:50.648 243456 DEBUG nova.network.neutron [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:50 np0005634017 nova_compute[243452]: 2026-02-28 10:42:50.785 243456 DEBUG oslo_concurrency.lockutils [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:42:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:42:51 np0005634017 podman[374556]: 2026-02-28 10:42:51.240581424 +0000 UTC m=+0.049298357 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 05:42:51 np0005634017 podman[374555]: 2026-02-28 10:42:51.293041059 +0000 UTC m=+0.098317384 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.665 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.666 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.666 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:42:51 np0005634017 nova_compute[243452]: 2026-02-28 10:42:51.667 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:42:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 91 op/s
Feb 28 05:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:53Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:61:bf 10.100.0.14
Feb 28 05:42:53 np0005634017 ovn_controller[146846]: 2026-02-28T10:42:53Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:61:bf 10.100.0.14
Feb 28 05:42:53 np0005634017 nova_compute[243452]: 2026-02-28 10:42:53.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:54 np0005634017 nova_compute[243452]: 2026-02-28 10:42:54.331 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:42:54 np0005634017 nova_compute[243452]: 2026-02-28 10:42:54.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:42:54 np0005634017 nova_compute[243452]: 2026-02-28 10:42:54.350 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:42:54 np0005634017 nova_compute[243452]: 2026-02-28 10:42:54.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 230 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.340 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.340 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:42:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080692160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:42:56 np0005634017 nova_compute[243452]: 2026-02-28 10:42:56.946 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.012 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.012 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:42:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.174 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.175 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3386MB free_disk=59.9426728002727GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.175 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.176 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.244 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9a93fbef-9a9c-4d32-b200-626428537bfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.245 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.245 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.291 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:42:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:42:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156772642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.811 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.817 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.831 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.854 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:42:57 np0005634017 nova_compute[243452]: 2026-02-28 10:42:57.855 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:42:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:57.883 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:42:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:42:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:42:58 np0005634017 nova_compute[243452]: 2026-02-28 10:42:58.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:42:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:42:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:42:59 np0005634017 nova_compute[243452]: 2026-02-28 10:42:59.794 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:43:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:43:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:43:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:43:03 np0005634017 nova_compute[243452]: 2026-02-28 10:43:03.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:04 np0005634017 nova_compute[243452]: 2026-02-28 10:43:04.795 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 855 KiB/s wr, 44 op/s
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.789 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.789 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.808 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.904 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.905 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.914 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:43:06 np0005634017 nova_compute[243452]: 2026-02-28 10:43:06.915 243456 INFO nova.compute.claims [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.051 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 74 KiB/s wr, 19 op/s
Feb 28 05:43:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:43:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416797750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.623 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.630 243456 DEBUG nova.compute.provider_tree [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.648 243456 DEBUG nova.scheduler.client.report [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.673 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.675 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.719 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.719 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.736 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.752 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.832 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.833 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.834 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Creating image(s)#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.864 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.898 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.929 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:07 np0005634017 nova_compute[243452]: 2026-02-28 10:43:07.933 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.003 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.005 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.038 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.043 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4717a174-511e-4100-af09-e351eb2784a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.216 243456 DEBUG nova.policy [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.348 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4717a174-511e-4100-af09-e351eb2784a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.447 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.556 243456 DEBUG nova.objects.instance [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 4717a174-511e-4100-af09-e351eb2784a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.574 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.575 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Ensure instance console log exists: /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.576 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.577 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.577 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:08 np0005634017 nova_compute[243452]: 2026-02-28 10:43:08.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 73 KiB/s wr, 11 op/s
Feb 28 05:43:09 np0005634017 nova_compute[243452]: 2026-02-28 10:43:09.246 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Successfully created port: 95742053-49d5-4e84-9dde-4a6563ebf953 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:43:09 np0005634017 nova_compute[243452]: 2026-02-28 10:43:09.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.279 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Successfully updated port: 95742053-49d5-4e84-9dde-4a6563ebf953 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.316 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.317 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.318 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.365 243456 DEBUG nova.compute.manager [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.366 243456 DEBUG nova.compute.manager [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing instance network info cache due to event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.366 243456 DEBUG oslo_concurrency.lockutils [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:43:10 np0005634017 nova_compute[243452]: 2026-02-28 10:43:10.462 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:43:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:43:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 11K writes, 50K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1386 writes, 6266 keys, 1386 commit groups, 1.0 writes per commit group, ingest: 8.95 MB, 0.01 MB/s
Interval WAL: 1386 writes, 1386 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.0      0.79              0.15        34    0.023       0      0       0.0       0.0
  L6      1/0    7.98 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8    162.8    137.7      2.04              0.74        33    0.062    196K    17K       0.0       0.0
 Sum      1/0    7.98 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.8    117.4    120.3      2.82              0.90        67    0.042    196K    17K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.4    162.9    162.5      0.33              0.15        10    0.033     37K   2498       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    162.8    137.7      2.04              0.74        33    0.062    196K    17K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     75.5      0.78              0.15        33    0.024       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.058, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.33 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 2.8 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 37.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000391 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2363,35.84 MB,11.7885%) FilterBlock(68,557.17 KB,0.178985%) IndexBlock(68,945.70 KB,0.303795%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 28 05:43:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 256 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.848 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.875 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.876 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance network_info: |[{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.877 243456 DEBUG oslo_concurrency.lockutils [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.878 243456 DEBUG nova.network.neutron [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.884 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start _get_guest_xml network_info=[{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.891 243456 WARNING nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.896 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.897 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.908 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.908 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.909 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.910 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.911 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.911 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.911 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.912 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.912 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.913 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.913 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.914 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.914 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.915 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:43:11 np0005634017 nova_compute[243452]: 2026-02-28 10:43:11.920 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:43:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634588789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:43:12 np0005634017 nova_compute[243452]: 2026-02-28 10:43:12.578 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:12 np0005634017 nova_compute[243452]: 2026-02-28 10:43:12.604 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:12 np0005634017 nova_compute[243452]: 2026-02-28 10:43:12.610 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:43:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043135003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:43:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.199 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.201 243456 DEBUG nova.virt.libvirt.vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-612443702',display_name='tempest-TestGettingAddress-server-612443702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-612443702',id=147,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7oml1apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:43:07Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=4717a174-511e-4100-af09-e351eb2784a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.202 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.203 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.204 243456 DEBUG nova.objects.instance [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4717a174-511e-4100-af09-e351eb2784a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.224 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <uuid>4717a174-511e-4100-af09-e351eb2784a3</uuid>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <name>instance-00000093</name>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-612443702</nova:name>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:43:11</nova:creationTime>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <nova:port uuid="95742053-49d5-4e84-9dde-4a6563ebf953">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee3:bf44" ipVersion="6"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee3:bf44" ipVersion="6"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <entry name="serial">4717a174-511e-4100-af09-e351eb2784a3</entry>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <entry name="uuid">4717a174-511e-4100-af09-e351eb2784a3</entry>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4717a174-511e-4100-af09-e351eb2784a3_disk">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4717a174-511e-4100-af09-e351eb2784a3_disk.config">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e3:bf:44"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <target dev="tap95742053-49"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/console.log" append="off"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:43:13 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:43:13 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:43:13 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:43:13 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.225 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Preparing to wait for external event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.226 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.226 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.226 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.227 243456 DEBUG nova.virt.libvirt.vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-612443702',display_name='tempest-TestGettingAddress-server-612443702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-612443702',id=147,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7oml1apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:43:07Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=4717a174-511e-4100-af09-e351eb2784a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.227 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.228 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.228 243456 DEBUG os_vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.229 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.231 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95742053-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.237 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95742053-49, col_values=(('external_ids', {'iface-id': '95742053-49d5-4e84-9dde-4a6563ebf953', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:bf:44', 'vm-uuid': '4717a174-511e-4100-af09-e351eb2784a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:13 np0005634017 NetworkManager[49805]: <info>  [1772275393.2402] manager: (tap95742053-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.251 243456 INFO os_vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49')#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.298 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.298 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.298 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:e3:bf:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.299 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Using config drive#033[00m
Feb 28 05:43:13 np0005634017 nova_compute[243452]: 2026-02-28 10:43:13.318 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:14 np0005634017 nova_compute[243452]: 2026-02-28 10:43:14.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.190 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Creating config drive at /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.195 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy5zpbeyu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.244 243456 DEBUG nova.network.neutron [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updated VIF entry in instance network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.246 243456 DEBUG nova.network.neutron [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.268 243456 DEBUG oslo_concurrency.lockutils [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.349 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy5zpbeyu" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.386 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.391 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config 4717a174-511e-4100-af09-e351eb2784a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.565 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config 4717a174-511e-4100-af09-e351eb2784a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.567 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deleting local config drive /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config because it was imported into RBD.#033[00m
Feb 28 05:43:15 np0005634017 kernel: tap95742053-49: entered promiscuous mode
Feb 28 05:43:15 np0005634017 NetworkManager[49805]: <info>  [1772275395.6287] manager: (tap95742053-49): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Feb 28 05:43:15 np0005634017 systemd-udevd[374962]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:43:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:15Z|01545|binding|INFO|Claiming lport 95742053-49d5-4e84-9dde-4a6563ebf953 for this chassis.
Feb 28 05:43:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:15Z|01546|binding|INFO|95742053-49d5-4e84-9dde-4a6563ebf953: Claiming fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:15Z|01547|binding|INFO|Setting lport 95742053-49d5-4e84-9dde-4a6563ebf953 ovn-installed in OVS
Feb 28 05:43:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:15Z|01548|binding|INFO|Setting lport 95742053-49d5-4e84-9dde-4a6563ebf953 up in Southbound
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.699 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], port_security=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fee3:bf44/64 2001:db8::f816:3eff:fee3:bf44/64', 'neutron:device_id': '4717a174-511e-4100-af09-e351eb2784a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=95742053-49d5-4e84-9dde-4a6563ebf953) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:43:15 np0005634017 NetworkManager[49805]: <info>  [1772275395.7020] device (tap95742053-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.700 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 95742053-49d5-4e84-9dde-4a6563ebf953 in datapath f5ccb81b-dba1-47db-8a77-320af312ccad bound to our chassis#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.701 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5ccb81b-dba1-47db-8a77-320af312ccad#033[00m
Feb 28 05:43:15 np0005634017 NetworkManager[49805]: <info>  [1772275395.7041] device (tap95742053-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.722 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e966d291-7ad6-4539-8980-f5f5d49bc33e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:15 np0005634017 systemd-machined[209480]: New machine qemu-180-instance-00000093.
Feb 28 05:43:15 np0005634017 systemd[1]: Started Virtual Machine qemu-180-instance-00000093.
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.746 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9429d6ca-5227-402c-a745-8b7f0343452f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.749 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6d2cea-1e21-4e2a-87cc-d56139cf4754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.769 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[78db983f-a43d-4de7-89fa-c8ea1370eeec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[86a5228f-624c-43a5-8dd9-8b2a3240affc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374977, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.808 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91d2f732-40d7-4950-a89c-9e39bf633536]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686793, 'tstamp': 686793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374979, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686795, 'tstamp': 686795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374979, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.810 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:15 np0005634017 nova_compute[243452]: 2026-02-28 10:43:15.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5ccb81b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5ccb81b-d0, col_values=(('external_ids', {'iface-id': '9495031a-3350-4b5d-b9e3-f7a6b929d37e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:15 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.816 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.061 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275396.0609992, 4717a174-511e-4100-af09-e351eb2784a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.062 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Started (Lifecycle Event)#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.093 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.097 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275396.0620372, 4717a174-511e-4100-af09-e351eb2784a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.119 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.125 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.151 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.289 243456 DEBUG nova.compute.manager [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.289 243456 DEBUG oslo_concurrency.lockutils [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.290 243456 DEBUG oslo_concurrency.lockutils [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.290 243456 DEBUG oslo_concurrency.lockutils [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.290 243456 DEBUG nova.compute.manager [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Processing event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.291 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.294 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275396.294668, 4717a174-511e-4100-af09-e351eb2784a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.295 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.296 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.300 243456 INFO nova.virt.libvirt.driver [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance spawned successfully.#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.300 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.317 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.326 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.326 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.327 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.327 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.327 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.328 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.332 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.358 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.381 243456 INFO nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 8.55 seconds to spawn the instance on the hypervisor.
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.382 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:43:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:16.416 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 05:43:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:16.417 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 05:43:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:16.418 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.434 243456 INFO nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 9.57 seconds to build instance.
Feb 28 05:43:16 np0005634017 nova_compute[243452]: 2026-02-28 10:43:16.449 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:43:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.384 243456 DEBUG nova.compute.manager [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG oslo_concurrency.lockutils [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG oslo_concurrency.lockutils [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG oslo_concurrency.lockutils [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG nova.compute.manager [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] No waiting events found dispatching network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:43:18 np0005634017 nova_compute[243452]: 2026-02-28 10:43:18.386 243456 WARNING nova.compute.manager [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received unexpected event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 for instance with vm_state active and task_state None.
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 897 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:43:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:43:19 np0005634017 nova_compute[243452]: 2026-02-28 10:43:19.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.235597279 +0000 UTC m=+0.065997610 container create ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:43:20 np0005634017 systemd[1]: Started libpod-conmon-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope.
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.205857387 +0000 UTC m=+0.036257738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:43:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.335044415 +0000 UTC m=+0.165444786 container init ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.342952739 +0000 UTC m=+0.173353050 container start ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.347431226 +0000 UTC m=+0.177831567 container attach ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:43:20 np0005634017 systemd[1]: libpod-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope: Deactivated successfully.
Feb 28 05:43:20 np0005634017 romantic_elbakyan[375185]: 167 167
Feb 28 05:43:20 np0005634017 conmon[375185]: conmon ce26e6580e8595aa62ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope/container/memory.events
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.35147148 +0000 UTC m=+0.181871821 container died ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:43:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a56a783a4d6015f26a451843b8d04fc3623b3b11bbbd83165df17784963e945f-merged.mount: Deactivated successfully.
Feb 28 05:43:20 np0005634017 nova_compute[243452]: 2026-02-28 10:43:20.396 243456 DEBUG nova.compute.manager [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:43:20 np0005634017 nova_compute[243452]: 2026-02-28 10:43:20.397 243456 DEBUG nova.compute.manager [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing instance network info cache due to event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 05:43:20 np0005634017 nova_compute[243452]: 2026-02-28 10:43:20.398 243456 DEBUG oslo_concurrency.lockutils [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 05:43:20 np0005634017 nova_compute[243452]: 2026-02-28 10:43:20.398 243456 DEBUG oslo_concurrency.lockutils [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 05:43:20 np0005634017 nova_compute[243452]: 2026-02-28 10:43:20.398 243456 DEBUG nova.network.neutron [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 05:43:20 np0005634017 podman[375168]: 2026-02-28 10:43:20.419370743 +0000 UTC m=+0.249771084 container remove ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:43:20 np0005634017 systemd[1]: libpod-conmon-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope: Deactivated successfully.
Feb 28 05:43:20 np0005634017 podman[375209]: 2026-02-28 10:43:20.594614395 +0000 UTC m=+0.049469892 container create 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:43:20 np0005634017 systemd[1]: Started libpod-conmon-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope.
Feb 28 05:43:20 np0005634017 podman[375209]: 2026-02-28 10:43:20.579111326 +0000 UTC m=+0.033966843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:43:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:43:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:20 np0005634017 podman[375209]: 2026-02-28 10:43:20.745685993 +0000 UTC m=+0.200541600 container init 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:43:20 np0005634017 podman[375209]: 2026-02-28 10:43:20.758813285 +0000 UTC m=+0.213668822 container start 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:43:20 np0005634017 podman[375209]: 2026-02-28 10:43:20.765194675 +0000 UTC m=+0.220050212 container attach 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:43:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:43:21 np0005634017 sweet_cartwright[375225]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:43:21 np0005634017 sweet_cartwright[375225]: --> All data devices are unavailable
Feb 28 05:43:21 np0005634017 systemd[1]: libpod-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope: Deactivated successfully.
Feb 28 05:43:21 np0005634017 conmon[375225]: conmon 922d1999805dca416dd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope/container/memory.events
Feb 28 05:43:21 np0005634017 podman[375209]: 2026-02-28 10:43:21.298644481 +0000 UTC m=+0.753499998 container died 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:43:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26-merged.mount: Deactivated successfully.
Feb 28 05:43:21 np0005634017 podman[375209]: 2026-02-28 10:43:21.347839914 +0000 UTC m=+0.802695411 container remove 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:43:21 np0005634017 systemd[1]: libpod-conmon-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope: Deactivated successfully.
Feb 28 05:43:21 np0005634017 podman[375245]: 2026-02-28 10:43:21.421743026 +0000 UTC m=+0.096386910 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:43:21 np0005634017 podman[375249]: 2026-02-28 10:43:21.429747513 +0000 UTC m=+0.099772536 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.796672183 +0000 UTC m=+0.048287778 container create 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:43:21 np0005634017 systemd[1]: Started libpod-conmon-26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083.scope.
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.771214272 +0000 UTC m=+0.022829877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:43:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.908535879 +0000 UTC m=+0.160151484 container init 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.91951779 +0000 UTC m=+0.171133395 container start 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.923927175 +0000 UTC m=+0.175542780 container attach 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:43:21 np0005634017 nostalgic_wright[375380]: 167 167
Feb 28 05:43:21 np0005634017 systemd[1]: libpod-26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083.scope: Deactivated successfully.
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.927855356 +0000 UTC m=+0.179470931 container died 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:43:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fdaca3247c9298b85a94a04883d1e549e041aa89efcca6cf4ccc2bf3f42f1dc6-merged.mount: Deactivated successfully.
Feb 28 05:43:21 np0005634017 podman[375364]: 2026-02-28 10:43:21.972916752 +0000 UTC m=+0.224532357 container remove 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:43:22 np0005634017 systemd[1]: libpod-conmon-26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083.scope: Deactivated successfully.
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.158917699 +0000 UTC m=+0.044464430 container create 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 05:43:22 np0005634017 nova_compute[243452]: 2026-02-28 10:43:22.173 243456 DEBUG nova.network.neutron [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updated VIF entry in instance network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:43:22 np0005634017 nova_compute[243452]: 2026-02-28 10:43:22.175 243456 DEBUG nova.network.neutron [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:22 np0005634017 nova_compute[243452]: 2026-02-28 10:43:22.193 243456 DEBUG oslo_concurrency.lockutils [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:43:22 np0005634017 systemd[1]: Started libpod-conmon-43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d.scope.
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.13986794 +0000 UTC m=+0.025414691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:43:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:43:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.272740862 +0000 UTC m=+0.158287613 container init 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.279279897 +0000 UTC m=+0.164826628 container start 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.282699574 +0000 UTC m=+0.168246325 container attach 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]: {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:    "0": [
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:        {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "devices": [
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "/dev/loop3"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            ],
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_name": "ceph_lv0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_size": "21470642176",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "name": "ceph_lv0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "tags": {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cluster_name": "ceph",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.crush_device_class": "",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.encrypted": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.objectstore": "bluestore",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osd_id": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.type": "block",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.vdo": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.with_tpm": "0"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            },
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "type": "block",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "vg_name": "ceph_vg0"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:        }
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:    ],
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:    "1": [
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:        {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "devices": [
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "/dev/loop4"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            ],
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_name": "ceph_lv1",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_size": "21470642176",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "name": "ceph_lv1",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "tags": {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cluster_name": "ceph",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.crush_device_class": "",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.encrypted": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.objectstore": "bluestore",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osd_id": "1",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.type": "block",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.vdo": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.with_tpm": "0"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            },
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "type": "block",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "vg_name": "ceph_vg1"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:        }
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:    ],
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:    "2": [
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:        {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "devices": [
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "/dev/loop5"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            ],
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_name": "ceph_lv2",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_size": "21470642176",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "name": "ceph_lv2",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "tags": {
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.cluster_name": "ceph",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.crush_device_class": "",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.encrypted": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.objectstore": "bluestore",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osd_id": "2",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.type": "block",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.vdo": "0",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:                "ceph.with_tpm": "0"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            },
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "type": "block",
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:            "vg_name": "ceph_vg2"
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:        }
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]:    ]
Feb 28 05:43:22 np0005634017 goofy_pascal[375422]: }
Feb 28 05:43:22 np0005634017 systemd[1]: libpod-43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d.scope: Deactivated successfully.
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.606156743 +0000 UTC m=+0.491703514 container died 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:43:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5-merged.mount: Deactivated successfully.
Feb 28 05:43:22 np0005634017 podman[375405]: 2026-02-28 10:43:22.652989769 +0000 UTC m=+0.538536530 container remove 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:43:22 np0005634017 systemd[1]: libpod-conmon-43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d.scope: Deactivated successfully.
Feb 28 05:43:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 815 KiB/s wr, 87 op/s
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.220218901 +0000 UTC m=+0.044416679 container create b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:43:23 np0005634017 nova_compute[243452]: 2026-02-28 10:43:23.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:23 np0005634017 systemd[1]: Started libpod-conmon-b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5.scope.
Feb 28 05:43:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.203556899 +0000 UTC m=+0.027754687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.314721317 +0000 UTC m=+0.138919175 container init b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.322505257 +0000 UTC m=+0.146703065 container start b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.325897853 +0000 UTC m=+0.150095661 container attach b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 05:43:23 np0005634017 cranky_sanderson[375523]: 167 167
Feb 28 05:43:23 np0005634017 systemd[1]: libpod-b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5.scope: Deactivated successfully.
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.330424472 +0000 UTC m=+0.154622250 container died b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:43:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-34d21a063bea2ae7063dd93a13e4cc4955f6b7f97de73eeb1b5c822779e7d197-merged.mount: Deactivated successfully.
Feb 28 05:43:23 np0005634017 podman[375506]: 2026-02-28 10:43:23.373523252 +0000 UTC m=+0.197721020 container remove b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:43:23 np0005634017 systemd[1]: libpod-conmon-b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5.scope: Deactivated successfully.
Feb 28 05:43:23 np0005634017 podman[375550]: 2026-02-28 10:43:23.552455949 +0000 UTC m=+0.062308106 container create 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:43:23 np0005634017 systemd[1]: Started libpod-conmon-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope.
Feb 28 05:43:23 np0005634017 podman[375550]: 2026-02-28 10:43:23.526743741 +0000 UTC m=+0.036595958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:43:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:43:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:43:23 np0005634017 podman[375550]: 2026-02-28 10:43:23.657544534 +0000 UTC m=+0.167396741 container init 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:43:23 np0005634017 podman[375550]: 2026-02-28 10:43:23.667104325 +0000 UTC m=+0.176956462 container start 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:43:23 np0005634017 podman[375550]: 2026-02-28 10:43:23.671418407 +0000 UTC m=+0.181270584 container attach 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:24 np0005634017 lvm[375648]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:43:24 np0005634017 lvm[375645]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:43:24 np0005634017 lvm[375648]: VG ceph_vg2 finished
Feb 28 05:43:24 np0005634017 lvm[375645]: VG ceph_vg1 finished
Feb 28 05:43:24 np0005634017 lvm[375646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:43:24 np0005634017 lvm[375646]: VG ceph_vg0 finished
Feb 28 05:43:24 np0005634017 agitated_snyder[375566]: {}
Feb 28 05:43:24 np0005634017 systemd[1]: libpod-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope: Deactivated successfully.
Feb 28 05:43:24 np0005634017 systemd[1]: libpod-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope: Consumed 1.375s CPU time.
Feb 28 05:43:24 np0005634017 podman[375651]: 2026-02-28 10:43:24.581715194 +0000 UTC m=+0.043480803 container died 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:43:24 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448-merged.mount: Deactivated successfully.
Feb 28 05:43:24 np0005634017 podman[375651]: 2026-02-28 10:43:24.631424871 +0000 UTC m=+0.093190420 container remove 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:43:24 np0005634017 systemd[1]: libpod-conmon-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope: Deactivated successfully.
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:43:24 np0005634017 nova_compute[243452]: 2026-02-28 10:43:24.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:43:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:43:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 28 05:43:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 28 05:43:28 np0005634017 nova_compute[243452]: 2026-02-28 10:43:28.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:43:29
Feb 28 05:43:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:43:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:43:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.mgr', 'images', '.rgw.root']
Feb 28 05:43:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:43:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 288 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 82 op/s
Feb 28 05:43:29 np0005634017 nova_compute[243452]: 2026-02-28 10:43:29.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:30Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:bf:44 10.100.0.4
Feb 28 05:43:30 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:30Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:bf:44 10.100.0.4
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:43:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:43:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Feb 28 05:43:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:43:33 np0005634017 nova_compute[243452]: 2026-02-28 10:43:33.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:34 np0005634017 nova_compute[243452]: 2026-02-28 10:43:34.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:43:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.586 243456 DEBUG nova.compute.manager [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.587 243456 DEBUG nova.compute.manager [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing instance network info cache due to event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.588 243456 DEBUG oslo_concurrency.lockutils [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.588 243456 DEBUG oslo_concurrency.lockutils [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.588 243456 DEBUG nova.network.neutron [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.684 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.684 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.685 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.685 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.686 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.688 243456 INFO nova.compute.manager [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Terminating instance#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.689 243456 DEBUG nova.compute.manager [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:43:38 np0005634017 kernel: tap95742053-49 (unregistering): left promiscuous mode
Feb 28 05:43:38 np0005634017 NetworkManager[49805]: <info>  [1772275418.7458] device (tap95742053-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:43:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:38Z|01549|binding|INFO|Releasing lport 95742053-49d5-4e84-9dde-4a6563ebf953 from this chassis (sb_readonly=0)
Feb 28 05:43:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:38Z|01550|binding|INFO|Setting lport 95742053-49d5-4e84-9dde-4a6563ebf953 down in Southbound
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:38Z|01551|binding|INFO|Removing iface tap95742053-49 ovn-installed in OVS
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.766 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], port_security=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fee3:bf44/64 2001:db8::f816:3eff:fee3:bf44/64', 'neutron:device_id': '4717a174-511e-4100-af09-e351eb2784a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=95742053-49d5-4e84-9dde-4a6563ebf953) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.767 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 95742053-49d5-4e84-9dde-4a6563ebf953 in datapath f5ccb81b-dba1-47db-8a77-320af312ccad unbound from our chassis#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.768 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5ccb81b-dba1-47db-8a77-320af312ccad#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.790 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[70b1e71e-820a-4680-b8ae-c576c0d137cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:38 np0005634017 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Deactivated successfully.
Feb 28 05:43:38 np0005634017 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Consumed 13.556s CPU time.
Feb 28 05:43:38 np0005634017 systemd-machined[209480]: Machine qemu-180-instance-00000093 terminated.
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.822 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8b077991-0aa3-4a41-a5f5-ec8ce412925d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.826 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9c29b6a8-6c2b-44e9-8899-9a92e51c4fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.857 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94e8fb4f-c39e-4733-b021-dc08c96105ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[21c199c1-303c-47a3-a2fe-b4eb2d2620ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375703, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.884 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[392293e2-2215-4f5a-862a-66e5be1becd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686793, 'tstamp': 686793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375704, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686795, 'tstamp': 686795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375704, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.886 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.893 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5ccb81b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5ccb81b-d0, col_values=(('external_ids', {'iface-id': '9495031a-3350-4b5d-b9e3-f7a6b929d37e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.926 243456 INFO nova.virt.libvirt.driver [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance destroyed successfully.#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.927 243456 DEBUG nova.objects.instance [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 4717a174-511e-4100-af09-e351eb2784a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.941 243456 DEBUG nova.virt.libvirt.vif [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-612443702',display_name='tempest-TestGettingAddress-server-612443702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-612443702',id=147,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:43:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7oml1apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:43:16Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=4717a174-511e-4100-af09-e351eb2784a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.942 243456 DEBUG nova.network.os_vif_util [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.943 243456 DEBUG nova.network.os_vif_util [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.944 243456 DEBUG os_vif [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.946 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95742053-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:38 np0005634017 nova_compute[243452]: 2026-02-28 10:43:38.952 243456 INFO os_vif [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49')#033[00m
Feb 28 05:43:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.244 243456 INFO nova.virt.libvirt.driver [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deleting instance files /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3_del#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.245 243456 INFO nova.virt.libvirt.driver [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deletion of /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3_del complete#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.259 243456 DEBUG nova.compute.manager [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-unplugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.259 243456 DEBUG oslo_concurrency.lockutils [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.260 243456 DEBUG oslo_concurrency.lockutils [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.260 243456 DEBUG oslo_concurrency.lockutils [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.261 243456 DEBUG nova.compute.manager [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] No waiting events found dispatching network-vif-unplugged-95742053-49d5-4e84-9dde-4a6563ebf953 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.261 243456 DEBUG nova.compute.manager [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-unplugged-95742053-49d5-4e84-9dde-4a6563ebf953 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.327 243456 INFO nova.compute.manager [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.328 243456 DEBUG oslo.service.loopingcall [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.328 243456 DEBUG nova.compute.manager [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.329 243456 DEBUG nova.network.neutron [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:43:39 np0005634017 nova_compute[243452]: 2026-02-28 10:43:39.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.188 243456 DEBUG nova.network.neutron [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.204 243456 INFO nova.compute.manager [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 0.88 seconds to deallocate network for instance.#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.244 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.245 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.313 243456 DEBUG oslo_concurrency.processutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.647 243456 DEBUG nova.network.neutron [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updated VIF entry in instance network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.648 243456 DEBUG nova.network.neutron [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.751 243456 DEBUG oslo_concurrency.lockutils [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:43:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:43:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691343177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.874 243456 DEBUG oslo_concurrency.processutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.881 243456 DEBUG nova.compute.provider_tree [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.897 243456 DEBUG nova.scheduler.client.report [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.919 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:40 np0005634017 nova_compute[243452]: 2026-02-28 10:43:40.972 243456 INFO nova.scheduler.client.report [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 4717a174-511e-4100-af09-e351eb2784a3#033[00m
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 269 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 1.0 MiB/s wr, 77 op/s
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.209 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.331 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.331 243456 DEBUG oslo_concurrency.lockutils [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG oslo_concurrency.lockutils [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG oslo_concurrency.lockutils [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] No waiting events found dispatching network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 WARNING nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received unexpected event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-deleted-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.333 243456 INFO nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Neutron deleted interface 95742053-49d5-4e84-9dde-4a6563ebf953; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.333 243456 DEBUG nova.network.neutron [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 28 05:43:41 np0005634017 nova_compute[243452]: 2026-02-28 10:43:41.334 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Detach interface failed, port_id=95742053-49d5-4e84-9dde-4a6563ebf953, reason: Instance 4717a174-511e-4100-af09-e351eb2784a3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001152170056080208 of space, bias 1.0, pg target 0.3456510168240624 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939238687445584 of space, bias 1.0, pg target 0.7481771606233675 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.285141514840672e-07 of space, bias 4.0, pg target 0.0008742169817808807 quantized to 16 (current 16)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:43:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.613 243456 DEBUG nova.compute.manager [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.613 243456 DEBUG nova.compute.manager [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing instance network info cache due to event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.614 243456 DEBUG oslo_concurrency.lockutils [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.614 243456 DEBUG oslo_concurrency.lockutils [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.614 243456 DEBUG nova.network.neutron [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.698 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.699 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.700 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.700 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.700 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.701 243456 INFO nova.compute.manager [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Terminating instance#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.703 243456 DEBUG nova.compute.manager [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:43:42 np0005634017 kernel: tap5b4f91cf-9f (unregistering): left promiscuous mode
Feb 28 05:43:42 np0005634017 NetworkManager[49805]: <info>  [1772275422.7539] device (tap5b4f91cf-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:43:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:42Z|01552|binding|INFO|Releasing lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde from this chassis (sb_readonly=0)
Feb 28 05:43:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:42Z|01553|binding|INFO|Setting lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde down in Southbound
Feb 28 05:43:42 np0005634017 ovn_controller[146846]: 2026-02-28T10:43:42Z|01554|binding|INFO|Removing iface tap5b4f91cf-9f ovn-installed in OVS
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.770 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], port_security=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe54:61bf/64 2001:db8::f816:3eff:fe54:61bf/64', 'neutron:device_id': '9a93fbef-9a9c-4d32-b200-626428537bfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5b4f91cf-9f52-4422-873f-f11cf0d49dde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.771 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.772 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5b4f91cf-9f52-4422-873f-f11cf0d49dde in datapath f5ccb81b-dba1-47db-8a77-320af312ccad unbound from our chassis#033[00m
Feb 28 05:43:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.773 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5ccb81b-dba1-47db-8a77-320af312ccad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:43:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.773 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c68e100c-f14d-4f5a-a74a-9e0f788fc034]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:42 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.774 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad namespace which is not needed anymore#033[00m
Feb 28 05:43:42 np0005634017 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Deactivated successfully.
Feb 28 05:43:42 np0005634017 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Consumed 13.861s CPU time.
Feb 28 05:43:42 np0005634017 systemd-machined[209480]: Machine qemu-179-instance-00000092 terminated.
Feb 28 05:43:42 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : haproxy version is 2.8.14-c23fe91
Feb 28 05:43:42 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : path to executable is /usr/sbin/haproxy
Feb 28 05:43:42 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [WARNING]  (374543) : Exiting Master process...
Feb 28 05:43:42 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [WARNING]  (374543) : Exiting Master process...
Feb 28 05:43:42 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [ALERT]    (374543) : Current worker (374545) exited with code 143 (Terminated)
Feb 28 05:43:42 np0005634017 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [WARNING]  (374543) : All workers exited. Exiting... (0)
Feb 28 05:43:42 np0005634017 systemd[1]: libpod-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e.scope: Deactivated successfully.
Feb 28 05:43:42 np0005634017 podman[375782]: 2026-02-28 10:43:42.91138616 +0000 UTC m=+0.047194097 container died 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:43:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e-userdata-shm.mount: Deactivated successfully.
Feb 28 05:43:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3621c3eb848faba7f94d263f84c1b37e270520b2c10993c5bb339bb0de8e7c9c-merged.mount: Deactivated successfully.
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.946 243456 INFO nova.virt.libvirt.driver [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance destroyed successfully.#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.947 243456 DEBUG nova.objects.instance [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:43:42 np0005634017 podman[375782]: 2026-02-28 10:43:42.952043821 +0000 UTC m=+0.087851758 container cleanup 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 05:43:42 np0005634017 systemd[1]: libpod-conmon-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e.scope: Deactivated successfully.
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.963 243456 DEBUG nova.virt.libvirt.vif [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-344642459',display_name='tempest-TestGettingAddress-server-344642459',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-344642459',id=146,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:42:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-r4xtm6sn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:42:42Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9a93fbef-9a9c-4d32-b200-626428537bfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.964 243456 DEBUG nova.network.os_vif_util [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.967 243456 DEBUG nova.network.os_vif_util [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.967 243456 DEBUG os_vif [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.970 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.970 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b4f91cf-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:43:42 np0005634017 nova_compute[243452]: 2026-02-28 10:43:42.978 243456 INFO os_vif [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f')#033[00m
Feb 28 05:43:43 np0005634017 podman[375820]: 2026-02-28 10:43:43.022303841 +0000 UTC m=+0.046076086 container remove 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d687a156-8f21-4a53-bbe7-cde1a7f93935]: (4, ('Sat Feb 28 10:43:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad (79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e)\n79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e\nSat Feb 28 10:43:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad (79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e)\n79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[766242ac-6e8b-479b-b6f7-d6e26d241ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:43 np0005634017 kernel: tapf5ccb81b-d0: left promiscuous mode
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd376c1-6798-4887-87bb-d0b8571c0b64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.123 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7c3c94-3f1a-4f29-939d-8df72b04f412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.125 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[582ce830-3d9a-4d90-8dcb-e302c592be8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.137 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9cca81-f2ae-49d9-a96f-d502f7ea3054]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686775, 'reachable_time': 38808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375853, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 systemd[1]: run-netns-ovnmeta\x2df5ccb81b\x2ddba1\x2d47db\x2d8a77\x2d320af312ccad.mount: Deactivated successfully.
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.141 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:43:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.141 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7e40dd15-03c0-46ad-849d-061e94cf2ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:43:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 61 KiB/s wr, 38 op/s
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.303 243456 INFO nova.virt.libvirt.driver [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deleting instance files /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa_del#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.305 243456 INFO nova.virt.libvirt.driver [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deletion of /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa_del complete#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.351 243456 INFO nova.compute.manager [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.352 243456 DEBUG oslo.service.loopingcall [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.353 243456 DEBUG nova.compute.manager [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.353 243456 DEBUG nova.network.neutron [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.597 243456 DEBUG nova.compute.manager [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-unplugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.598 243456 DEBUG oslo_concurrency.lockutils [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.598 243456 DEBUG oslo_concurrency.lockutils [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.599 243456 DEBUG oslo_concurrency.lockutils [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.599 243456 DEBUG nova.compute.manager [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] No waiting events found dispatching network-vif-unplugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.599 243456 DEBUG nova.compute.manager [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-unplugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:43:43 np0005634017 nova_compute[243452]: 2026-02-28 10:43:43.855 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.595 243456 DEBUG nova.network.neutron [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.614 243456 INFO nova.compute.manager [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 1.26 seconds to deallocate network for instance.#033[00m
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.660 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.661 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.691 243456 DEBUG nova.compute.manager [req-93e2964f-0f4f-418b-a397-abcbfc090db6 req-53bfe3c6-385f-4b2b-8385-c0a8a79f8d13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-deleted-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.713 243456 DEBUG oslo_concurrency.processutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:44 np0005634017 nova_compute[243452]: 2026-02-28 10:43:44.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 43 op/s
Feb 28 05:43:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:43:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/274630273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.323 243456 DEBUG oslo_concurrency.processutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.358 243456 DEBUG nova.compute.provider_tree [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.376 243456 DEBUG nova.scheduler.client.report [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.400 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.426 243456 INFO nova.scheduler.client.report [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 9a93fbef-9a9c-4d32-b200-626428537bfa#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.436 243456 DEBUG nova.network.neutron [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated VIF entry in instance network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.436 243456 DEBUG nova.network.neutron [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.459 243456 DEBUG oslo_concurrency.lockutils [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.502 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:43:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1437969117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:43:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:43:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1437969117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.687 243456 DEBUG nova.compute.manager [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.687 243456 DEBUG oslo_concurrency.lockutils [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.688 243456 DEBUG oslo_concurrency.lockutils [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.689 243456 DEBUG oslo_concurrency.lockutils [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.689 243456 DEBUG nova.compute.manager [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] No waiting events found dispatching network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:43:45 np0005634017 nova_compute[243452]: 2026-02-28 10:43:45.689 243456 WARNING nova.compute.manager [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received unexpected event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:43:46 np0005634017 nova_compute[243452]: 2026-02-28 10:43:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:46 np0005634017 nova_compute[243452]: 2026-02-28 10:43:46.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:46 np0005634017 nova_compute[243452]: 2026-02-28 10:43:46.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:43:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 189 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 16 KiB/s wr, 44 op/s
Feb 28 05:43:47 np0005634017 nova_compute[243452]: 2026-02-28 10:43:47.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:48 np0005634017 nova_compute[243452]: 2026-02-28 10:43:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:48 np0005634017 nova_compute[243452]: 2026-02-28 10:43:48.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 56 op/s
Feb 28 05:43:49 np0005634017 nova_compute[243452]: 2026-02-28 10:43:49.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:50 np0005634017 nova_compute[243452]: 2026-02-28 10:43:50.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:50 np0005634017 nova_compute[243452]: 2026-02-28 10:43:50.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:50 np0005634017 nova_compute[243452]: 2026-02-28 10:43:50.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Feb 28 05:43:51 np0005634017 nova_compute[243452]: 2026-02-28 10:43:51.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:51 np0005634017 nova_compute[243452]: 2026-02-28 10:43:51.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:51 np0005634017 nova_compute[243452]: 2026-02-28 10:43:51.330 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:51 np0005634017 nova_compute[243452]: 2026-02-28 10:43:51.331 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:43:51 np0005634017 nova_compute[243452]: 2026-02-28 10:43:51.331 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:43:51 np0005634017 nova_compute[243452]: 2026-02-28 10:43:51.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:43:52 np0005634017 podman[375879]: 2026-02-28 10:43:52.155470296 +0000 UTC m=+0.081227831 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 28 05:43:52 np0005634017 podman[375878]: 2026-02-28 10:43:52.214143197 +0000 UTC m=+0.146733186 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 05:43:52 np0005634017 nova_compute[243452]: 2026-02-28 10:43:52.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 33 op/s
Feb 28 05:43:53 np0005634017 nova_compute[243452]: 2026-02-28 10:43:53.925 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275418.9233723, 4717a174-511e-4100-af09-e351eb2784a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:43:53 np0005634017 nova_compute[243452]: 2026-02-28 10:43:53.925 243456 INFO nova.compute.manager [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:43:53 np0005634017 nova_compute[243452]: 2026-02-28 10:43:53.945 243456 DEBUG nova.compute.manager [None req-8eae81fd-3047-4389-a30c-ac964f09c1e3 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:43:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:54 np0005634017 nova_compute[243452]: 2026-02-28 10:43:54.816 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 05:43:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 13 op/s
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.342 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:57.883 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:43:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.941 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275422.939746, 9a93fbef-9a9c-4d32-b200-626428537bfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.942 243456 INFO nova.compute.manager [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:43:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:43:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441247053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.965 243456 DEBUG nova.compute.manager [None req-99ae7032-add8-45ad-9a9e-9e491f16ce2d - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.970 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:57 np0005634017 nova_compute[243452]: 2026-02-28 10:43:57.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.173 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.174 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.98741508834064GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.174 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.174 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.294 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.295 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.315 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:43:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:43:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425035802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.887 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.896 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.914 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.940 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:43:58 np0005634017 nova_compute[243452]: 2026-02-28 10:43:58.941 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:43:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:43:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Feb 28 05:43:59 np0005634017 nova_compute[243452]: 2026-02-28 10:43:59.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:44:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:44:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:02 np0005634017 nova_compute[243452]: 2026-02-28 10:44:02.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:04 np0005634017 nova_compute[243452]: 2026-02-28 10:44:04.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.345 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2 2001:db8::f816:3eff:fe9f:3cdf'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe9f:3cdf/64', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1a8a9fba-10a8-48af-ae28-f3be3422056a) old=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:44:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.347 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1a8a9fba-10a8-48af-ae28-f3be3422056a in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c updated#033[00m
Feb 28 05:44:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.349 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f6c6c-a634-4d08-98d7-fde34f18e37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:44:07 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.350 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8fae9190-9ad1-4e06-81c6-f77e7f9cff40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:07 np0005634017 nova_compute[243452]: 2026-02-28 10:44:07.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:09 np0005634017 nova_compute[243452]: 2026-02-28 10:44:09.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.742 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2 2001:db8:0:1:f816:3eff:fe9f:3cdf 2001:db8::f816:3eff:fe9f:3cdf'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe9f:3cdf/64 2001:db8::f816:3eff:fe9f:3cdf/64', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1a8a9fba-10a8-48af-ae28-f3be3422056a) old=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2 2001:db8::f816:3eff:fe9f:3cdf'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe9f:3cdf/64', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:44:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.744 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1a8a9fba-10a8-48af-ae28-f3be3422056a in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c updated#033[00m
Feb 28 05:44:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.746 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f6c6c-a634-4d08-98d7-fde34f18e37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:44:10 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06e8597f-f079-4a32-b4f2-b55edeef9612]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:12 np0005634017 nova_compute[243452]: 2026-02-28 10:44:12.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:14 np0005634017 nova_compute[243452]: 2026-02-28 10:44:14.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.071 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.072 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.094 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.187 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.188 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.199 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.200 243456 INFO nova.compute.claims [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:44:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.317 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:44:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954323466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.908 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.917 243456 DEBUG nova.compute.provider_tree [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.939 243456 DEBUG nova.scheduler.client.report [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.966 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:15 np0005634017 nova_compute[243452]: 2026-02-28 10:44:15.967 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.028 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.029 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.054 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.093 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.230 243456 DEBUG nova.policy [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.273 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.275 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.275 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Creating image(s)#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.309 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.345 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.380 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.385 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.467 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.468 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.469 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.469 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.494 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.498 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:44:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 43K writes, 173K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 43K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5287 writes, 21K keys, 5287 commit groups, 1.0 writes per commit group, ingest: 24.46 MB, 0.04 MB/s#012Interval WAL: 5286 writes, 2100 syncs, 2.52 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.771 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.862 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:44:16 np0005634017 nova_compute[243452]: 2026-02-28 10:44:16.982 243456 DEBUG nova.objects.instance [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:44:17 np0005634017 nova_compute[243452]: 2026-02-28 10:44:17.014 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:44:17 np0005634017 nova_compute[243452]: 2026-02-28 10:44:17.015 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Ensure instance console log exists: /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:44:17 np0005634017 nova_compute[243452]: 2026-02-28 10:44:17.016 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:17 np0005634017 nova_compute[243452]: 2026-02-28 10:44:17.016 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:17 np0005634017 nova_compute[243452]: 2026-02-28 10:44:17.017 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:44:17 np0005634017 nova_compute[243452]: 2026-02-28 10:44:17.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:18 np0005634017 nova_compute[243452]: 2026-02-28 10:44:18.184 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Successfully created port: e109a5e0-a347-4744-84e4-96a126379d1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.017 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Successfully updated port: e109a5e0-a347-4744-84e4-96a126379d1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:44:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.058 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.059 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.059 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.122 243456 DEBUG nova.compute.manager [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.122 243456 DEBUG nova.compute.manager [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing instance network info cache due to event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.123 243456 DEBUG oslo_concurrency.lockutils [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.201 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:44:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 166 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 670 KiB/s wr, 12 op/s
Feb 28 05:44:19 np0005634017 nova_compute[243452]: 2026-02-28 10:44:19.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.693 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.740 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.742 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance network_info: |[{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.743 243456 DEBUG oslo_concurrency.lockutils [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.744 243456 DEBUG nova.network.neutron [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.751 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start _get_guest_xml network_info=[{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.758 243456 WARNING nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.763 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.765 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.774 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.775 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.776 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.776 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.777 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.778 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.779 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.779 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.780 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.781 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.782 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.782 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.783 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.784 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:44:20 np0005634017 nova_compute[243452]: 2026-02-28 10:44:20.789 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:44:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:44:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592900751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.407 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.432 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.437 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:44:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997131405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.942 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.944 243456 DEBUG nova.virt.libvirt.vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-82849746',display_name='tempest-TestGettingAddress-server-82849746',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-82849746',id=148,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i18076q2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:16Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=c45d561e-3581-4ca2-a8b7-b56cc6ce5b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.944 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.946 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.949 243456 DEBUG nova.objects.instance [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.976 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <uuid>c45d561e-3581-4ca2-a8b7-b56cc6ce5b43</uuid>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <name>instance-00000094</name>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-82849746</nova:name>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:44:20</nova:creationTime>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <nova:port uuid="e109a5e0-a347-4744-84e4-96a126379d1c">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe7a:4488" ipVersion="6"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7a:4488" ipVersion="6"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <entry name="serial">c45d561e-3581-4ca2-a8b7-b56cc6ce5b43</entry>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <entry name="uuid">c45d561e-3581-4ca2-a8b7-b56cc6ce5b43</entry>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:7a:44:88"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <target dev="tape109a5e0-a3"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/console.log" append="off"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:44:21 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:44:21 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:44:21 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:44:21 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.977 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Preparing to wait for external event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.977 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.977 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.978 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.978 243456 DEBUG nova.virt.libvirt.vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-82849746',display_name='tempest-TestGettingAddress-server-82849746',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-82849746',id=148,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i18076q2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:16Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=c45d561e-3581-4ca2-a8b7-b56cc6ce5b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.979 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.980 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.980 243456 DEBUG os_vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.982 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.983 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.988 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape109a5e0-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.989 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape109a5e0-a3, col_values=(('external_ids', {'iface-id': 'e109a5e0-a347-4744-84e4-96a126379d1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:44:88', 'vm-uuid': 'c45d561e-3581-4ca2-a8b7-b56cc6ce5b43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:21 np0005634017 NetworkManager[49805]: <info>  [1772275461.9928] manager: (tape109a5e0-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.996 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:44:21 np0005634017 nova_compute[243452]: 2026-02-28 10:44:21.997 243456 INFO os_vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3')#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.052 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.053 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.053 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:7a:44:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.054 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Using config drive#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.085 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.398 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Creating config drive at /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.403 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y90bd6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.547 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y90bd6n" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.584 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.589 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.712 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.713 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deleting local config drive /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config because it was imported into RBD.#033[00m
Feb 28 05:44:22 np0005634017 kernel: tape109a5e0-a3: entered promiscuous mode
Feb 28 05:44:22 np0005634017 NetworkManager[49805]: <info>  [1772275462.7684] manager: (tape109a5e0-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Feb 28 05:44:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:22Z|01555|binding|INFO|Claiming lport e109a5e0-a347-4744-84e4-96a126379d1c for this chassis.
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.769 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:22Z|01556|binding|INFO|e109a5e0-a347-4744-84e4-96a126379d1c: Claiming fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.790 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], port_security=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fe7a:4488/64 2001:db8::f816:3eff:fe7a:4488/64', 'neutron:device_id': 'c45d561e-3581-4ca2-a8b7-b56cc6ce5b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e109a5e0-a347-4744-84e4-96a126379d1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.792 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e109a5e0-a347-4744-84e4-96a126379d1c in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c bound to our chassis#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.793 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b90f6c6c-a634-4d08-98d7-fde34f18e37c#033[00m
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.796 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:22Z|01557|binding|INFO|Setting lport e109a5e0-a347-4744-84e4-96a126379d1c ovn-installed in OVS
Feb 28 05:44:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:22Z|01558|binding|INFO|Setting lport e109a5e0-a347-4744-84e4-96a126379d1c up in Southbound
Feb 28 05:44:22 np0005634017 nova_compute[243452]: 2026-02-28 10:44:22.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.805 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[997c8e9a-95fe-48b6-a9fd-16e86dbcb713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.806 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb90f6c6c-a1 in ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.809 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb90f6c6c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce06796-15a2-4db6-b6ba-340c5ced81d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.810 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f90e4d-cd3f-41bf-8c5c-67874961c05c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.822 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5277160f-f25f-4b4b-8d5f-df16200b3a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 systemd-machined[209480]: New machine qemu-181-instance-00000094.
Feb 28 05:44:22 np0005634017 systemd-udevd[376313]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:44:22 np0005634017 systemd[1]: Started Virtual Machine qemu-181-instance-00000094.
Feb 28 05:44:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:44:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.6 total, 600.0 interval#012Cumulative writes: 46K writes, 180K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5448 writes, 20K keys, 5448 commit groups, 1.0 writes per commit group, ingest: 23.91 MB, 0.04 MB/s#012Interval WAL: 5448 writes, 2224 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.835 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcf8fe1-0bcc-43bd-a31d-222b5ce088ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 NetworkManager[49805]: <info>  [1772275462.8376] device (tape109a5e0-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:44:22 np0005634017 NetworkManager[49805]: <info>  [1772275462.8381] device (tape109a5e0-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.866 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6442bac3-3e7d-4370-9ab5-375b523ce0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 NetworkManager[49805]: <info>  [1772275462.8747] manager: (tapb90f6c6c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/649)
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bcad5985-155b-4be4-b0cc-61a920b183d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 podman[376290]: 2026-02-28 10:44:22.874786829 +0000 UTC m=+0.077411933 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, 
tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 05:44:22 np0005634017 podman[376291]: 2026-02-28 10:44:22.882284521 +0000 UTC m=+0.084742281 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.902 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdb9005-d893-4735-9b4f-082225af7914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.906 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17e187b8-a1cf-436c-9f8b-2d51bdecddc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 NetworkManager[49805]: <info>  [1772275462.9258] device (tapb90f6c6c-a0): carrier: link connected
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.929 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1fecc492-dd9e-447a-b8ff-0e06cd7e1ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.945 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63641463-8878-45d3-89db-366dfa1a5bdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376371, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.958 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5554bfa-7793-421a-bb26-d468c5c86519]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:3cdf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697021, 'tstamp': 697021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376372, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4392112-2348-4177-8c12-9ad479279758]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376373, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2eff099b-a40d-4560-955c-3eb719db8b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06627395-5acc-41c2-a819-f637d59844a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.046 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb90f6c6c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:23 np0005634017 NetworkManager[49805]: <info>  [1772275463.0495] manager: (tapb90f6c6c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Feb 28 05:44:23 np0005634017 kernel: tapb90f6c6c-a0: entered promiscuous mode
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.054 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb90f6c6c-a0, col_values=(('external_ids', {'iface-id': '1a8a9fba-10a8-48af-ae28-f3be3422056a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:23 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:23Z|01559|binding|INFO|Releasing lport 1a8a9fba-10a8-48af-ae28-f3be3422056a from this chassis (sb_readonly=0)
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.059 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b90f6c6c-a634-4d08-98d7-fde34f18e37c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b90f6c6c-a634-4d08-98d7-fde34f18e37c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.060 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93557ebe-a0bb-4bdd-a699-19fc72cde917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.061 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/b90f6c6c-a634-4d08-98d7-fde34f18e37c.pid.haproxy
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.062 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'env', 'PROCESS_TAG=haproxy-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b90f6c6c-a634-4d08-98d7-fde34f18e37c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.198 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275463.1977963, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.199 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Started (Lifecycle Event)#033[00m
Feb 28 05:44:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.230 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.242 243456 DEBUG nova.network.neutron [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated VIF entry in instance network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.242 243456 DEBUG nova.network.neutron [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.245 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275463.1981812, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.245 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.272 243456 DEBUG oslo_concurrency.lockutils [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.284 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.287 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.307 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:44:23 np0005634017 podman[376446]: 2026-02-28 10:44:23.421246922 +0000 UTC m=+0.065250848 container create 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:44:23 np0005634017 systemd[1]: Started libpod-conmon-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129.scope.
Feb 28 05:44:23 np0005634017 podman[376446]: 2026-02-28 10:44:23.383874734 +0000 UTC m=+0.027878730 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:44:23 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:23 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3be6acdeb1e35fcdffcfd7b1606173c8b51c2d19d95f4b060892ed6a5761cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:23 np0005634017 podman[376446]: 2026-02-28 10:44:23.526013259 +0000 UTC m=+0.170017195 container init 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:44:23 np0005634017 podman[376446]: 2026-02-28 10:44:23.536110925 +0000 UTC m=+0.180114831 container start 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:44:23 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : New worker (376468) forked
Feb 28 05:44:23 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : Loading success.
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.690 243456 DEBUG nova.compute.manager [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.692 243456 DEBUG oslo_concurrency.lockutils [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.693 243456 DEBUG oslo_concurrency.lockutils [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.693 243456 DEBUG oslo_concurrency.lockutils [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.694 243456 DEBUG nova.compute.manager [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Processing event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.696 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.702 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275463.7017825, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.702 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.705 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.710 243456 INFO nova.virt.libvirt.driver [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance spawned successfully.#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.711 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.726 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.735 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.740 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.740 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.741 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.741 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.742 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.742 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.772 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.819 243456 INFO nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 7.55 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.820 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.849 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:44:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.852 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.895 243456 INFO nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 8.75 seconds to build instance.#033[00m
Feb 28 05:44:23 np0005634017 nova_compute[243452]: 2026-02-28 10:44:23.937 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:24 np0005634017 nova_compute[243452]: 2026-02-28 10:44:24.887 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:44:25 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:44:25 np0005634017 nova_compute[243452]: 2026-02-28 10:44:25.780 243456 DEBUG nova.compute.manager [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:25 np0005634017 nova_compute[243452]: 2026-02-28 10:44:25.781 243456 DEBUG oslo_concurrency.lockutils [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:25 np0005634017 nova_compute[243452]: 2026-02-28 10:44:25.782 243456 DEBUG oslo_concurrency.lockutils [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:25 np0005634017 nova_compute[243452]: 2026-02-28 10:44:25.783 243456 DEBUG oslo_concurrency.lockutils [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:25 np0005634017 nova_compute[243452]: 2026-02-28 10:44:25.783 243456 DEBUG nova.compute.manager [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] No waiting events found dispatching network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:44:25 np0005634017 nova_compute[243452]: 2026-02-28 10:44:25.783 243456 WARNING nova.compute.manager [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received unexpected event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c for instance with vm_state active and task_state None.#033[00m
Feb 28 05:44:25 np0005634017 podman[376620]: 2026-02-28 10:44:25.995596058 +0000 UTC m=+0.047441804 container create e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:44:26 np0005634017 systemd[1]: Started libpod-conmon-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope.
Feb 28 05:44:26 np0005634017 podman[376620]: 2026-02-28 10:44:25.969671164 +0000 UTC m=+0.021516890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:44:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:26 np0005634017 podman[376620]: 2026-02-28 10:44:26.086984916 +0000 UTC m=+0.138830642 container init e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:44:26 np0005634017 podman[376620]: 2026-02-28 10:44:26.094875109 +0000 UTC m=+0.146720825 container start e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 05:44:26 np0005634017 podman[376620]: 2026-02-28 10:44:26.098246115 +0000 UTC m=+0.150091861 container attach e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:44:26 np0005634017 tender_lichterman[376637]: 167 167
Feb 28 05:44:26 np0005634017 systemd[1]: libpod-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope: Deactivated successfully.
Feb 28 05:44:26 np0005634017 conmon[376637]: conmon e2d546d6efc6b6b709b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope/container/memory.events
Feb 28 05:44:26 np0005634017 podman[376620]: 2026-02-28 10:44:26.100510749 +0000 UTC m=+0.152356475 container died e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:44:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-044315fac50dbe83a6562f0bafe791c7287df28ec14ee888fdaca464e7f9a7b5-merged.mount: Deactivated successfully.
Feb 28 05:44:26 np0005634017 podman[376620]: 2026-02-28 10:44:26.142535959 +0000 UTC m=+0.194381665 container remove e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:44:26 np0005634017 systemd[1]: libpod-conmon-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope: Deactivated successfully.
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.27183648 +0000 UTC m=+0.039430917 container create 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:44:26 np0005634017 systemd[1]: Started libpod-conmon-8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201.scope.
Feb 28 05:44:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.251238337 +0000 UTC m=+0.018832794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:44:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:26 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.370133052 +0000 UTC m=+0.137727489 container init 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.377308596 +0000 UTC m=+0.144903033 container start 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.381399982 +0000 UTC m=+0.148994439 container attach 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:44:26 np0005634017 reverent_hugle[376678]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:44:26 np0005634017 reverent_hugle[376678]: --> All data devices are unavailable
Feb 28 05:44:26 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:26.855 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:26 np0005634017 systemd[1]: libpod-8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201.scope: Deactivated successfully.
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.905504852 +0000 UTC m=+0.673099289 container died 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:44:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6-merged.mount: Deactivated successfully.
Feb 28 05:44:26 np0005634017 podman[376661]: 2026-02-28 10:44:26.946977577 +0000 UTC m=+0.714572014 container remove 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:44:26 np0005634017 systemd[1]: libpod-conmon-8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201.scope: Deactivated successfully.
Feb 28 05:44:26 np0005634017 nova_compute[243452]: 2026-02-28 10:44:26.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Feb 28 05:44:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:27Z|01560|binding|INFO|Releasing lport 1a8a9fba-10a8-48af-ae28-f3be3422056a from this chassis (sb_readonly=0)
Feb 28 05:44:27 np0005634017 NetworkManager[49805]: <info>  [1772275467.3167] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Feb 28 05:44:27 np0005634017 NetworkManager[49805]: <info>  [1772275467.3189] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:27 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:27Z|01561|binding|INFO|Releasing lport 1a8a9fba-10a8-48af-ae28-f3be3422056a from this chassis (sb_readonly=0)
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.344 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.418288872 +0000 UTC m=+0.056776588 container create e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:44:27 np0005634017 systemd[1]: Started libpod-conmon-e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb.scope.
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.390185086 +0000 UTC m=+0.028672842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:44:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.51109867 +0000 UTC m=+0.149586346 container init e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.518178231 +0000 UTC m=+0.156665917 container start e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.52167867 +0000 UTC m=+0.160166356 container attach e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:44:27 np0005634017 jolly_clarke[376788]: 167 167
Feb 28 05:44:27 np0005634017 systemd[1]: libpod-e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb.scope: Deactivated successfully.
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.523329477 +0000 UTC m=+0.161817183 container died e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:44:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e9dd3166ea46e5377c405198e72c6cf7df57b4c877c44cc8b3f2060210885c02-merged.mount: Deactivated successfully.
Feb 28 05:44:27 np0005634017 podman[376772]: 2026-02-28 10:44:27.576020329 +0000 UTC m=+0.214508045 container remove e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:44:27 np0005634017 systemd[1]: libpod-conmon-e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb.scope: Deactivated successfully.
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.666 243456 DEBUG nova.compute.manager [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.668 243456 DEBUG nova.compute.manager [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing instance network info cache due to event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.668 243456 DEBUG oslo_concurrency.lockutils [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.669 243456 DEBUG oslo_concurrency.lockutils [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:27 np0005634017 nova_compute[243452]: 2026-02-28 10:44:27.669 243456 DEBUG nova.network.neutron [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:44:27 np0005634017 podman[376811]: 2026-02-28 10:44:27.771208466 +0000 UTC m=+0.041368213 container create 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:44:27 np0005634017 systemd[1]: Started libpod-conmon-1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87.scope.
Feb 28 05:44:27 np0005634017 podman[376811]: 2026-02-28 10:44:27.753124054 +0000 UTC m=+0.023283801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:44:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:27 np0005634017 podman[376811]: 2026-02-28 10:44:27.897228144 +0000 UTC m=+0.167387931 container init 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:44:27 np0005634017 podman[376811]: 2026-02-28 10:44:27.903564823 +0000 UTC m=+0.173724540 container start 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:44:27 np0005634017 podman[376811]: 2026-02-28 10:44:27.906882197 +0000 UTC m=+0.177041994 container attach 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]: {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:    "0": [
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:        {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "devices": [
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "/dev/loop3"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            ],
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_name": "ceph_lv0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_size": "21470642176",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "name": "ceph_lv0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "tags": {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cluster_name": "ceph",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.crush_device_class": "",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.encrypted": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.objectstore": "bluestore",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osd_id": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.type": "block",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.vdo": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.with_tpm": "0"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            },
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "type": "block",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "vg_name": "ceph_vg0"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:        }
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:    ],
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:    "1": [
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:        {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "devices": [
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "/dev/loop4"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            ],
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_name": "ceph_lv1",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_size": "21470642176",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "name": "ceph_lv1",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "tags": {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cluster_name": "ceph",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.crush_device_class": "",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.encrypted": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.objectstore": "bluestore",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osd_id": "1",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.type": "block",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.vdo": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.with_tpm": "0"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            },
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "type": "block",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "vg_name": "ceph_vg1"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:        }
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:    ],
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:    "2": [
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:        {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "devices": [
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "/dev/loop5"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            ],
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_name": "ceph_lv2",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_size": "21470642176",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "name": "ceph_lv2",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "tags": {
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.cluster_name": "ceph",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.crush_device_class": "",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.encrypted": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.objectstore": "bluestore",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osd_id": "2",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.type": "block",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.vdo": "0",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:                "ceph.with_tpm": "0"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            },
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "type": "block",
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:            "vg_name": "ceph_vg2"
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:        }
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]:    ]
Feb 28 05:44:28 np0005634017 suspicious_saha[376828]: }
Feb 28 05:44:28 np0005634017 systemd[1]: libpod-1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87.scope: Deactivated successfully.
Feb 28 05:44:28 np0005634017 podman[376811]: 2026-02-28 10:44:28.248371137 +0000 UTC m=+0.518530944 container died 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 05:44:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59-merged.mount: Deactivated successfully.
Feb 28 05:44:28 np0005634017 podman[376811]: 2026-02-28 10:44:28.302461579 +0000 UTC m=+0.572621326 container remove 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:44:28 np0005634017 systemd[1]: libpod-conmon-1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87.scope: Deactivated successfully.
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.811570585 +0000 UTC m=+0.048871855 container create 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:44:28 np0005634017 systemd[1]: Started libpod-conmon-43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54.scope.
Feb 28 05:44:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.786943367 +0000 UTC m=+0.024244687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.887245667 +0000 UTC m=+0.124546917 container init 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:44:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:44:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 36K writes, 142K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.71 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3550 writes, 14K keys, 3550 commit groups, 1.0 writes per commit group, ingest: 18.05 MB, 0.03 MB/s#012Interval WAL: 3550 writes, 1394 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.897055475 +0000 UTC m=+0.134356745 container start 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:44:28 np0005634017 priceless_rosalind[376929]: 167 167
Feb 28 05:44:28 np0005634017 systemd[1]: libpod-43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54.scope: Deactivated successfully.
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.901632285 +0000 UTC m=+0.138933535 container attach 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.902396306 +0000 UTC m=+0.139697546 container died 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:44:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-63c9c0a10b1aa4e04857c24f3d7d692e43181c4da3b8d87dc2f6d9cafd35dec1-merged.mount: Deactivated successfully.
Feb 28 05:44:28 np0005634017 podman[376913]: 2026-02-28 10:44:28.934732892 +0000 UTC m=+0.172034122 container remove 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:44:28 np0005634017 systemd[1]: libpod-conmon-43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54.scope: Deactivated successfully.
Feb 28 05:44:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:29 np0005634017 podman[376953]: 2026-02-28 10:44:29.123558389 +0000 UTC m=+0.065675401 container create 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:44:29 np0005634017 podman[376953]: 2026-02-28 10:44:29.088368962 +0000 UTC m=+0.030485964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:44:29 np0005634017 systemd[1]: Started libpod-conmon-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope.
Feb 28 05:44:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:44:29
Feb 28 05:44:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:44:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:44:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Feb 28 05:44:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:44:29 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:44:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 28 05:44:29 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:44:29 np0005634017 podman[376953]: 2026-02-28 10:44:29.238209965 +0000 UTC m=+0.180326977 container init 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:44:29 np0005634017 podman[376953]: 2026-02-28 10:44:29.244061501 +0000 UTC m=+0.186178463 container start 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:44:29 np0005634017 podman[376953]: 2026-02-28 10:44:29.247381885 +0000 UTC m=+0.189498837 container attach 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:44:29 np0005634017 nova_compute[243452]: 2026-02-28 10:44:29.330 243456 DEBUG nova.network.neutron [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated VIF entry in instance network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:44:29 np0005634017 nova_compute[243452]: 2026-02-28 10:44:29.334 243456 DEBUG nova.network.neutron [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:29 np0005634017 nova_compute[243452]: 2026-02-28 10:44:29.397 243456 DEBUG oslo_concurrency.lockutils [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:29 np0005634017 nova_compute[243452]: 2026-02-28 10:44:29.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:29 np0005634017 lvm[377049]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:44:30 np0005634017 lvm[377049]: VG ceph_vg1 finished
Feb 28 05:44:30 np0005634017 lvm[377047]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:44:30 np0005634017 lvm[377047]: VG ceph_vg0 finished
Feb 28 05:44:30 np0005634017 lvm[377052]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:44:30 np0005634017 lvm[377052]: VG ceph_vg2 finished
Feb 28 05:44:30 np0005634017 lvm[377055]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:44:30 np0005634017 lvm[377055]: VG ceph_vg0 finished
Feb 28 05:44:30 np0005634017 dazzling_taussig[376970]: {}
Feb 28 05:44:30 np0005634017 systemd[1]: libpod-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope: Deactivated successfully.
Feb 28 05:44:30 np0005634017 podman[376953]: 2026-02-28 10:44:30.146290018 +0000 UTC m=+1.088406950 container died 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb 28 05:44:30 np0005634017 systemd[1]: libpod-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope: Consumed 1.234s CPU time.
Feb 28 05:44:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe-merged.mount: Deactivated successfully.
Feb 28 05:44:30 np0005634017 podman[376953]: 2026-02-28 10:44:30.227020754 +0000 UTC m=+1.169137676 container remove 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:44:30 np0005634017 systemd[1]: libpod-conmon-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope: Deactivated successfully.
Feb 28 05:44:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:44:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:44:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:44:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:44:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:44:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:44:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:44:31 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 05:44:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Feb 28 05:44:31 np0005634017 nova_compute[243452]: 2026-02-28 10:44:31.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:44:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:34 np0005634017 nova_compute[243452]: 2026-02-28 10:44:34.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 205 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 700 KiB/s wr, 85 op/s
Feb 28 05:44:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:35Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:44:88 10.100.0.5
Feb 28 05:44:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:35Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:44:88 10.100.0.5
Feb 28 05:44:37 np0005634017 nova_compute[243452]: 2026-02-28 10:44:37.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Feb 28 05:44:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 05:44:39 np0005634017 nova_compute[243452]: 2026-02-28 10:44:39.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007711872734998348 of space, bias 1.0, pg target 0.23135618204995043 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939207792546033 of space, bias 1.0, pg target 0.748176233776381 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.270082191943719e-07 of space, bias 4.0, pg target 0.0008724098630332462 quantized to 16 (current 16)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:44:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:44:42 np0005634017 nova_compute[243452]: 2026-02-28 10:44:42.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Feb 28 05:44:43 np0005634017 nova_compute[243452]: 2026-02-28 10:44:43.942 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:44 np0005634017 nova_compute[243452]: 2026-02-28 10:44:44.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 28 05:44:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:44:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010870092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:44:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:44:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010870092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.678 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.679 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.701 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.798 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.799 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.809 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.810 243456 INFO nova.compute.claims [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:44:45 np0005634017 nova_compute[243452]: 2026-02-28 10:44:45.940 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:44:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:44:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129897488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.508 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.515 243456 DEBUG nova.compute.provider_tree [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.535 243456 DEBUG nova.scheduler.client.report [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.563 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.564 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.623 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.624 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.644 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.669 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.769 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.771 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.772 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Creating image(s)#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.805 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.837 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.870 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.875 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.914 243456 DEBUG nova.policy [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.968 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.969 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.970 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:46 np0005634017 nova_compute[243452]: 2026-02-28 10:44:46.971 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.017 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.023 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.274 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.346 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.437 243456 DEBUG nova.objects.instance [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.453 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.453 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Ensure instance console log exists: /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.454 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.455 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:47 np0005634017 nova_compute[243452]: 2026-02-28 10:44:47.455 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:48 np0005634017 nova_compute[243452]: 2026-02-28 10:44:48.385 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Successfully created port: 138ee9e6-821d-4256-8858-52f9e81f4c8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.051493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489051588, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1847, "num_deletes": 256, "total_data_size": 2965887, "memory_usage": 3010448, "flush_reason": "Manual Compaction"}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489063501, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2913343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49717, "largest_seqno": 51563, "table_properties": {"data_size": 2904958, "index_size": 5135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17169, "raw_average_key_size": 19, "raw_value_size": 2888141, "raw_average_value_size": 3335, "num_data_blocks": 228, "num_entries": 866, "num_filter_entries": 866, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275294, "oldest_key_time": 1772275294, "file_creation_time": 1772275489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 12051 microseconds, and 6043 cpu microseconds.
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.063563) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2913343 bytes OK
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.063586) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.065888) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.065906) EVENT_LOG_v1 {"time_micros": 1772275489065901, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.065932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2958006, prev total WAL file size 2958006, number of live WAL files 2.
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.066797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303035' seq:72057594037927935, type:22 .. '6C6F676D0032323537' seq:0, type:0; will stop at (end)
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2845KB)], [116(8173KB)]
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489066853, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11282962, "oldest_snapshot_seqno": -1}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7353 keys, 11166748 bytes, temperature: kUnknown
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489122146, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11166748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11116875, "index_size": 30369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 190754, "raw_average_key_size": 25, "raw_value_size": 10985106, "raw_average_value_size": 1493, "num_data_blocks": 1196, "num_entries": 7353, "num_filter_entries": 7353, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.122720) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11166748 bytes
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.124037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 201.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.7) write-amplify(3.8) OK, records in: 7877, records dropped: 524 output_compression: NoCompression
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.124059) EVENT_LOG_v1 {"time_micros": 1772275489124048, "job": 70, "event": "compaction_finished", "compaction_time_micros": 55378, "compaction_time_cpu_micros": 37725, "output_level": 6, "num_output_files": 1, "total_output_size": 11166748, "num_input_records": 7877, "num_output_records": 7353, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489124644, "job": 70, "event": "table_file_deletion", "file_number": 118}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489125622, "job": 70, "event": "table_file_deletion", "file_number": 116}
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.066622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:44:49 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.177 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Successfully updated port: 138ee9e6-821d-4256-8858-52f9e81f4c8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.191 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.191 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.191 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:44:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 247 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.292 243456 DEBUG nova.compute.manager [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.292 243456 DEBUG nova.compute.manager [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing instance network info cache due to event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.292 243456 DEBUG oslo_concurrency.lockutils [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:49 np0005634017 nova_compute[243452]: 2026-02-28 10:44:49.326 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.004 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.402 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.436 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.437 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance network_info: |[{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.437 243456 DEBUG oslo_concurrency.lockutils [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.437 243456 DEBUG nova.network.neutron [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.440 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start _get_guest_xml network_info=[{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.445 243456 WARNING nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.454 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.454 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.459 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.460 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.460 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.460 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:44:50 np0005634017 nova_compute[243452]: 2026-02-28 10:44:50.465 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:44:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776011761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.044 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.073 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.079 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:44:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480743214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.987 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.908s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.989 243456 DEBUG nova.virt.libvirt.vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1587775994',display_name='tempest-TestGettingAddress-server-1587775994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1587775994',id=149,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i9i85993',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:46Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=795552e4-ec78-4c39-a0a9-60c83ca0f0a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.990 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.991 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:44:51 np0005634017 nova_compute[243452]: 2026-02-28 10:44:51.992 243456 DEBUG nova.objects.instance [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.005 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <uuid>795552e4-ec78-4c39-a0a9-60c83ca0f0a2</uuid>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <name>instance-00000095</name>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1587775994</nova:name>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:44:50</nova:creationTime>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <nova:port uuid="138ee9e6-821d-4256-8858-52f9e81f4c8b">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe4a:d253" ipVersion="6"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4a:d253" ipVersion="6"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <entry name="serial">795552e4-ec78-4c39-a0a9-60c83ca0f0a2</entry>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <entry name="uuid">795552e4-ec78-4c39-a0a9-60c83ca0f0a2</entry>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:4a:d2:53"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <target dev="tap138ee9e6-82"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/console.log" append="off"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:44:52 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:44:52 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:44:52 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:44:52 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Preparing to wait for external event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.007 243456 DEBUG nova.virt.libvirt.vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1587775994',display_name='tempest-TestGettingAddress-server-1587775994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1587775994',id=149,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i9i85993',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:46Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=795552e4-ec78-4c39-a0a9-60c83ca0f0a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.007 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.010 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.011 243456 DEBUG os_vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.011 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.012 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.012 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.016 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap138ee9e6-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.017 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap138ee9e6-82, col_values=(('external_ids', {'iface-id': '138ee9e6-821d-4256-8858-52f9e81f4c8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:d2:53', 'vm-uuid': '795552e4-ec78-4c39-a0a9-60c83ca0f0a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.018 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 NetworkManager[49805]: <info>  [1772275492.0197] manager: (tap138ee9e6-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.027 243456 INFO os_vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82')#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.076 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.076 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.076 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:4a:d2:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.077 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Using config drive#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.103 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.504 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Creating config drive at /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.508 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcv5opa6m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.552 243456 DEBUG nova.network.neutron [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updated VIF entry in instance network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.554 243456 DEBUG nova.network.neutron [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.581 243456 DEBUG oslo_concurrency.lockutils [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.655 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcv5opa6m" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.689 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.693 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.840 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.841 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deleting local config drive /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config because it was imported into RBD.#033[00m
Feb 28 05:44:52 np0005634017 kernel: tap138ee9e6-82: entered promiscuous mode
Feb 28 05:44:52 np0005634017 NetworkManager[49805]: <info>  [1772275492.8925] manager: (tap138ee9e6-82): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Feb 28 05:44:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:52Z|01562|binding|INFO|Claiming lport 138ee9e6-821d-4256-8858-52f9e81f4c8b for this chassis.
Feb 28 05:44:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:52Z|01563|binding|INFO|138ee9e6-821d-4256-8858-52f9e81f4c8b: Claiming fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.904 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], port_security=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe4a:d253/64 2001:db8::f816:3eff:fe4a:d253/64', 'neutron:device_id': '795552e4-ec78-4c39-a0a9-60c83ca0f0a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=138ee9e6-821d-4256-8858-52f9e81f4c8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:44:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:52Z|01564|binding|INFO|Setting lport 138ee9e6-821d-4256-8858-52f9e81f4c8b ovn-installed in OVS
Feb 28 05:44:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:44:52Z|01565|binding|INFO|Setting lport 138ee9e6-821d-4256-8858-52f9e81f4c8b up in Southbound
Feb 28 05:44:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.905 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 138ee9e6-821d-4256-8858-52f9e81f4c8b in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c bound to our chassis#033[00m
Feb 28 05:44:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.906 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b90f6c6c-a634-4d08-98d7-fde34f18e37c#033[00m
Feb 28 05:44:52 np0005634017 nova_compute[243452]: 2026-02-28 10:44:52.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71b4aa1b-8c3c-4d45-bb25-f0d8cf5790c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:52 np0005634017 systemd-udevd[377432]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:44:52 np0005634017 systemd-machined[209480]: New machine qemu-182-instance-00000095.
Feb 28 05:44:52 np0005634017 systemd[1]: Started Virtual Machine qemu-182-instance-00000095.
Feb 28 05:44:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0215a68f-b7a6-4436-9710-0fa84ede6037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:52 np0005634017 NetworkManager[49805]: <info>  [1772275492.9677] device (tap138ee9e6-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:44:52 np0005634017 NetworkManager[49805]: <info>  [1772275492.9691] device (tap138ee9e6-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:44:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.970 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c805ed40-0a01-4815-8ad9-d6bc2817a554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.011 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[08f5b622-e55e-4031-b04d-db2e83612ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e095f18-60c4-41a8-93b7-956c328b5ed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377467, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:53 np0005634017 podman[377416]: 2026-02-28 10:44:53.041390162 +0000 UTC m=+0.100397864 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1657135c-b8a5-4b7f-a3e2-ffafdaf54a12]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697030, 'tstamp': 697030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377472, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697032, 'tstamp': 697032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377472, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.053 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb90f6c6c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.057 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb90f6c6c-a0, col_values=(('external_ids', {'iface-id': '1a8a9fba-10a8-48af-ae28-f3be3422056a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:44:53 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.057 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:44:53 np0005634017 podman[377414]: 2026-02-28 10:44:53.060605556 +0000 UTC m=+0.122909542 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:44:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.338 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.388 243456 DEBUG nova.compute.manager [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.389 243456 DEBUG oslo_concurrency.lockutils [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.390 243456 DEBUG oslo_concurrency.lockutils [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.390 243456 DEBUG oslo_concurrency.lockutils [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.391 243456 DEBUG nova.compute.manager [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Processing event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.393 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275493.3932407, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.394 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Started (Lifecycle Event)#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.398 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.403 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.412 243456 INFO nova.virt.libvirt.driver [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance spawned successfully.#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.413 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.417 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.421 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.441 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.441 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.442 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.442 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.442 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.443 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.447 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.447 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275493.3978465, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.447 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.473 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.477 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.477 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.477 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.478 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.481 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275493.4019325, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.481 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.510 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.519 243456 INFO nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 6.75 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.520 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.521 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.536 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.581 243456 INFO nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 7.81 seconds to build instance.#033[00m
Feb 28 05:44:53 np0005634017 nova_compute[243452]: 2026-02-28 10:44:53.600 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.492 243456 DEBUG nova.compute.manager [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.493 243456 DEBUG oslo_concurrency.lockutils [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.494 243456 DEBUG oslo_concurrency.lockutils [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.494 243456 DEBUG oslo_concurrency.lockutils [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.494 243456 DEBUG nova.compute.manager [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] No waiting events found dispatching network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:44:55 np0005634017 nova_compute[243452]: 2026-02-28 10:44:55.495 243456 WARNING nova.compute.manager [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received unexpected event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b for instance with vm_state active and task_state None.#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.118 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.135 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.135 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.656 243456 DEBUG nova.compute.manager [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.657 243456 DEBUG nova.compute.manager [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing instance network info cache due to event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.658 243456 DEBUG oslo_concurrency.lockutils [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.658 243456 DEBUG oslo_concurrency.lockutils [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:44:56 np0005634017 nova_compute[243452]: 2026-02-28 10:44:56.659 243456 DEBUG nova.network.neutron [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:44:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.339 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.340 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:57.885 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:44:57.886 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:44:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:44:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657388479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:44:57 np0005634017 nova_compute[243452]: 2026-02-28 10:44:57.920 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:58 np0005634017 nova_compute[243452]: 2026-02-28 10:44:58.878 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:44:58 np0005634017 nova_compute[243452]: 2026-02-28 10:44:58.879 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:44:58 np0005634017 nova_compute[243452]: 2026-02-28 10:44:58.884 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:44:58 np0005634017 nova_compute[243452]: 2026-02-28 10:44:58.884 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:44:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.054 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.055 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3250MB free_disk=59.92095302604139GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.055 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.055 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.127 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.128 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.129 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.129 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.149 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.164 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.165 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.179 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.209 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:44:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.266 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.713 243456 DEBUG nova.network.neutron [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updated VIF entry in instance network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.715 243456 DEBUG nova.network.neutron [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.743 243456 DEBUG oslo_concurrency.lockutils [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:44:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:44:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185575056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.779 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.785 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.800 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.820 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:44:59 np0005634017 nova_compute[243452]: 2026-02-28 10:44:59.821 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:00 np0005634017 nova_compute[243452]: 2026-02-28 10:45:00.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:45:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:45:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Feb 28 05:45:02 np0005634017 nova_compute[243452]: 2026-02-28 10:45:02.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:45:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:05 np0005634017 nova_compute[243452]: 2026-02-28 10:45:05.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 283 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 575 KiB/s wr, 76 op/s
Feb 28 05:45:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:05Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:d2:53 10.100.0.6
Feb 28 05:45:05 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:05Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:d2:53 10.100.0.6
Feb 28 05:45:07 np0005634017 nova_compute[243452]: 2026-02-28 10:45:07.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 286 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 797 KiB/s wr, 46 op/s
Feb 28 05:45:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 1.9 MiB/s wr, 68 op/s
Feb 28 05:45:10 np0005634017 nova_compute[243452]: 2026-02-28 10:45:10.050 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 05:45:12 np0005634017 nova_compute[243452]: 2026-02-28 10:45:12.030 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:45:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:15 np0005634017 nova_compute[243452]: 2026-02-28 10:45:15.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.368 243456 DEBUG nova.compute.manager [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.368 243456 DEBUG nova.compute.manager [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing instance network info cache due to event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.369 243456 DEBUG oslo_concurrency.lockutils [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.369 243456 DEBUG oslo_concurrency.lockutils [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.370 243456 DEBUG nova.network.neutron [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.420 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.421 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.422 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.422 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.423 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.425 243456 INFO nova.compute.manager [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Terminating instance#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.427 243456 DEBUG nova.compute.manager [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:45:16 np0005634017 kernel: tap138ee9e6-82 (unregistering): left promiscuous mode
Feb 28 05:45:16 np0005634017 NetworkManager[49805]: <info>  [1772275516.4782] device (tap138ee9e6-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:45:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:16Z|01566|binding|INFO|Releasing lport 138ee9e6-821d-4256-8858-52f9e81f4c8b from this chassis (sb_readonly=0)
Feb 28 05:45:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:16Z|01567|binding|INFO|Setting lport 138ee9e6-821d-4256-8858-52f9e81f4c8b down in Southbound
Feb 28 05:45:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:16Z|01568|binding|INFO|Removing iface tap138ee9e6-82 ovn-installed in OVS
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.497 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], port_security=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe4a:d253/64 2001:db8::f816:3eff:fe4a:d253/64', 'neutron:device_id': '795552e4-ec78-4c39-a0a9-60c83ca0f0a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=138ee9e6-821d-4256-8858-52f9e81f4c8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.501 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 138ee9e6-821d-4256-8858-52f9e81f4c8b in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c unbound from our chassis#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.504 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b90f6c6c-a634-4d08-98d7-fde34f18e37c#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.596 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7f83e5-6667-49e5-b308-418607eb477e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:16 np0005634017 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Deactivated successfully.
Feb 28 05:45:16 np0005634017 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Consumed 12.780s CPU time.
Feb 28 05:45:16 np0005634017 systemd-machined[209480]: Machine qemu-182-instance-00000095 terminated.
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.632 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f8715ed1-f6ca-43a8-94bf-cd8b529dcd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.638 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf11608-7c99-421c-adca-da8db5573fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.655824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516655866, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 462, "num_deletes": 251, "total_data_size": 412687, "memory_usage": 421512, "flush_reason": "Manual Compaction"}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516660526, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 409031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51564, "largest_seqno": 52025, "table_properties": {"data_size": 406334, "index_size": 733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6290, "raw_average_key_size": 18, "raw_value_size": 401125, "raw_average_value_size": 1197, "num_data_blocks": 33, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275490, "oldest_key_time": 1772275490, "file_creation_time": 1772275516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4800 microseconds, and 1490 cpu microseconds.
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.660626) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 409031 bytes OK
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.660662) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.663339) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.663407) EVENT_LOG_v1 {"time_micros": 1772275516663394, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.663442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 409922, prev total WAL file size 409922, number of live WAL files 2.
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.664841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(399KB)], [119(10MB)]
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516664907, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11575779, "oldest_snapshot_seqno": -1}
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.664 243456 INFO nova.virt.libvirt.driver [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance destroyed successfully.#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.665 243456 DEBUG nova.objects.instance [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.675 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[67792a3e-ab0f-45c4-a639-27901ac438fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.677 243456 DEBUG nova.virt.libvirt.vif [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1587775994',display_name='tempest-TestGettingAddress-server-1587775994',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1587775994',id=149,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i9i85993',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:44:53Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=795552e4-ec78-4c39-a0a9-60c83ca0f0a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.677 243456 DEBUG nova.network.os_vif_util [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.678 243456 DEBUG nova.network.os_vif_util [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.678 243456 DEBUG os_vif [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.681 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap138ee9e6-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.687 243456 INFO os_vif [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82')#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe656c9-58c4-4ac9-847f-c9697cb9c69f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377590, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5475b55f-89a2-418b-9ccb-dac1e312f2b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697030, 'tstamp': 697030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377598, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697032, 'tstamp': 697032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377598, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.712 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.715 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb90f6c6c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.715 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.715 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb90f6c6c-a0, col_values=(('external_ids', {'iface-id': '1a8a9fba-10a8-48af-ae28-f3be3422056a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:45:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.716 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7179 keys, 9868612 bytes, temperature: kUnknown
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516718881, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 9868612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9821005, "index_size": 28551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187825, "raw_average_key_size": 26, "raw_value_size": 9693358, "raw_average_value_size": 1350, "num_data_blocks": 1111, "num_entries": 7179, "num_filter_entries": 7179, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.719193) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 9868612 bytes
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.720562) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.1 rd, 182.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(52.4) write-amplify(24.1) OK, records in: 7688, records dropped: 509 output_compression: NoCompression
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.720628) EVENT_LOG_v1 {"time_micros": 1772275516720617, "job": 72, "event": "compaction_finished", "compaction_time_micros": 54075, "compaction_time_cpu_micros": 27810, "output_level": 6, "num_output_files": 1, "total_output_size": 9868612, "num_input_records": 7688, "num_output_records": 7179, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516720943, "job": 72, "event": "table_file_deletion", "file_number": 121}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516722559, "job": 72, "event": "table_file_deletion", "file_number": 119}
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.664724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:45:16 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.994 243456 INFO nova.virt.libvirt.driver [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deleting instance files /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_del#033[00m
Feb 28 05:45:16 np0005634017 nova_compute[243452]: 2026-02-28 10:45:16.995 243456 INFO nova.virt.libvirt.driver [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deletion of /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_del complete#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.143 243456 INFO nova.compute.manager [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.145 243456 DEBUG oslo.service.loopingcall [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.146 243456 DEBUG nova.compute.manager [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.147 243456 DEBUG nova.network.neutron [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:45:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.554 243456 DEBUG nova.compute.manager [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-unplugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.555 243456 DEBUG oslo_concurrency.lockutils [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.556 243456 DEBUG oslo_concurrency.lockutils [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.557 243456 DEBUG oslo_concurrency.lockutils [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.557 243456 DEBUG nova.compute.manager [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] No waiting events found dispatching network-vif-unplugged-138ee9e6-821d-4256-8858-52f9e81f4c8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:45:17 np0005634017 nova_compute[243452]: 2026-02-28 10:45:17.557 243456 DEBUG nova.compute.manager [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-unplugged-138ee9e6-821d-4256-8858-52f9e81f4c8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:45:18 np0005634017 nova_compute[243452]: 2026-02-28 10:45:18.759 243456 DEBUG nova.network.neutron [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:45:18 np0005634017 nova_compute[243452]: 2026-02-28 10:45:18.776 243456 INFO nova.compute.manager [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 1.63 seconds to deallocate network for instance.#033[00m
Feb 28 05:45:18 np0005634017 nova_compute[243452]: 2026-02-28 10:45:18.814 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:18 np0005634017 nova_compute[243452]: 2026-02-28 10:45:18.815 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:18 np0005634017 nova_compute[243452]: 2026-02-28 10:45:18.886 243456 DEBUG oslo_concurrency.processutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.4 MiB/s wr, 74 op/s
Feb 28 05:45:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:45:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146506471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.467 243456 DEBUG oslo_concurrency.processutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.476 243456 DEBUG nova.compute.provider_tree [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.503 243456 DEBUG nova.scheduler.client.report [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.534 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.537 243456 DEBUG nova.network.neutron [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updated VIF entry in instance network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.537 243456 DEBUG nova.network.neutron [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.560 243456 DEBUG oslo_concurrency.lockutils [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.571 243456 INFO nova.scheduler.client.report [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 795552e4-ec78-4c39-a0a9-60c83ca0f0a2#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.641 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.642 243456 DEBUG oslo_concurrency.lockutils [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.642 243456 DEBUG oslo_concurrency.lockutils [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.643 243456 DEBUG oslo_concurrency.lockutils [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.643 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] No waiting events found dispatching network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.644 243456 WARNING nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received unexpected event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.644 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-deleted-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.644 243456 INFO nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Neutron deleted interface 138ee9e6-821d-4256-8858-52f9e81f4c8b; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.645 243456 DEBUG nova.network.neutron [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.651 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:19 np0005634017 nova_compute[243452]: 2026-02-28 10:45:19.671 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Detach interface failed, port_id=138ee9e6-821d-4256-8858-52f9e81f4c8b, reason: Instance 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:45:20 np0005634017 nova_compute[243452]: 2026-02-28 10:45:20.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 244 KiB/s wr, 89 op/s
Feb 28 05:45:21 np0005634017 nova_compute[243452]: 2026-02-28 10:45:21.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.083 243456 DEBUG nova.compute.manager [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.083 243456 DEBUG nova.compute.manager [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing instance network info cache due to event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.084 243456 DEBUG oslo_concurrency.lockutils [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.084 243456 DEBUG oslo_concurrency.lockutils [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.084 243456 DEBUG nova.network.neutron [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.540 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.541 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.541 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.541 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.542 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.543 243456 INFO nova.compute.manager [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Terminating instance#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.544 243456 DEBUG nova.compute.manager [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:45:22 np0005634017 kernel: tape109a5e0-a3 (unregistering): left promiscuous mode
Feb 28 05:45:22 np0005634017 NetworkManager[49805]: <info>  [1772275522.6037] device (tape109a5e0-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.616 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:22Z|01569|binding|INFO|Releasing lport e109a5e0-a347-4744-84e4-96a126379d1c from this chassis (sb_readonly=0)
Feb 28 05:45:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:22Z|01570|binding|INFO|Setting lport e109a5e0-a347-4744-84e4-96a126379d1c down in Southbound
Feb 28 05:45:22 np0005634017 ovn_controller[146846]: 2026-02-28T10:45:22Z|01571|binding|INFO|Removing iface tape109a5e0-a3 ovn-installed in OVS
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Deactivated successfully.
Feb 28 05:45:22 np0005634017 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Consumed 14.179s CPU time.
Feb 28 05:45:22 np0005634017 systemd-machined[209480]: Machine qemu-181-instance-00000094 terminated.
Feb 28 05:45:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.722 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], port_security=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fe7a:4488/64 2001:db8::f816:3eff:fe7a:4488/64', 'neutron:device_id': 'c45d561e-3581-4ca2-a8b7-b56cc6ce5b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e109a5e0-a347-4744-84e4-96a126379d1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:45:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.724 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e109a5e0-a347-4744-84e4-96a126379d1c in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c unbound from our chassis#033[00m
Feb 28 05:45:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.725 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f6c6c-a634-4d08-98d7-fde34f18e37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:45:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.726 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de509ebf-2c90-4a1c-80af-aa09b5e39916]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:22 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.727 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c namespace which is not needed anymore#033[00m
Feb 28 05:45:22 np0005634017 NetworkManager[49805]: <info>  [1772275522.7698] manager: (tape109a5e0-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.772 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.795 243456 INFO nova.virt.libvirt.driver [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance destroyed successfully.#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.796 243456 DEBUG nova.objects.instance [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:45:22 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : haproxy version is 2.8.14-c23fe91
Feb 28 05:45:22 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : path to executable is /usr/sbin/haproxy
Feb 28 05:45:22 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [WARNING]  (376466) : Exiting Master process...
Feb 28 05:45:22 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [WARNING]  (376466) : Exiting Master process...
Feb 28 05:45:22 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [ALERT]    (376466) : Current worker (376468) exited with code 143 (Terminated)
Feb 28 05:45:22 np0005634017 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [WARNING]  (376466) : All workers exited. Exiting... (0)
Feb 28 05:45:22 np0005634017 systemd[1]: libpod-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129.scope: Deactivated successfully.
Feb 28 05:45:22 np0005634017 podman[377668]: 2026-02-28 10:45:22.911271321 +0000 UTC m=+0.068840461 container died 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.941 243456 DEBUG nova.virt.libvirt.vif [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-82849746',display_name='tempest-TestGettingAddress-server-82849746',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-82849746',id=148,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:44:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i18076q2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:44:23Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=c45d561e-3581-4ca2-a8b7-b56cc6ce5b43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.942 243456 DEBUG nova.network.os_vif_util [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.943 243456 DEBUG nova.network.os_vif_util [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.944 243456 DEBUG os_vif [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.946 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.947 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape109a5e0-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:45:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129-userdata-shm.mount: Deactivated successfully.
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7b3be6acdeb1e35fcdffcfd7b1606173c8b51c2d19d95f4b060892ed6a5761cd-merged.mount: Deactivated successfully.
Feb 28 05:45:22 np0005634017 nova_compute[243452]: 2026-02-28 10:45:22.954 243456 INFO os_vif [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3')#033[00m
Feb 28 05:45:22 np0005634017 podman[377668]: 2026-02-28 10:45:22.973244026 +0000 UTC m=+0.130813176 container cleanup 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 05:45:22 np0005634017 systemd[1]: libpod-conmon-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129.scope: Deactivated successfully.
Feb 28 05:45:23 np0005634017 podman[377712]: 2026-02-28 10:45:23.047513169 +0000 UTC m=+0.049072661 container remove 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.054 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7e410c-12e4-46e8-a5da-e8cd21d1efbf]: (4, ('Sat Feb 28 10:45:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c (5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129)\n5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129\nSat Feb 28 10:45:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c (5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129)\n5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.057 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34ae9796-fcb6-4bfe-9f24-b6984f7df591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.058 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.059 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:23 np0005634017 kernel: tapb90f6c6c-a0: left promiscuous mode
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77e6f371-f125-4f70-8703-080805818469]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.085 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3360ec23-991d-4949-be0b-564f79c7ce2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c564e3e-eda8-4c5e-80fd-cad790456073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f51f55-9270-46c8-9a29-55546b0d99d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697014, 'reachable_time': 32071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377737, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 systemd[1]: run-netns-ovnmeta\x2db90f6c6c\x2da634\x2d4d08\x2d98d7\x2dfde34f18e37c.mount: Deactivated successfully.
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.109 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.109 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c4c16c-f5b4-4654-8d86-798f4b8a46d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:23 np0005634017 podman[377728]: 2026-02-28 10:45:23.15561323 +0000 UTC m=+0.064489287 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 28 05:45:23 np0005634017 podman[377730]: 2026-02-28 10:45:23.183910361 +0000 UTC m=+0.089997289 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260223)
Feb 28 05:45:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 16 KiB/s wr, 89 op/s
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.298 243456 INFO nova.virt.libvirt.driver [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deleting instance files /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_del#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.299 243456 INFO nova.virt.libvirt.driver [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deletion of /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_del complete#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.349 243456 INFO nova.compute.manager [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.350 243456 DEBUG oslo.service.loopingcall [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.350 243456 DEBUG nova.compute.manager [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.350 243456 DEBUG nova.network.neutron [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.640 243456 DEBUG nova.compute.manager [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-unplugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.641 243456 DEBUG oslo_concurrency.lockutils [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.641 243456 DEBUG oslo_concurrency.lockutils [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.642 243456 DEBUG oslo_concurrency.lockutils [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.643 243456 DEBUG nova.compute.manager [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] No waiting events found dispatching network-vif-unplugged-e109a5e0-a347-4744-84e4-96a126379d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.643 243456 DEBUG nova.compute.manager [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-unplugged-e109a5e0-a347-4744-84e4-96a126379d1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.925 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:45:23 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.926 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:45:23 np0005634017 nova_compute[243452]: 2026-02-28 10:45:23.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.468 243456 DEBUG nova.network.neutron [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.486 243456 INFO nova.compute.manager [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 1.14 seconds to deallocate network for instance.#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.537 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.538 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.599 243456 DEBUG oslo_concurrency.processutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.700 243456 DEBUG nova.network.neutron [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated VIF entry in instance network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.701 243456 DEBUG nova.network.neutron [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:45:24 np0005634017 nova_compute[243452]: 2026-02-28 10:45:24.727 243456 DEBUG oslo_concurrency.lockutils [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:45:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766251624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.191 243456 DEBUG oslo_concurrency.processutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.198 243456 DEBUG nova.compute.provider_tree [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.229 243456 DEBUG nova.scheduler.client.report [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:45:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 207 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 5.1 KiB/s wr, 101 op/s
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.259 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.306 243456 INFO nova.scheduler.client.report [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance c45d561e-3581-4ca2-a8b7-b56cc6ce5b43
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.378 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.757 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG oslo_concurrency.lockutils [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG oslo_concurrency.lockutils [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG oslo_concurrency.lockutils [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] No waiting events found dispatching network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.759 243456 WARNING nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received unexpected event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c for instance with vm_state deleted and task_state None.
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.759 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-deleted-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.759 243456 INFO nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Neutron deleted interface e109a5e0-a347-4744-84e4-96a126379d1c; detaching it from the instance and deleting it from the info cache
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.760 243456 DEBUG nova.network.neutron [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 05:45:25 np0005634017 nova_compute[243452]: 2026-02-28 10:45:25.762 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Detach interface failed, port_id=e109a5e0-a347-4744-84e4-96a126379d1c, reason: Instance c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 05:45:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 5.4 KiB/s wr, 103 op/s
Feb 28 05:45:27 np0005634017 nova_compute[243452]: 2026-02-28 10:45:27.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:45:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:45:29
Feb 28 05:45:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:45:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:45:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Feb 28 05:45:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:45:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.1 KiB/s wr, 111 op/s
Feb 28 05:45:30 np0005634017 nova_compute[243452]: 2026-02-28 10:45:30.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:45:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:45:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:30.928 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:45:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 KiB/s wr, 92 op/s
Feb 28 05:45:31 np0005634017 nova_compute[243452]: 2026-02-28 10:45:31.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.639023345 +0000 UTC m=+0.058339903 container create bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:45:31 np0005634017 nova_compute[243452]: 2026-02-28 10:45:31.662 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275516.6620302, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 05:45:31 np0005634017 nova_compute[243452]: 2026-02-28 10:45:31.663 243456 INFO nova.compute.manager [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Stopped (Lifecycle Event)
Feb 28 05:45:31 np0005634017 nova_compute[243452]: 2026-02-28 10:45:31.687 243456 DEBUG nova.compute.manager [None req-47f860cf-ee8d-45c0-9129-775999614954 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 05:45:31 np0005634017 systemd[1]: Started libpod-conmon-bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f.scope.
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.613953195 +0000 UTC m=+0.033269823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:45:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.753701662 +0000 UTC m=+0.173018290 container init bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:45:31 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.763202261 +0000 UTC m=+0.182518829 container start bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.767898104 +0000 UTC m=+0.187214662 container attach bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:45:31 np0005634017 angry_heyrovsky[377966]: 167 167
Feb 28 05:45:31 np0005634017 systemd[1]: libpod-bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f.scope: Deactivated successfully.
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.772400232 +0000 UTC m=+0.191716800 container died bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:45:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3aae6d4fce438787de20cc003fb641c084f5fe38f155cc22f8397b7d07841deb-merged.mount: Deactivated successfully.
Feb 28 05:45:31 np0005634017 podman[377950]: 2026-02-28 10:45:31.816984614 +0000 UTC m=+0.236301172 container remove bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:45:31 np0005634017 systemd[1]: libpod-conmon-bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f.scope: Deactivated successfully.
Feb 28 05:45:31 np0005634017 podman[377990]: 2026-02-28 10:45:31.986950907 +0000 UTC m=+0.047559848 container create 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:45:32 np0005634017 systemd[1]: Started libpod-conmon-4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa.scope.
Feb 28 05:45:32 np0005634017 podman[377990]: 2026-02-28 10:45:31.966199759 +0000 UTC m=+0.026808790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:45:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:45:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:32 np0005634017 podman[377990]: 2026-02-28 10:45:32.091727944 +0000 UTC m=+0.152336965 container init 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:45:32 np0005634017 podman[377990]: 2026-02-28 10:45:32.106854562 +0000 UTC m=+0.167463523 container start 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:45:32 np0005634017 podman[377990]: 2026-02-28 10:45:32.112039929 +0000 UTC m=+0.172648910 container attach 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:45:32 np0005634017 serene_buck[378007]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:45:32 np0005634017 serene_buck[378007]: --> All data devices are unavailable
Feb 28 05:45:32 np0005634017 systemd[1]: libpod-4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa.scope: Deactivated successfully.
Feb 28 05:45:32 np0005634017 podman[377990]: 2026-02-28 10:45:32.623382598 +0000 UTC m=+0.683991559 container died 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 05:45:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567-merged.mount: Deactivated successfully.
Feb 28 05:45:32 np0005634017 podman[377990]: 2026-02-28 10:45:32.663433872 +0000 UTC m=+0.724042823 container remove 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:45:32 np0005634017 systemd[1]: libpod-conmon-4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa.scope: Deactivated successfully.
Feb 28 05:45:32 np0005634017 nova_compute[243452]: 2026-02-28 10:45:32.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.136463407 +0000 UTC m=+0.062628295 container create 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:45:33 np0005634017 systemd[1]: Started libpod-conmon-3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36.scope.
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.108857235 +0000 UTC m=+0.035022153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:45:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.220247419 +0000 UTC m=+0.146412307 container init 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.226571388 +0000 UTC m=+0.152736236 container start 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.231821917 +0000 UTC m=+0.157986855 container attach 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 05:45:33 np0005634017 ecstatic_moser[378118]: 167 167
Feb 28 05:45:33 np0005634017 systemd[1]: libpod-3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36.scope: Deactivated successfully.
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.233634558 +0000 UTC m=+0.159799436 container died 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:45:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Feb 28 05:45:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-857b4af8337db996ace11a171cfa8533a1d86cf930293f32e18ed5125698e308-merged.mount: Deactivated successfully.
Feb 28 05:45:33 np0005634017 podman[378102]: 2026-02-28 10:45:33.276211814 +0000 UTC m=+0.202376702 container remove 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:45:33 np0005634017 systemd[1]: libpod-conmon-3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36.scope: Deactivated successfully.
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.44172741 +0000 UTC m=+0.054857614 container create 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:45:33 np0005634017 systemd[1]: Started libpod-conmon-895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff.scope.
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.413521332 +0000 UTC m=+0.026651566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:45:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:45:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.552215459 +0000 UTC m=+0.165345723 container init 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.563507849 +0000 UTC m=+0.176638053 container start 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.567921674 +0000 UTC m=+0.181051878 container attach 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]: {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:    "0": [
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:        {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "devices": [
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "/dev/loop3"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            ],
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_name": "ceph_lv0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_size": "21470642176",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "name": "ceph_lv0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "tags": {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cluster_name": "ceph",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.crush_device_class": "",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.encrypted": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.objectstore": "bluestore",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osd_id": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.type": "block",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.vdo": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.with_tpm": "0"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            },
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "type": "block",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "vg_name": "ceph_vg0"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:        }
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:    ],
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:    "1": [
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:        {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "devices": [
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "/dev/loop4"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            ],
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_name": "ceph_lv1",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_size": "21470642176",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "name": "ceph_lv1",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "tags": {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cluster_name": "ceph",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.crush_device_class": "",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.encrypted": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.objectstore": "bluestore",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osd_id": "1",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.type": "block",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.vdo": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.with_tpm": "0"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            },
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "type": "block",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "vg_name": "ceph_vg1"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:        }
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:    ],
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:    "2": [
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:        {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "devices": [
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "/dev/loop5"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            ],
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_name": "ceph_lv2",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_size": "21470642176",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "name": "ceph_lv2",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "tags": {
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.cluster_name": "ceph",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.crush_device_class": "",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.encrypted": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.objectstore": "bluestore",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osd_id": "2",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.type": "block",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.vdo": "0",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:                "ceph.with_tpm": "0"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            },
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "type": "block",
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:            "vg_name": "ceph_vg2"
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:        }
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]:    ]
Feb 28 05:45:33 np0005634017 mystifying_wozniak[378156]: }
Feb 28 05:45:33 np0005634017 systemd[1]: libpod-895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff.scope: Deactivated successfully.
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.893698979 +0000 UTC m=+0.506829163 container died 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:45:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b-merged.mount: Deactivated successfully.
Feb 28 05:45:33 np0005634017 podman[378140]: 2026-02-28 10:45:33.955631552 +0000 UTC m=+0.568761756 container remove 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:45:33 np0005634017 systemd[1]: libpod-conmon-895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff.scope: Deactivated successfully.
Feb 28 05:45:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.464996645 +0000 UTC m=+0.063880140 container create ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:45:34 np0005634017 systemd[1]: Started libpod-conmon-ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce.scope.
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.440320276 +0000 UTC m=+0.039203831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:45:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.575464613 +0000 UTC m=+0.174348088 container init ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.585518107 +0000 UTC m=+0.184401602 container start ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.589658714 +0000 UTC m=+0.188542179 container attach ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:45:34 np0005634017 frosty_cerf[378256]: 167 167
Feb 28 05:45:34 np0005634017 systemd[1]: libpod-ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce.scope: Deactivated successfully.
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.592011291 +0000 UTC m=+0.190894796 container died ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:45:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-304f5ced1b52adef178e09dd00eed4cb756ba069a4b99ca78ee418b1fb7fc76e-merged.mount: Deactivated successfully.
Feb 28 05:45:34 np0005634017 podman[378240]: 2026-02-28 10:45:34.644271711 +0000 UTC m=+0.243155216 container remove ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:45:34 np0005634017 systemd[1]: libpod-conmon-ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce.scope: Deactivated successfully.
Feb 28 05:45:34 np0005634017 podman[378279]: 2026-02-28 10:45:34.858531198 +0000 UTC m=+0.070094636 container create d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:45:34 np0005634017 systemd[1]: Started libpod-conmon-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope.
Feb 28 05:45:34 np0005634017 podman[378279]: 2026-02-28 10:45:34.827852189 +0000 UTC m=+0.039415717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:45:34 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:45:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:34 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:45:34 np0005634017 podman[378279]: 2026-02-28 10:45:34.964849458 +0000 UTC m=+0.176412946 container init d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:45:34 np0005634017 podman[378279]: 2026-02-28 10:45:34.978101484 +0000 UTC m=+0.189664932 container start d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:45:34 np0005634017 podman[378279]: 2026-02-28 10:45:34.981672265 +0000 UTC m=+0.193235683 container attach d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:45:35 np0005634017 nova_compute[243452]: 2026-02-28 10:45:35.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 05:45:35 np0005634017 lvm[378374]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:45:35 np0005634017 lvm[378374]: VG ceph_vg0 finished
Feb 28 05:45:35 np0005634017 lvm[378376]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:45:35 np0005634017 lvm[378376]: VG ceph_vg1 finished
Feb 28 05:45:35 np0005634017 lvm[378377]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:45:35 np0005634017 lvm[378377]: VG ceph_vg2 finished
Feb 28 05:45:35 np0005634017 heuristic_proskuriakova[378296]: {}
Feb 28 05:45:35 np0005634017 systemd[1]: libpod-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope: Deactivated successfully.
Feb 28 05:45:35 np0005634017 systemd[1]: libpod-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope: Consumed 1.505s CPU time.
Feb 28 05:45:35 np0005634017 podman[378279]: 2026-02-28 10:45:35.930451241 +0000 UTC m=+1.142014689 container died d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:45:35 np0005634017 systemd[1]: var-lib-containers-storage-overlay-48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63-merged.mount: Deactivated successfully.
Feb 28 05:45:35 np0005634017 podman[378279]: 2026-02-28 10:45:35.987229338 +0000 UTC m=+1.198792756 container remove d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:45:36 np0005634017 systemd[1]: libpod-conmon-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope: Deactivated successfully.
Feb 28 05:45:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:45:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:45:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:45:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:45:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:45:36 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:45:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 341 B/s wr, 14 op/s
Feb 28 05:45:37 np0005634017 nova_compute[243452]: 2026-02-28 10:45:37.793 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275522.791517, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:45:37 np0005634017 nova_compute[243452]: 2026-02-28 10:45:37.794 243456 INFO nova.compute.manager [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:45:37 np0005634017 nova_compute[243452]: 2026-02-28 10:45:37.811 243456 DEBUG nova.compute.manager [None req-aed2dd59-caca-4026-9a44-9e511843907c - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:45:37 np0005634017 nova_compute[243452]: 2026-02-28 10:45:37.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 9 op/s
Feb 28 05:45:40 np0005634017 nova_compute[243452]: 2026-02-28 10:45:40.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4508337979826865e-05 of space, bias 1.0, pg target 0.00435250139394806 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939187920449838 of space, bias 1.0, pg target 0.7481756376134951 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:45:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:45:43 np0005634017 nova_compute[243452]: 2026-02-28 10:45:43.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:45 np0005634017 nova_compute[243452]: 2026-02-28 10:45:45.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:45:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639511603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:45:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:45:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639511603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:45:45 np0005634017 nova_compute[243452]: 2026-02-28 10:45:45.821 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:47 np0005634017 nova_compute[243452]: 2026-02-28 10:45:47.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:47 np0005634017 nova_compute[243452]: 2026-02-28 10:45:47.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:45:48 np0005634017 nova_compute[243452]: 2026-02-28 10:45:48.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:48 np0005634017 nova_compute[243452]: 2026-02-28 10:45:48.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.681 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:48:42 10.100.0.2 2001:db8::f816:3eff:feae:4842'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feae:4842/64', 'neutron:device_id': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=74ab365e-66c5-40e6-8916-4199b68ab2e3) old=Port_Binding(mac=['fa:16:3e:ae:48:42 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:45:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.682 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 74ab365e-66c5-40e6-8916-4199b68ab2e3 in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 updated#033[00m
Feb 28 05:45:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.683 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network daf6d125-3e9a-40be-b7d7-68719005c3b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:45:49 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.687 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af7cfc31-99f6-4fcc-89a1-c109036582de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:45:50 np0005634017 nova_compute[243452]: 2026-02-28 10:45:50.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:50 np0005634017 nova_compute[243452]: 2026-02-28 10:45:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:52 np0005634017 nova_compute[243452]: 2026-02-28 10:45:52.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:52 np0005634017 nova_compute[243452]: 2026-02-28 10:45:52.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:53 np0005634017 nova_compute[243452]: 2026-02-28 10:45:53.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:53 np0005634017 nova_compute[243452]: 2026-02-28 10:45:53.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:53 np0005634017 nova_compute[243452]: 2026-02-28 10:45:53.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:45:53 np0005634017 nova_compute[243452]: 2026-02-28 10:45:53.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:45:53 np0005634017 nova_compute[243452]: 2026-02-28 10:45:53.332 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:45:53 np0005634017 nova_compute[243452]: 2026-02-28 10:45:53.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:54 np0005634017 podman[378419]: 2026-02-28 10:45:54.145676242 +0000 UTC m=+0.080139740 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:45:54 np0005634017 podman[378418]: 2026-02-28 10:45:54.184016338 +0000 UTC m=+0.117367195 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.791 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.791 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.815 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.895 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.896 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.907 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:45:54 np0005634017 nova_compute[243452]: 2026-02-28 10:45:54.908 243456 INFO nova.compute.claims [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.008 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:45:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064019870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.551 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.559 243456 DEBUG nova.compute.provider_tree [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.577 243456 DEBUG nova.scheduler.client.report [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.611 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.612 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.657 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.658 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.682 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.700 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.777 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.778 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.779 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Creating image(s)#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.808 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.839 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.867 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.873 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.941 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.942 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.943 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.943 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.971 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:45:55 np0005634017 nova_compute[243452]: 2026-02-28 10:45:55.976 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.234 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.271 243456 DEBUG nova.policy [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.315 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.357 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.420 243456 DEBUG nova.objects.instance [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.433 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.433 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Ensure instance console log exists: /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.433 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.434 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:56 np0005634017 nova_compute[243452]: 2026-02-28 10:45:56.434 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.345 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.382 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Successfully created port: 194124fb-5288-4359-ab42-cd28d0ec06bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:45:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:45:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777666500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:45:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:57.886 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:57.887 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:45:57.887 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:57 np0005634017 nova_compute[243452]: 2026-02-28 10:45:57.904 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.104 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.106 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3540MB free_disk=59.987410919740796GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.107 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.107 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.187 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b6b6d2e0-12e8-4804-a192-da4e2444f20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.188 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.189 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.230 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.510 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Successfully updated port: 194124fb-5288-4359-ab42-cd28d0ec06bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.528 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.528 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.529 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.609 243456 DEBUG nova.compute.manager [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.610 243456 DEBUG nova.compute.manager [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing instance network info cache due to event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.610 243456 DEBUG oslo_concurrency.lockutils [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.672 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:45:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:45:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469095079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.784 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.791 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.807 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.835 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:45:58 np0005634017 nova_compute[243452]: 2026-02-28 10:45:58.836 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:45:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:45:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 895 KiB/s wr, 13 op/s
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.150 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.224 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:46:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.767 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.767 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance network_info: |[{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.767 243456 DEBUG oslo_concurrency.lockutils [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.768 243456 DEBUG nova.network.neutron [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.771 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start _get_guest_xml network_info=[{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.778 243456 WARNING nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.783 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.784 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.795 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.796 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.796 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.797 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.797 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.798 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.798 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.798 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.799 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.799 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.799 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.800 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.800 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.800 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:46:00 np0005634017 nova_compute[243452]: 2026-02-28 10:46:00.806 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:46:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:46:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668947447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.384 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.410 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.414 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:46:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094674942' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.940 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.943 243456 DEBUG nova.virt.libvirt.vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:45:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-760627914',display_name='tempest-TestGettingAddress-server-760627914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-760627914',id=150,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7spaod71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:45:55Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b6b6d2e0-12e8-4804-a192-da4e2444f20e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.944 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.945 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.946 243456 DEBUG nova.objects.instance [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.968 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <uuid>b6b6d2e0-12e8-4804-a192-da4e2444f20e</uuid>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <name>instance-00000096</name>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-760627914</nova:name>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:46:00</nova:creationTime>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <nova:port uuid="194124fb-5288-4359-ab42-cd28d0ec06bc">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee5:97c" ipVersion="6"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <entry name="serial">b6b6d2e0-12e8-4804-a192-da4e2444f20e</entry>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <entry name="uuid">b6b6d2e0-12e8-4804-a192-da4e2444f20e</entry>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:e5:09:7c"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <target dev="tap194124fb-52"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/console.log" append="off"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:46:01 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:46:01 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:46:01 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:46:01 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Preparing to wait for external event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.970 243456 DEBUG nova.virt.libvirt.vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:45:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-760627914',display_name='tempest-TestGettingAddress-server-760627914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-760627914',id=150,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7spaod71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:45:55Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b6b6d2e0-12e8-4804-a192-da4e2444f20e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.970 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.971 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.971 243456 DEBUG os_vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.972 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.973 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.976 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap194124fb-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.976 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap194124fb-52, col_values=(('external_ids', {'iface-id': '194124fb-5288-4359-ab42-cd28d0ec06bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:09:7c', 'vm-uuid': 'b6b6d2e0-12e8-4804-a192-da4e2444f20e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:01 np0005634017 NetworkManager[49805]: <info>  [1772275561.9797] manager: (tap194124fb-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:01 np0005634017 nova_compute[243452]: 2026-02-28 10:46:01.988 243456 INFO os_vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52')#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.043 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.044 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.044 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:e5:09:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.045 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Using config drive#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.077 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.697 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Creating config drive at /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.700 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9vat019v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.792 243456 DEBUG nova.network.neutron [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated VIF entry in instance network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.793 243456 DEBUG nova.network.neutron [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.808 243456 DEBUG oslo_concurrency.lockutils [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.839 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9vat019v" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.869 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:02 np0005634017 nova_compute[243452]: 2026-02-28 10:46:02.874 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.014 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.015 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deleting local config drive /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config because it was imported into RBD.#033[00m
Feb 28 05:46:03 np0005634017 kernel: tap194124fb-52: entered promiscuous mode
Feb 28 05:46:03 np0005634017 NetworkManager[49805]: <info>  [1772275563.0623] manager: (tap194124fb-52): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Feb 28 05:46:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:03Z|01572|binding|INFO|Claiming lport 194124fb-5288-4359-ab42-cd28d0ec06bc for this chassis.
Feb 28 05:46:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:03Z|01573|binding|INFO|194124fb-5288-4359-ab42-cd28d0ec06bc: Claiming fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.069 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.081 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], port_security=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fee5:97c/64', 'neutron:device_id': 'b6b6d2e0-12e8-4804-a192-da4e2444f20e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=194124fb-5288-4359-ab42-cd28d0ec06bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.083 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 194124fb-5288-4359-ab42-cd28d0ec06bc in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 bound to our chassis#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.084 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf6d125-3e9a-40be-b7d7-68719005c3b1#033[00m
Feb 28 05:46:03 np0005634017 systemd-udevd[378831]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:46:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:03Z|01574|binding|INFO|Setting lport 194124fb-5288-4359-ab42-cd28d0ec06bc ovn-installed in OVS
Feb 28 05:46:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:03Z|01575|binding|INFO|Setting lport 194124fb-5288-4359-ab42-cd28d0ec06bc up in Southbound
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:03 np0005634017 systemd-machined[209480]: New machine qemu-183-instance-00000096.
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd03e5e-be6b-4dd1-92b7-e68fbff90e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.097 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdaf6d125-31 in ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.101 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdaf6d125-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94f3b940-13ff-4156-a357-d0e172836a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3beecf-40cb-421e-87d5-97d0dd8c092f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 NetworkManager[49805]: <info>  [1772275563.1067] device (tap194124fb-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:46:03 np0005634017 NetworkManager[49805]: <info>  [1772275563.1076] device (tap194124fb-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:46:03 np0005634017 systemd[1]: Started Virtual Machine qemu-183-instance-00000096.
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.114 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2259d5d3-c46a-46cf-b5a4-36f61670ac65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.126 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1defb123-6c59-4ce8-acd2-e15d625d1f50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.158 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cc493af5-9c79-4de6-870e-9111840d3a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07401b11-6869-430d-abd1-90c233024350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 NetworkManager[49805]: <info>  [1772275563.1648] manager: (tapdaf6d125-30): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.198 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d4595f4c-07a9-4f43-955f-6206a9328d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.202 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b71cf11b-3593-4fc2-8e56-4d56cb5c8776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 NetworkManager[49805]: <info>  [1772275563.2224] device (tapdaf6d125-30): carrier: link connected
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.226 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7acb8f-c5ed-43e5-9a5e-d15ceac29711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e51bcd3-f179-4d5e-bd8f-eaa80c7b70cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378865, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.256 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c924758b-8177-4216-b3eb-88038989befc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:4842'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707050, 'tstamp': 707050}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378866, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.271 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bcc6d5-a08b-4cca-b5ee-f5e0ab4fd1d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378867, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aacbfc61-89d3-484f-8785-8e3ac2d7bb72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.334 243456 DEBUG nova.compute.manager [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.335 243456 DEBUG oslo_concurrency.lockutils [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.335 243456 DEBUG oslo_concurrency.lockutils [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.336 243456 DEBUG oslo_concurrency.lockutils [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.337 243456 DEBUG nova.compute.manager [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Processing event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.366 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe91663-0d9b-4316-aea4-380b9703261c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf6d125-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:03 np0005634017 NetworkManager[49805]: <info>  [1772275563.3713] manager: (tapdaf6d125-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.371 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:03 np0005634017 kernel: tapdaf6d125-30: entered promiscuous mode
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.374 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf6d125-30, col_values=(('external_ids', {'iface-id': '74ab365e-66c5-40e6-8916-4199b68ab2e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:03 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:03Z|01576|binding|INFO|Releasing lport 74ab365e-66c5-40e6-8916-4199b68ab2e3 from this chassis (sb_readonly=0)
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.381 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/daf6d125-3e9a-40be-b7d7-68719005c3b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/daf6d125-3e9a-40be-b7d7-68719005c3b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.382 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[271f063c-ac27-49aa-8321-020882f4997b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.383 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/daf6d125-3e9a-40be-b7d7-68719005c3b1.pid.haproxy
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:46:03 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.384 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'env', 'PROCESS_TAG=haproxy-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/daf6d125-3e9a-40be-b7d7-68719005c3b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.649 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275563.6483757, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.651 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Started (Lifecycle Event)#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.654 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.657 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.660 243456 INFO nova.virt.libvirt.driver [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance spawned successfully.#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.661 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.672 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.677 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.681 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.681 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.682 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.682 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.682 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.683 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.714 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.715 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275563.649735, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.715 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:46:03 np0005634017 podman[378937]: 2026-02-28 10:46:03.738038046 +0000 UTC m=+0.046157588 container create 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:46:03 np0005634017 systemd[1]: Started libpod-conmon-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66.scope.
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.768 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.772 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275563.656227, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.772 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.775 243456 INFO nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.775 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fcf45e9da452fe6f31772cee126479285411300a0844b62ff3dcab6d823ade5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:03 np0005634017 podman[378937]: 2026-02-28 10:46:03.714688155 +0000 UTC m=+0.022807727 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.815 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:46:03 np0005634017 podman[378937]: 2026-02-28 10:46:03.81693477 +0000 UTC m=+0.125054322 container init 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 05:46:03 np0005634017 podman[378937]: 2026-02-28 10:46:03.826277635 +0000 UTC m=+0.134397187 container start 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 05:46:03 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : New worker (378958) forked
Feb 28 05:46:03 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : Loading success.
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.856 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.873 243456 INFO nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 9.01 seconds to build instance.#033[00m
Feb 28 05:46:03 np0005634017 nova_compute[243452]: 2026-02-28 10:46:03.893 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.442 243456 DEBUG nova.compute.manager [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.442 243456 DEBUG oslo_concurrency.lockutils [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.445 243456 DEBUG oslo_concurrency.lockutils [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.445 243456 DEBUG oslo_concurrency.lockutils [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.445 243456 DEBUG nova.compute.manager [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] No waiting events found dispatching network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:46:05 np0005634017 nova_compute[243452]: 2026-02-28 10:46:05.446 243456 WARNING nova.compute.manager [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received unexpected event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc for instance with vm_state active and task_state None.#033[00m
Feb 28 05:46:06 np0005634017 nova_compute[243452]: 2026-02-28 10:46:06.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 862 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 28 05:46:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 05:46:09 np0005634017 NetworkManager[49805]: <info>  [1772275569.6759] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Feb 28 05:46:09 np0005634017 NetworkManager[49805]: <info>  [1772275569.6773] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Feb 28 05:46:09 np0005634017 nova_compute[243452]: 2026-02-28 10:46:09.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:09Z|01577|binding|INFO|Releasing lport 74ab365e-66c5-40e6-8916-4199b68ab2e3 from this chassis (sb_readonly=0)
Feb 28 05:46:09 np0005634017 nova_compute[243452]: 2026-02-28 10:46:09.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:09 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:09Z|01578|binding|INFO|Releasing lport 74ab365e-66c5-40e6-8916-4199b68ab2e3 from this chassis (sb_readonly=0)
Feb 28 05:46:09 np0005634017 nova_compute[243452]: 2026-02-28 10:46:09.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:10 np0005634017 nova_compute[243452]: 2026-02-28 10:46:10.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:10 np0005634017 nova_compute[243452]: 2026-02-28 10:46:10.804 243456 DEBUG nova.compute.manager [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:10 np0005634017 nova_compute[243452]: 2026-02-28 10:46:10.804 243456 DEBUG nova.compute.manager [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing instance network info cache due to event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:46:10 np0005634017 nova_compute[243452]: 2026-02-28 10:46:10.804 243456 DEBUG oslo_concurrency.lockutils [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:46:10 np0005634017 nova_compute[243452]: 2026-02-28 10:46:10.805 243456 DEBUG oslo_concurrency.lockutils [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:46:10 np0005634017 nova_compute[243452]: 2026-02-28 10:46:10.805 243456 DEBUG nova.network.neutron [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:46:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 932 KiB/s wr, 87 op/s
Feb 28 05:46:11 np0005634017 nova_compute[243452]: 2026-02-28 10:46:11.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:12 np0005634017 nova_compute[243452]: 2026-02-28 10:46:12.491 243456 DEBUG nova.network.neutron [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated VIF entry in instance network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:46:12 np0005634017 nova_compute[243452]: 2026-02-28 10:46:12.492 243456 DEBUG nova.network.neutron [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:12 np0005634017 nova_compute[243452]: 2026-02-28 10:46:12.516 243456 DEBUG oslo_concurrency.lockutils [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:46:13 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 28 05:46:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:15Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:09:7c 10.100.0.4
Feb 28 05:46:15 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:15Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:09:7c 10.100.0.4
Feb 28 05:46:15 np0005634017 nova_compute[243452]: 2026-02-28 10:46:15.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Feb 28 05:46:16 np0005634017 nova_compute[243452]: 2026-02-28 10:46:16.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 222 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 74 op/s
Feb 28 05:46:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 230 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 05:46:20 np0005634017 nova_compute[243452]: 2026-02-28 10:46:20.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 530 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Feb 28 05:46:21 np0005634017 nova_compute[243452]: 2026-02-28 10:46:21.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 05:46:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:25 np0005634017 podman[378970]: 2026-02-28 10:46:25.152923943 +0000 UTC m=+0.081316123 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 28 05:46:25 np0005634017 nova_compute[243452]: 2026-02-28 10:46:25.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:25 np0005634017 podman[378969]: 2026-02-28 10:46:25.178015774 +0000 UTC m=+0.109396129 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 05:46:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 05:46:26 np0005634017 nova_compute[243452]: 2026-02-28 10:46:26.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 906 KiB/s wr, 43 op/s
Feb 28 05:46:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:46:29
Feb 28 05:46:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:46:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:46:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'vms', 'default.rgw.control', '.mgr', '.rgw.root']
Feb 28 05:46:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:46:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 807 KiB/s wr, 32 op/s
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.670 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.671 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.693 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.772 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.773 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.784 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.785 243456 INFO nova.compute.claims [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:46:29 np0005634017 nova_compute[243452]: 2026-02-28 10:46:29.889 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:46:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:46:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319888767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.451 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.460 243456 DEBUG nova.compute.provider_tree [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.484 243456 DEBUG nova.scheduler.client.report [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.515 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.516 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.566 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.567 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.588 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.607 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.699 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.702 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.703 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Creating image(s)#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.736 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.765 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.791 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.795 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.874 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.875 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.875 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.875 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.899 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:46:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:46:30 np0005634017 nova_compute[243452]: 2026-02-28 10:46:30.903 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a4475d45-f11e-4b8c-a118-5b4a347c2506_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.146 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a4475d45-f11e-4b8c-a118-5b4a347c2506_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.230 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.286 243456 DEBUG nova.policy [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:46:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 51 KiB/s wr, 12 op/s
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.337 243456 DEBUG nova.objects.instance [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid a4475d45-f11e-4b8c-a118-5b4a347c2506 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.350 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Ensure instance console log exists: /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:31.966 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:31 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:31.969 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:46:31 np0005634017 nova_compute[243452]: 2026-02-28 10:46:31.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:32 np0005634017 nova_compute[243452]: 2026-02-28 10:46:32.051 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Successfully created port: 7b3c6111-c4ec-40c0-94eb-725c085f9600 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:46:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 255 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.520 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Successfully updated port: 7b3c6111-c4ec-40c0-94eb-725c085f9600 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.539 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.540 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.540 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.606 243456 DEBUG nova.compute.manager [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.607 243456 DEBUG nova.compute.manager [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing instance network info cache due to event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:46:33 np0005634017 nova_compute[243452]: 2026-02-28 10:46:33.607 243456 DEBUG oslo_concurrency.lockutils [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:46:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:34 np0005634017 nova_compute[243452]: 2026-02-28 10:46:34.215 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 266 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.598 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.615 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.615 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance network_info: |[{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.616 243456 DEBUG oslo_concurrency.lockutils [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.616 243456 DEBUG nova.network.neutron [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.619 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start _get_guest_xml network_info=[{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.623 243456 WARNING nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.627 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.628 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.630 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.631 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.631 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.631 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.632 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.632 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.632 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.634 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.634 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.634 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:46:35 np0005634017 nova_compute[243452]: 2026-02-28 10:46:35.637 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625851700' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.186 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.219 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.228 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363406954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.751 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.753 243456 DEBUG nova.virt.libvirt.vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:46:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1318971108',display_name='tempest-TestGettingAddress-server-1318971108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1318971108',id=151,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-4j40xw8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:46:30Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=a4475d45-f11e-4b8c-a118-5b4a347c2506,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.753 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.754 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.756 243456 DEBUG nova.objects.instance [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4475d45-f11e-4b8c-a118-5b4a347c2506 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.778 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <uuid>a4475d45-f11e-4b8c-a118-5b4a347c2506</uuid>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <name>instance-00000097</name>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestGettingAddress-server-1318971108</nova:name>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:46:35</nova:creationTime>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <nova:port uuid="7b3c6111-c4ec-40c0-94eb-725c085f9600">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fec4:c96d" ipVersion="6"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <entry name="serial">a4475d45-f11e-4b8c-a118-5b4a347c2506</entry>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <entry name="uuid">a4475d45-f11e-4b8c-a118-5b4a347c2506</entry>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a4475d45-f11e-4b8c-a118-5b4a347c2506_disk">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:c4:c9:6d"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <target dev="tap7b3c6111-c4"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/console.log" append="off"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:46:36 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:46:36 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:46:36 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:46:36 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.779 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Preparing to wait for external event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.779 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.780 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.780 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.782 243456 DEBUG nova.virt.libvirt.vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:46:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1318971108',display_name='tempest-TestGettingAddress-server-1318971108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1318971108',id=151,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-4j40xw8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:46:30Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=a4475d45-f11e-4b8c-a118-5b4a347c2506,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.782 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.783 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.784 243456 DEBUG os_vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.785 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.785 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.786 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.793 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.794 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b3c6111-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.795 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b3c6111-c4, col_values=(('external_ids', {'iface-id': '7b3c6111-c4ec-40c0-94eb-725c085f9600', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:c9:6d', 'vm-uuid': 'a4475d45-f11e-4b8c-a118-5b4a347c2506'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:36 np0005634017 NetworkManager[49805]: <info>  [1772275596.7991] manager: (tap7b3c6111-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.805 243456 INFO os_vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4')#033[00m
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:46:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.860 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.862 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.862 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:c4:c9:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.863 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Using config drive#033[00m
Feb 28 05:46:36 np0005634017 nova_compute[243452]: 2026-02-28 10:46:36.900 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.271440048 +0000 UTC m=+0.059645620 container create fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 05:46:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:46:37 np0005634017 systemd[1]: Started libpod-conmon-fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9.scope.
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.240293446 +0000 UTC m=+0.028499078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:46:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.374923199 +0000 UTC m=+0.163128761 container init fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.381991609 +0000 UTC m=+0.170197171 container start fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:46:37 np0005634017 confident_cerf[379446]: 167 167
Feb 28 05:46:37 np0005634017 systemd[1]: libpod-fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9.scope: Deactivated successfully.
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.387163575 +0000 UTC m=+0.175369147 container attach fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.387661809 +0000 UTC m=+0.175867371 container died fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:46:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-01ea78c662de353dca9f27653a330019e30177deeab7d134afa7d0a411802adf-merged.mount: Deactivated successfully.
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.423 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Creating config drive at /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config#033[00m
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.430 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpk4xonqgk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:37 np0005634017 podman[379430]: 2026-02-28 10:46:37.461059608 +0000 UTC m=+0.249265160 container remove fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:46:37 np0005634017 systemd[1]: libpod-conmon-fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9.scope: Deactivated successfully.
Feb 28 05:46:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:46:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:46:37 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.568 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpk4xonqgk" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.600 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.604 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:37 np0005634017 podman[379474]: 2026-02-28 10:46:37.640873429 +0000 UTC m=+0.074151320 container create 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:46:37 np0005634017 systemd[1]: Started libpod-conmon-3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d.scope.
Feb 28 05:46:37 np0005634017 podman[379474]: 2026-02-28 10:46:37.608874913 +0000 UTC m=+0.042152864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:46:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:37 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:37 np0005634017 podman[379474]: 2026-02-28 10:46:37.736840527 +0000 UTC m=+0.170118398 container init 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:46:37 np0005634017 podman[379474]: 2026-02-28 10:46:37.74471714 +0000 UTC m=+0.177994991 container start 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:46:37 np0005634017 podman[379474]: 2026-02-28 10:46:37.749262698 +0000 UTC m=+0.182540629 container attach 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.780 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.781 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deleting local config drive /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config because it was imported into RBD.#033[00m
Feb 28 05:46:37 np0005634017 kernel: tap7b3c6111-c4: entered promiscuous mode
Feb 28 05:46:37 np0005634017 NetworkManager[49805]: <info>  [1772275597.8313] manager: (tap7b3c6111-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:37Z|01579|binding|INFO|Claiming lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 for this chassis.
Feb 28 05:46:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:37Z|01580|binding|INFO|7b3c6111-c4ec-40c0-94eb-725c085f9600: Claiming fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d
Feb 28 05:46:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:37Z|01581|binding|INFO|Setting lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 ovn-installed in OVS
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:37 np0005634017 nova_compute[243452]: 2026-02-28 10:46:37.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:37 np0005634017 systemd-machined[209480]: New machine qemu-184-instance-00000097.
Feb 28 05:46:37 np0005634017 systemd-udevd[379546]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:46:37 np0005634017 NetworkManager[49805]: <info>  [1772275597.8840] device (tap7b3c6111-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:46:37 np0005634017 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Feb 28 05:46:37 np0005634017 NetworkManager[49805]: <info>  [1772275597.8851] device (tap7b3c6111-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:46:37 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:37Z|01582|binding|INFO|Setting lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 up in Southbound
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.916 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], port_security=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fec4:c96d/64', 'neutron:device_id': 'a4475d45-f11e-4b8c-a118-5b4a347c2506', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7b3c6111-c4ec-40c0-94eb-725c085f9600) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.917 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3c6111-c4ec-40c0-94eb-725c085f9600 in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 bound to our chassis#033[00m
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.918 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf6d125-3e9a-40be-b7d7-68719005c3b1#033[00m
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea84289-8e5f-4e11-b157-c01f4ef5bd68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7c295f13-5cb3-4be9-98fd-8f3b6e694292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.968 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af288cf9-4f88-4ecb-8876-a2ba81498d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:37 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.994 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17452bed-5ca2-4c2b-ad13-ac3e924da77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.007 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7667c2-8c91-4e17-b1b3-76cf66007c38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379564, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79cac967-a252-4d52-b82b-6e1b3a33412f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707061, 'tstamp': 707061}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379566, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707064, 'tstamp': 707064}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379566, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.022 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf6d125-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf6d125-30, col_values=(('external_ids', {'iface-id': '74ab365e-66c5-40e6-8916-4199b68ab2e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.026 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:46:38 np0005634017 funny_margulis[379510]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:46:38 np0005634017 funny_margulis[379510]: --> All data devices are unavailable
Feb 28 05:46:38 np0005634017 systemd[1]: libpod-3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d.scope: Deactivated successfully.
Feb 28 05:46:38 np0005634017 podman[379614]: 2026-02-28 10:46:38.279267426 +0000 UTC m=+0.029532597 container died 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.297 243456 DEBUG nova.network.neutron [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updated VIF entry in instance network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.298 243456 DEBUG nova.network.neutron [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b-merged.mount: Deactivated successfully.
Feb 28 05:46:38 np0005634017 podman[379614]: 2026-02-28 10:46:38.332777851 +0000 UTC m=+0.083043022 container remove 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:46:38 np0005634017 systemd[1]: libpod-conmon-3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d.scope: Deactivated successfully.
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.358 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275598.356372, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.358 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Started (Lifecycle Event)#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.398 243456 DEBUG oslo_concurrency.lockutils [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.430 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.435 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275598.3601243, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.436 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.442 243456 DEBUG nova.compute.manager [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.442 243456 DEBUG oslo_concurrency.lockutils [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.442 243456 DEBUG oslo_concurrency.lockutils [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.443 243456 DEBUG oslo_concurrency.lockutils [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.443 243456 DEBUG nova.compute.manager [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Processing event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.444 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.449 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.454 243456 INFO nova.virt.libvirt.driver [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance spawned successfully.#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.454 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.458 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.462 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275598.4476247, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.475 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.477 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.477 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.478 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.478 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.479 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.488 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.492 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.531 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.753 243456 INFO nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 8.05 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.754 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.82754063 +0000 UTC m=+0.038702727 container create 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.851 243456 INFO nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 9.12 seconds to build instance.#033[00m
Feb 28 05:46:38 np0005634017 nova_compute[243452]: 2026-02-28 10:46:38.877 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:38 np0005634017 systemd[1]: Started libpod-conmon-80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404.scope.
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.809802968 +0000 UTC m=+0.020965055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:46:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.930713002 +0000 UTC m=+0.141875159 container init 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.945877021 +0000 UTC m=+0.157039128 container start 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.95077664 +0000 UTC m=+0.161938817 container attach 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:46:38 np0005634017 magical_almeida[379712]: 167 167
Feb 28 05:46:38 np0005634017 systemd[1]: libpod-80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404.scope: Deactivated successfully.
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.953716503 +0000 UTC m=+0.164878620 container died 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:46:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5102d2b7618442f932cd9642af0ddb52a9e35e6554fefadb4bd882727be19cf4-merged.mount: Deactivated successfully.
Feb 28 05:46:38 np0005634017 podman[379696]: 2026-02-28 10:46:38.998025858 +0000 UTC m=+0.209187945 container remove 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:46:39 np0005634017 systemd[1]: libpod-conmon-80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404.scope: Deactivated successfully.
Feb 28 05:46:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.177286714 +0000 UTC m=+0.038909943 container create 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:46:39 np0005634017 systemd[1]: Started libpod-conmon-68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312.scope.
Feb 28 05:46:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.162248098 +0000 UTC m=+0.023871347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.263614258 +0000 UTC m=+0.125237507 container init 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.269142185 +0000 UTC m=+0.130765414 container start 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.273377805 +0000 UTC m=+0.135001034 container attach 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:46:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]: {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:    "0": [
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:        {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "devices": [
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "/dev/loop3"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            ],
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_name": "ceph_lv0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_size": "21470642176",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "name": "ceph_lv0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "tags": {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cluster_name": "ceph",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.crush_device_class": "",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.encrypted": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.objectstore": "bluestore",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osd_id": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.type": "block",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.vdo": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.with_tpm": "0"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            },
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "type": "block",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "vg_name": "ceph_vg0"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:        }
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:    ],
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:    "1": [
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:        {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "devices": [
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "/dev/loop4"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            ],
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_name": "ceph_lv1",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_size": "21470642176",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "name": "ceph_lv1",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "tags": {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cluster_name": "ceph",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.crush_device_class": "",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.encrypted": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.objectstore": "bluestore",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osd_id": "1",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.type": "block",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.vdo": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.with_tpm": "0"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            },
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "type": "block",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "vg_name": "ceph_vg1"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:        }
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:    ],
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:    "2": [
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:        {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "devices": [
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "/dev/loop5"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            ],
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_name": "ceph_lv2",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_size": "21470642176",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "name": "ceph_lv2",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "tags": {
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.cluster_name": "ceph",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.crush_device_class": "",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.encrypted": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.objectstore": "bluestore",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osd_id": "2",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.type": "block",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.vdo": "0",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:                "ceph.with_tpm": "0"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            },
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "type": "block",
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:            "vg_name": "ceph_vg2"
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:        }
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]:    ]
Feb 28 05:46:39 np0005634017 stupefied_wing[379755]: }
Feb 28 05:46:39 np0005634017 systemd[1]: libpod-68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312.scope: Deactivated successfully.
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.570378034 +0000 UTC m=+0.432001263 container died 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:46:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc-merged.mount: Deactivated successfully.
Feb 28 05:46:39 np0005634017 podman[379738]: 2026-02-28 10:46:39.617803817 +0000 UTC m=+0.479427046 container remove 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:46:39 np0005634017 systemd[1]: libpod-conmon-68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312.scope: Deactivated successfully.
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.11937667 +0000 UTC m=+0.045253242 container create 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:46:40 np0005634017 systemd[1]: Started libpod-conmon-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope.
Feb 28 05:46:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.096754169 +0000 UTC m=+0.022630761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.200870548 +0000 UTC m=+0.126747150 container init 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.21155238 +0000 UTC m=+0.137428942 container start 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.21577214 +0000 UTC m=+0.141648742 container attach 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:46:40 np0005634017 systemd[1]: libpod-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope: Deactivated successfully.
Feb 28 05:46:40 np0005634017 strange_mccarthy[379856]: 167 167
Feb 28 05:46:40 np0005634017 conmon[379856]: conmon 5420984903f8ad618db5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope/container/memory.events
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.252972373 +0000 UTC m=+0.178848945 container died 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b4b4885894f5fb226486b6b915f5c9632aaa01dc6ac2f277e8c0e19bf11df8b1-merged.mount: Deactivated successfully.
Feb 28 05:46:40 np0005634017 podman[379840]: 2026-02-28 10:46:40.301346983 +0000 UTC m=+0.227223585 container remove 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:46:40 np0005634017 systemd[1]: libpod-conmon-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope: Deactivated successfully.
Feb 28 05:46:40 np0005634017 podman[379881]: 2026-02-28 10:46:40.47183733 +0000 UTC m=+0.046335993 container create e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:46:40 np0005634017 systemd[1]: Started libpod-conmon-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope.
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.538 243456 DEBUG nova.compute.manager [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.538 243456 DEBUG oslo_concurrency.lockutils [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 DEBUG oslo_concurrency.lockutils [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 DEBUG oslo_concurrency.lockutils [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 DEBUG nova.compute.manager [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] No waiting events found dispatching network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:46:40 np0005634017 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 WARNING nova.compute.manager [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received unexpected event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:46:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:46:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:46:40 np0005634017 podman[379881]: 2026-02-28 10:46:40.448462138 +0000 UTC m=+0.022960781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:46:40 np0005634017 podman[379881]: 2026-02-28 10:46:40.580632951 +0000 UTC m=+0.155131624 container init e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:46:40 np0005634017 podman[379881]: 2026-02-28 10:46:40.587056313 +0000 UTC m=+0.161554976 container start e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:46:40 np0005634017 podman[379881]: 2026-02-28 10:46:40.591300903 +0000 UTC m=+0.165799546 container attach e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Feb 28 05:46:41 np0005634017 lvm[379976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:46:41 np0005634017 lvm[379976]: VG ceph_vg1 finished
Feb 28 05:46:41 np0005634017 lvm[379975]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:46:41 np0005634017 lvm[379975]: VG ceph_vg0 finished
Feb 28 05:46:41 np0005634017 lvm[379978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:46:41 np0005634017 lvm[379978]: VG ceph_vg2 finished
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011221662958424221 of space, bias 1.0, pg target 0.33664988875272667 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939187920449838 of space, bias 1.0, pg target 0.7481756376134951 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:46:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:46:41 np0005634017 quizzical_bartik[379897]: {}
Feb 28 05:46:41 np0005634017 systemd[1]: libpod-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope: Deactivated successfully.
Feb 28 05:46:41 np0005634017 systemd[1]: libpod-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope: Consumed 1.271s CPU time.
Feb 28 05:46:41 np0005634017 conmon[379897]: conmon e544e59f7b77aed0af48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope/container/memory.events
Feb 28 05:46:41 np0005634017 podman[379881]: 2026-02-28 10:46:41.468959625 +0000 UTC m=+1.043458248 container died e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:46:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b-merged.mount: Deactivated successfully.
Feb 28 05:46:41 np0005634017 podman[379881]: 2026-02-28 10:46:41.512553849 +0000 UTC m=+1.087052472 container remove e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:46:41 np0005634017 systemd[1]: libpod-conmon-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope: Deactivated successfully.
Feb 28 05:46:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:46:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:46:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:46:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:46:41 np0005634017 nova_compute[243452]: 2026-02-28 10:46:41.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:41.972 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:46:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:46:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:46:42 np0005634017 nova_compute[243452]: 2026-02-28 10:46:42.631 243456 DEBUG nova.compute.manager [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:46:42 np0005634017 nova_compute[243452]: 2026-02-28 10:46:42.631 243456 DEBUG nova.compute.manager [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing instance network info cache due to event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:46:42 np0005634017 nova_compute[243452]: 2026-02-28 10:46:42.632 243456 DEBUG oslo_concurrency.lockutils [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:46:42 np0005634017 nova_compute[243452]: 2026-02-28 10:46:42.632 243456 DEBUG oslo_concurrency.lockutils [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:46:42 np0005634017 nova_compute[243452]: 2026-02-28 10:46:42.632 243456 DEBUG nova.network.neutron [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:46:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 05:46:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:44 np0005634017 nova_compute[243452]: 2026-02-28 10:46:44.662 243456 DEBUG nova.network.neutron [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updated VIF entry in instance network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:46:44 np0005634017 nova_compute[243452]: 2026-02-28 10:46:44.662 243456 DEBUG nova.network.neutron [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:44 np0005634017 nova_compute[243452]: 2026-02-28 10:46:44.735 243456 DEBUG oslo_concurrency.lockutils [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:45 np0005634017 nova_compute[243452]: 2026-02-28 10:46:45.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 760 KiB/s wr, 87 op/s
Feb 28 05:46:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:46:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141842466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:46:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:46:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141842466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:46:46 np0005634017 nova_compute[243452]: 2026-02-28 10:46:46.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:46 np0005634017 nova_compute[243452]: 2026-02-28 10:46:46.836 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 603 KiB/s wr, 75 op/s
Feb 28 05:46:48 np0005634017 nova_compute[243452]: 2026-02-28 10:46:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:48 np0005634017 nova_compute[243452]: 2026-02-28 10:46:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:46:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 28 05:46:49 np0005634017 nova_compute[243452]: 2026-02-28 10:46:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:50 np0005634017 nova_compute[243452]: 2026-02-28 10:46:50.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:50 np0005634017 nova_compute[243452]: 2026-02-28 10:46:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:50Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:c9:6d 10.100.0.9
Feb 28 05:46:50 np0005634017 ovn_controller[146846]: 2026-02-28T10:46:50Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:c9:6d 10.100.0.9
Feb 28 05:46:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 292 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 891 KiB/s wr, 103 op/s
Feb 28 05:46:51 np0005634017 nova_compute[243452]: 2026-02-28 10:46:51.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Feb 28 05:46:53 np0005634017 nova_compute[243452]: 2026-02-28 10:46:53.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:53 np0005634017 nova_compute[243452]: 2026-02-28 10:46:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:53 np0005634017 nova_compute[243452]: 2026-02-28 10:46:53.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:46:53 np0005634017 nova_compute[243452]: 2026-02-28 10:46:53.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:46:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:54 np0005634017 nova_compute[243452]: 2026-02-28 10:46:54.213 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:46:54 np0005634017 nova_compute[243452]: 2026-02-28 10:46:54.214 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:46:54 np0005634017 nova_compute[243452]: 2026-02-28 10:46:54.214 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:46:54 np0005634017 nova_compute[243452]: 2026-02-28 10:46:54.214 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:46:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Feb 28 05:46:55 np0005634017 nova_compute[243452]: 2026-02-28 10:46:55.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:56 np0005634017 podman[380019]: 2026-02-28 10:46:56.139829156 +0000 UTC m=+0.070157688 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:46:56 np0005634017 podman[380018]: 2026-02-28 10:46:56.18635497 +0000 UTC m=+0.117429013 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:46:56 np0005634017 nova_compute[243452]: 2026-02-28 10:46:56.551 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:46:56 np0005634017 nova_compute[243452]: 2026-02-28 10:46:56.566 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:46:56 np0005634017 nova_compute[243452]: 2026-02-28 10:46:56.567 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:46:56 np0005634017 nova_compute[243452]: 2026-02-28 10:46:56.568 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:56 np0005634017 nova_compute[243452]: 2026-02-28 10:46:56.568 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:56 np0005634017 nova_compute[243452]: 2026-02-28 10:46:56.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:46:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 05:46:57 np0005634017 nova_compute[243452]: 2026-02-28 10:46:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:46:57 np0005634017 nova_compute[243452]: 2026-02-28 10:46:57.582 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:57 np0005634017 nova_compute[243452]: 2026-02-28 10:46:57.583 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:57 np0005634017 nova_compute[243452]: 2026-02-28 10:46:57.583 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:57 np0005634017 nova_compute[243452]: 2026-02-28 10:46:57.584 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:46:57 np0005634017 nova_compute[243452]: 2026-02-28 10:46:57.585 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:57.888 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:57.888 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:46:57.889 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:46:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:46:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970508270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.147 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.245 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.246 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.252 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.252 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.494 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3197MB free_disk=59.89634436648339GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.496 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.496 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.669 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b6b6d2e0-12e8-4804-a192-da4e2444f20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.670 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a4475d45-f11e-4b8c-a118-5b4a347c2506 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.670 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.670 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:46:58 np0005634017 nova_compute[243452]: 2026-02-28 10:46:58.808 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:46:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:46:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 05:46:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:46:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873521596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:46:59 np0005634017 nova_compute[243452]: 2026-02-28 10:46:59.600 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:46:59 np0005634017 nova_compute[243452]: 2026-02-28 10:46:59.605 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG nova.compute.manager [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG nova.compute.manager [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing instance network info cache due to event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG oslo_concurrency.lockutils [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG oslo_concurrency.lockutils [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.241 243456 DEBUG nova.network.neutron [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.242 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:47:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.595 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.595 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.603 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.603 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.603 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.604 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.604 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.605 243456 INFO nova.compute.manager [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Terminating instance#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.606 243456 DEBUG nova.compute.manager [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 kernel: tap7b3c6111-c4 (unregistering): left promiscuous mode
Feb 28 05:47:00 np0005634017 NetworkManager[49805]: <info>  [1772275620.6583] device (tap7b3c6111-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:00Z|01583|binding|INFO|Releasing lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 from this chassis (sb_readonly=0)
Feb 28 05:47:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:00Z|01584|binding|INFO|Setting lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 down in Southbound
Feb 28 05:47:00 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:00Z|01585|binding|INFO|Removing iface tap7b3c6111-c4 ovn-installed in OVS
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.681 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], port_security=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fec4:c96d/64', 'neutron:device_id': 'a4475d45-f11e-4b8c-a118-5b4a347c2506', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7b3c6111-c4ec-40c0-94eb-725c085f9600) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.685 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3c6111-c4ec-40c0-94eb-725c085f9600 in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 unbound from our chassis#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.687 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf6d125-3e9a-40be-b7d7-68719005c3b1#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.706 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24c15f21-eb6b-4f93-bcfb-cb905485410c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:00 np0005634017 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Feb 28 05:47:00 np0005634017 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Consumed 12.466s CPU time.
Feb 28 05:47:00 np0005634017 systemd-machined[209480]: Machine qemu-184-instance-00000097 terminated.
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6382a3-3984-4602-9443-c80e0ab26295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.740 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[499b19f4-853a-44fb-89df-f0ef4d192e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.769 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[414b9b3b-1a42-4c5e-8a3b-64397e294642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[667e8e64-880b-443d-941f-07b51a1b3654]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380118, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.815 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90dd9e86-b8af-49ee-9c96-d2feffd80678]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707061, 'tstamp': 707061}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380119, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707064, 'tstamp': 707064}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380119, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 NetworkManager[49805]: <info>  [1772275620.8261] manager: (tap7b3c6111-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.831 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf6d125-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.832 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.833 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf6d125-30, col_values=(('external_ids', {'iface-id': '74ab365e-66c5-40e6-8916-4199b68ab2e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:00 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.834 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.845 243456 INFO nova.virt.libvirt.driver [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance destroyed successfully.#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.846 243456 DEBUG nova.objects.instance [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid a4475d45-f11e-4b8c-a118-5b4a347c2506 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.860 243456 DEBUG nova.virt.libvirt.vif [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:46:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1318971108',display_name='tempest-TestGettingAddress-server-1318971108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1318971108',id=151,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:46:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-4j40xw8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:46:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=a4475d45-f11e-4b8c-a118-5b4a347c2506,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.861 243456 DEBUG nova.network.os_vif_util [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.862 243456 DEBUG nova.network.os_vif_util [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.862 243456 DEBUG os_vif [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.865 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b3c6111-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.868 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:00 np0005634017 nova_compute[243452]: 2026-02-28 10:47:00.872 243456 INFO os_vif [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4')#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.204 243456 INFO nova.virt.libvirt.driver [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deleting instance files /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506_del#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.205 243456 INFO nova.virt.libvirt.driver [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deletion of /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506_del complete#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.268 243456 INFO nova.compute.manager [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.269 243456 DEBUG oslo.service.loopingcall [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.269 243456 DEBUG nova.compute.manager [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.269 243456 DEBUG nova.network.neutron [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:47:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.434 243456 DEBUG nova.compute.manager [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-unplugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.434 243456 DEBUG oslo_concurrency.lockutils [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.434 243456 DEBUG oslo_concurrency.lockutils [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.435 243456 DEBUG oslo_concurrency.lockutils [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.435 243456 DEBUG nova.compute.manager [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] No waiting events found dispatching network-vif-unplugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.436 243456 DEBUG nova.compute.manager [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-unplugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.806 243456 DEBUG nova.network.neutron [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.828 243456 INFO nova.compute.manager [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 0.56 seconds to deallocate network for instance.#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.878 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.878 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:01 np0005634017 nova_compute[243452]: 2026-02-28 10:47:01.950 243456 DEBUG oslo_concurrency.processutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.281 243456 DEBUG nova.network.neutron [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updated VIF entry in instance network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.282 243456 DEBUG nova.network.neutron [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.304 243456 DEBUG oslo_concurrency.lockutils [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:47:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:47:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154103194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.524 243456 DEBUG oslo_concurrency.processutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.531 243456 DEBUG nova.compute.provider_tree [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.547 243456 DEBUG nova.scheduler.client.report [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.569 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.595 243456 INFO nova.scheduler.client.report [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance a4475d45-f11e-4b8c-a118-5b4a347c2506#033[00m
Feb 28 05:47:02 np0005634017 nova_compute[243452]: 2026-02-28 10:47:02.685 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 272 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 1.3 MiB/s wr, 37 op/s
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.544 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.545 243456 DEBUG oslo_concurrency.lockutils [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.545 243456 DEBUG oslo_concurrency.lockutils [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.546 243456 DEBUG oslo_concurrency.lockutils [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.546 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] No waiting events found dispatching network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.547 243456 WARNING nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received unexpected event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.547 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-deleted-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.547 243456 INFO nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Neutron deleted interface 7b3c6111-c4ec-40c0-94eb-725c085f9600; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.548 243456 DEBUG nova.network.neutron [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 28 05:47:03 np0005634017 nova_compute[243452]: 2026-02-28 10:47:03.550 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Detach interface failed, port_id=7b3c6111-c4ec-40c0-94eb-725c085f9600, reason: Instance a4475d45-f11e-4b8c-a118-5b4a347c2506 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:47:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG nova.compute.manager [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG nova.compute.manager [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing instance network info cache due to event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG oslo_concurrency.lockutils [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG oslo_concurrency.lockutils [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.644 243456 DEBUG nova.network.neutron [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.698 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.700 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.701 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.701 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.702 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.705 243456 INFO nova.compute.manager [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Terminating instance#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.707 243456 DEBUG nova.compute.manager [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:47:04 np0005634017 kernel: tap194124fb-52 (unregistering): left promiscuous mode
Feb 28 05:47:04 np0005634017 NetworkManager[49805]: <info>  [1772275624.7536] device (tap194124fb-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:47:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:04Z|01586|binding|INFO|Releasing lport 194124fb-5288-4359-ab42-cd28d0ec06bc from this chassis (sb_readonly=0)
Feb 28 05:47:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:04Z|01587|binding|INFO|Setting lport 194124fb-5288-4359-ab42-cd28d0ec06bc down in Southbound
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:04Z|01588|binding|INFO|Removing iface tap194124fb-52 ovn-installed in OVS
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.764 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], port_security=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fee5:97c/64', 'neutron:device_id': 'b6b6d2e0-12e8-4804-a192-da4e2444f20e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=194124fb-5288-4359-ab42-cd28d0ec06bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:47:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.765 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 194124fb-5288-4359-ab42-cd28d0ec06bc in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 unbound from our chassis#033[00m
Feb 28 05:47:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.766 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network daf6d125-3e9a-40be-b7d7-68719005c3b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:47:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.767 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0984a0f-bac3-4459-839b-930f09e508c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.767 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 namespace which is not needed anymore#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:04 np0005634017 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Deactivated successfully.
Feb 28 05:47:04 np0005634017 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Consumed 14.393s CPU time.
Feb 28 05:47:04 np0005634017 systemd-machined[209480]: Machine qemu-183-instance-00000096 terminated.
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.955 243456 INFO nova.virt.libvirt.driver [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance destroyed successfully.#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.956 243456 DEBUG nova.objects.instance [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:47:04 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : haproxy version is 2.8.14-c23fe91
Feb 28 05:47:04 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : path to executable is /usr/sbin/haproxy
Feb 28 05:47:04 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [WARNING]  (378956) : Exiting Master process...
Feb 28 05:47:04 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [WARNING]  (378956) : Exiting Master process...
Feb 28 05:47:04 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [ALERT]    (378956) : Current worker (378958) exited with code 143 (Terminated)
Feb 28 05:47:04 np0005634017 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [WARNING]  (378956) : All workers exited. Exiting... (0)
Feb 28 05:47:04 np0005634017 systemd[1]: libpod-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66.scope: Deactivated successfully.
Feb 28 05:47:04 np0005634017 podman[380194]: 2026-02-28 10:47:04.970081604 +0000 UTC m=+0.072750581 container died 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.974 243456 DEBUG nova.virt.libvirt.vif [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:45:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-760627914',display_name='tempest-TestGettingAddress-server-760627914',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-760627914',id=150,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:46:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7spaod71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:46:03Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b6b6d2e0-12e8-4804-a192-da4e2444f20e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.975 243456 DEBUG nova.network.os_vif_util [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.976 243456 DEBUG nova.network.os_vif_util [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.976 243456 DEBUG os_vif [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap194124fb-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:04 np0005634017 nova_compute[243452]: 2026-02-28 10:47:04.984 243456 INFO os_vif [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52')#033[00m
Feb 28 05:47:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66-userdata-shm.mount: Deactivated successfully.
Feb 28 05:47:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9fcf45e9da452fe6f31772cee126479285411300a0844b62ff3dcab6d823ade5-merged.mount: Deactivated successfully.
Feb 28 05:47:05 np0005634017 podman[380194]: 2026-02-28 10:47:05.02227222 +0000 UTC m=+0.124941177 container cleanup 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:47:05 np0005634017 systemd[1]: libpod-conmon-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66.scope: Deactivated successfully.
Feb 28 05:47:05 np0005634017 podman[380247]: 2026-02-28 10:47:05.116126141 +0000 UTC m=+0.069510409 container remove 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.122 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b87cf96-0fdb-4c78-bcd7-f1c6d8927c34]: (4, ('Sat Feb 28 10:47:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 (5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66)\n5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66\nSat Feb 28 10:47:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 (5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66)\n5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20a4836d-4e62-42f9-8bf0-d4b06a00d441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.125 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:05 np0005634017 kernel: tapdaf6d125-30: left promiscuous mode
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[115bb2ba-6b8f-48d2-8a9b-fe660fe60cc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.150 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[36482e39-1a74-431f-8e32-f6217b1f81f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab0b82b-90eb-447a-a604-0ebb59a9566e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04867f1d-83cf-4e19-8f00-6f26b119503a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707043, 'reachable_time': 23185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380265, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.166 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:47:05 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.167 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[68b95509-ff82-47e4-b565-a7db2ae87033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:05 np0005634017 systemd[1]: run-netns-ovnmeta\x2ddaf6d125\x2d3e9a\x2d40be\x2db7d7\x2d68719005c3b1.mount: Deactivated successfully.
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.303 243456 INFO nova.virt.libvirt.driver [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deleting instance files /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e_del#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.304 243456 INFO nova.virt.libvirt.driver [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deletion of /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e_del complete#033[00m
Feb 28 05:47:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 258 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 19 KiB/s wr, 29 op/s
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.370 243456 INFO nova.compute.manager [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.371 243456 DEBUG oslo.service.loopingcall [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.372 243456 DEBUG nova.compute.manager [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.372 243456 DEBUG nova.network.neutron [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.636 243456 DEBUG nova.compute.manager [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-unplugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.637 243456 DEBUG oslo_concurrency.lockutils [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.638 243456 DEBUG oslo_concurrency.lockutils [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.638 243456 DEBUG oslo_concurrency.lockutils [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.639 243456 DEBUG nova.compute.manager [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] No waiting events found dispatching network-vif-unplugged-194124fb-5288-4359-ab42-cd28d0ec06bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:47:05 np0005634017 nova_compute[243452]: 2026-02-28 10:47:05.640 243456 DEBUG nova.compute.manager [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-unplugged-194124fb-5288-4359-ab42-cd28d0ec06bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.489 243456 DEBUG nova.network.neutron [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated VIF entry in instance network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.491 243456 DEBUG nova.network.neutron [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.510 243456 DEBUG oslo_concurrency.lockutils [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.605 243456 DEBUG nova.network.neutron [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.624 243456 INFO nova.compute.manager [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 1.25 seconds to deallocate network for instance.#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.669 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.669 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:06 np0005634017 nova_compute[243452]: 2026-02-28 10:47:06.715 243456 DEBUG oslo_concurrency.processutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:47:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297234911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:47:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.322 243456 DEBUG oslo_concurrency.processutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.329 243456 DEBUG nova.compute.provider_tree [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.354 243456 DEBUG nova.scheduler.client.report [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.381 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.403 243456 INFO nova.scheduler.client.report [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance b6b6d2e0-12e8-4804-a192-da4e2444f20e#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.468 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.703 243456 DEBUG nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.703 243456 DEBUG oslo_concurrency.lockutils [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.703 243456 DEBUG oslo_concurrency.lockutils [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.704 243456 DEBUG oslo_concurrency.lockutils [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.704 243456 DEBUG nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] No waiting events found dispatching network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.704 243456 WARNING nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received unexpected event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc for instance with vm_state deleted and task_state None.#033[00m
Feb 28 05:47:07 np0005634017 nova_compute[243452]: 2026-02-28 10:47:07.705 243456 DEBUG nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-deleted-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 40 op/s
Feb 28 05:47:09 np0005634017 nova_compute[243452]: 2026-02-28 10:47:09.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:10 np0005634017 nova_compute[243452]: 2026-02-28 10:47:10.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Feb 28 05:47:11 np0005634017 nova_compute[243452]: 2026-02-28 10:47:11.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:11 np0005634017 nova_compute[243452]: 2026-02-28 10:47:11.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:11 np0005634017 nova_compute[243452]: 2026-02-28 10:47:11.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Feb 28 05:47:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:15 np0005634017 nova_compute[243452]: 2026-02-28 10:47:15.022 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 4.7 KiB/s wr, 50 op/s
Feb 28 05:47:15 np0005634017 nova_compute[243452]: 2026-02-28 10:47:15.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:15 np0005634017 nova_compute[243452]: 2026-02-28 10:47:15.844 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275620.842255, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:47:15 np0005634017 nova_compute[243452]: 2026-02-28 10:47:15.844 243456 INFO nova.compute.manager [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:47:15 np0005634017 nova_compute[243452]: 2026-02-28 10:47:15.866 243456 DEBUG nova.compute.manager [None req-e51bca5d-5e35-477a-9b3a-3d0b3af7b74a - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:47:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 29 op/s
Feb 28 05:47:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.6 KiB/s wr, 26 op/s
Feb 28 05:47:19 np0005634017 nova_compute[243452]: 2026-02-28 10:47:19.952 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275624.950902, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:47:19 np0005634017 nova_compute[243452]: 2026-02-28 10:47:19.953 243456 INFO nova.compute.manager [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:47:19 np0005634017 nova_compute[243452]: 2026-02-28 10:47:19.994 243456 DEBUG nova.compute.manager [None req-213b2414-1b8a-49fc-bba5-a1fb71d4a8a3 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:47:20 np0005634017 nova_compute[243452]: 2026-02-28 10:47:20.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:20 np0005634017 nova_compute[243452]: 2026-02-28 10:47:20.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Feb 28 05:47:22 np0005634017 nova_compute[243452]: 2026-02-28 10:47:22.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:22 np0005634017 nova_compute[243452]: 2026-02-28 10:47:22.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:47:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:24 np0005634017 nova_compute[243452]: 2026-02-28 10:47:24.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:24 np0005634017 nova_compute[243452]: 2026-02-28 10:47:24.335 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:47:24 np0005634017 nova_compute[243452]: 2026-02-28 10:47:24.354 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:47:25 np0005634017 nova_compute[243452]: 2026-02-28 10:47:25.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:25 np0005634017 nova_compute[243452]: 2026-02-28 10:47:25.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:27 np0005634017 podman[380294]: 2026-02-28 10:47:27.141935301 +0000 UTC m=+0.073984867 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:47:27 np0005634017 podman[380293]: 2026-02-28 10:47:27.17984791 +0000 UTC m=+0.112595416 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 05:47:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:47:29
Feb 28 05:47:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:47:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:47:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta']
Feb 28 05:47:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:47:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:30 np0005634017 nova_compute[243452]: 2026-02-28 10:47:30.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:47:30 np0005634017 nova_compute[243452]: 2026-02-28 10:47:30.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:47:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:47:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:33.258 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:47:33 np0005634017 nova_compute[243452]: 2026-02-28 10:47:33.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:33.259 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:47:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:35 np0005634017 nova_compute[243452]: 2026-02-28 10:47:35.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:35 np0005634017 nova_compute[243452]: 2026-02-28 10:47:35.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:40 np0005634017 nova_compute[243452]: 2026-02-28 10:47:40.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:40 np0005634017 nova_compute[243452]: 2026-02-28 10:47:40.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:41.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4568978923368619e-05 of space, bias 1.0, pg target 0.004370693677010586 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939168048353643 of space, bias 1.0, pg target 0.7481750414506093 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.24958909274374e-07 of space, bias 4.0, pg target 0.0008699506911292489 quantized to 16 (current 16)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:47:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.541 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.542 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.560 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.648 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.649 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.662 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.663 243456 INFO nova.compute.claims [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:47:41 np0005634017 nova_compute[243452]: 2026-02-28 10:47:41.833 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095923840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.372 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.382 243456 DEBUG nova.compute.provider_tree [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.403 243456 DEBUG nova.scheduler.client.report [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.429 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.431 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.478 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.478 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.497 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.513 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.597 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.599 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.599 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating image(s)#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.625 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.651 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.680 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.684 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:42 np0005634017 podman[380547]: 2026-02-28 10:47:42.735895242 +0000 UTC m=+0.057591590 container create 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.750 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.751 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.751 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.752 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:42 np0005634017 systemd[1]: Started libpod-conmon-12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6.scope.
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.772 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:42 np0005634017 nova_compute[243452]: 2026-02-28 10:47:42.776 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d47f4919-0816-4363-b2eb-fa6580859e88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:42 np0005634017 podman[380547]: 2026-02-28 10:47:42.701941616 +0000 UTC m=+0.023637984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:47:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:42 np0005634017 podman[380547]: 2026-02-28 10:47:42.843560557 +0000 UTC m=+0.165256915 container init 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:47:42 np0005634017 podman[380547]: 2026-02-28 10:47:42.851172493 +0000 UTC m=+0.172868831 container start 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:47:42 np0005634017 nervous_germain[380592]: 167 167
Feb 28 05:47:42 np0005634017 systemd[1]: libpod-12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6.scope: Deactivated successfully.
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:47:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:47:42 np0005634017 podman[380547]: 2026-02-28 10:47:42.889869165 +0000 UTC m=+0.211565503 container attach 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:47:42 np0005634017 podman[380547]: 2026-02-28 10:47:42.890199974 +0000 UTC m=+0.211896312 container died 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:47:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6971a519c789758c877095a2786f7e5f62f7674400f4cf4b7a307866990cc86e-merged.mount: Deactivated successfully.
Feb 28 05:47:43 np0005634017 podman[380547]: 2026-02-28 10:47:43.094105038 +0000 UTC m=+0.415801416 container remove 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 28 05:47:43 np0005634017 systemd[1]: libpod-conmon-12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6.scope: Deactivated successfully.
Feb 28 05:47:43 np0005634017 podman[380634]: 2026-02-28 10:47:43.230570221 +0000 UTC m=+0.039443653 container create a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.229 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d47f4919-0816-4363-b2eb-fa6580859e88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:43 np0005634017 systemd[1]: Started libpod-conmon-a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a.scope.
Feb 28 05:47:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.297 243456 DEBUG nova.policy [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7efc7418904f44aa8c8c9c3e06ac552b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.309 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] resizing rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 28 05:47:43 np0005634017 podman[380634]: 2026-02-28 10:47:43.311774433 +0000 UTC m=+0.120647885 container init a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 05:47:43 np0005634017 podman[380634]: 2026-02-28 10:47:43.215272486 +0000 UTC m=+0.024145918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:47:43 np0005634017 podman[380634]: 2026-02-28 10:47:43.321612143 +0000 UTC m=+0.130485555 container start a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:47:43 np0005634017 podman[380634]: 2026-02-28 10:47:43.324964328 +0000 UTC m=+0.133837790 container attach a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:47:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.382 243456 DEBUG nova.objects.instance [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'migration_context' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.402 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.403 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Ensure instance console log exists: /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.403 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.403 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:43 np0005634017 nova_compute[243452]: 2026-02-28 10:47:43.404 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:43 np0005634017 practical_knuth[380668]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:47:43 np0005634017 practical_knuth[380668]: --> All data devices are unavailable
Feb 28 05:47:43 np0005634017 systemd[1]: libpod-a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a.scope: Deactivated successfully.
Feb 28 05:47:43 np0005634017 podman[380742]: 2026-02-28 10:47:43.844769354 +0000 UTC m=+0.028349888 container died a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:47:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3-merged.mount: Deactivated successfully.
Feb 28 05:47:43 np0005634017 podman[380742]: 2026-02-28 10:47:43.894220581 +0000 UTC m=+0.077801065 container remove a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:47:43 np0005634017 systemd[1]: libpod-conmon-a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a.scope: Deactivated successfully.
Feb 28 05:47:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.291417777 +0000 UTC m=+0.032997290 container create 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:47:44 np0005634017 systemd[1]: Started libpod-conmon-7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4.scope.
Feb 28 05:47:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.364044794 +0000 UTC m=+0.105624327 container init 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.368734348 +0000 UTC m=+0.110313861 container start 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:47:44 np0005634017 brave_payne[380837]: 167 167
Feb 28 05:47:44 np0005634017 systemd[1]: libpod-7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4.scope: Deactivated successfully.
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.373332719 +0000 UTC m=+0.114912232 container attach 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.373544225 +0000 UTC m=+0.115123738 container died 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.27782905 +0000 UTC m=+0.019408563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:47:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-82134f1a858f243fe4d866c886a713ca8c8a118d78b88252257b216dfc2c9c33-merged.mount: Deactivated successfully.
Feb 28 05:47:44 np0005634017 podman[380820]: 2026-02-28 10:47:44.408552521 +0000 UTC m=+0.150132034 container remove 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:47:44 np0005634017 systemd[1]: libpod-conmon-7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4.scope: Deactivated successfully.
Feb 28 05:47:44 np0005634017 podman[380861]: 2026-02-28 10:47:44.523848433 +0000 UTC m=+0.036071668 container create 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:47:44 np0005634017 systemd[1]: Started libpod-conmon-2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae.scope.
Feb 28 05:47:44 np0005634017 podman[380861]: 2026-02-28 10:47:44.508824745 +0000 UTC m=+0.021048000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:47:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:44 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:44 np0005634017 podman[380861]: 2026-02-28 10:47:44.63686914 +0000 UTC m=+0.149092395 container init 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:47:44 np0005634017 podman[380861]: 2026-02-28 10:47:44.652762812 +0000 UTC m=+0.164986047 container start 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:47:44 np0005634017 podman[380861]: 2026-02-28 10:47:44.657133057 +0000 UTC m=+0.169356322 container attach 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]: {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:    "0": [
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:        {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "devices": [
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "/dev/loop3"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            ],
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_name": "ceph_lv0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_size": "21470642176",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "name": "ceph_lv0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "tags": {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cluster_name": "ceph",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.crush_device_class": "",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.encrypted": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.objectstore": "bluestore",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osd_id": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.type": "block",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.vdo": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.with_tpm": "0"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            },
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "type": "block",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "vg_name": "ceph_vg0"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:        }
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:    ],
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:    "1": [
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:        {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "devices": [
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "/dev/loop4"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            ],
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_name": "ceph_lv1",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_size": "21470642176",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "name": "ceph_lv1",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "tags": {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cluster_name": "ceph",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.crush_device_class": "",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.encrypted": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.objectstore": "bluestore",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osd_id": "1",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.type": "block",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.vdo": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.with_tpm": "0"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            },
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "type": "block",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "vg_name": "ceph_vg1"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:        }
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:    ],
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:    "2": [
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:        {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "devices": [
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "/dev/loop5"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            ],
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_name": "ceph_lv2",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_size": "21470642176",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "name": "ceph_lv2",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "tags": {
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.cluster_name": "ceph",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.crush_device_class": "",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.encrypted": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.objectstore": "bluestore",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osd_id": "2",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.type": "block",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.vdo": "0",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:                "ceph.with_tpm": "0"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            },
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "type": "block",
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:            "vg_name": "ceph_vg2"
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:        }
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]:    ]
Feb 28 05:47:44 np0005634017 nifty_euclid[380879]: }
Feb 28 05:47:45 np0005634017 systemd[1]: libpod-2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae.scope: Deactivated successfully.
Feb 28 05:47:45 np0005634017 podman[380861]: 2026-02-28 10:47:45.002448416 +0000 UTC m=+0.514671731 container died 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:47:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97-merged.mount: Deactivated successfully.
Feb 28 05:47:45 np0005634017 podman[380861]: 2026-02-28 10:47:45.054886058 +0000 UTC m=+0.567109303 container remove 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:47:45 np0005634017 systemd[1]: libpod-conmon-2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae.scope: Deactivated successfully.
Feb 28 05:47:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 998 KiB/s wr, 4 op/s
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.299 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Successfully created port: 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:47:45 np0005634017 nova_compute[243452]: 2026-02-28 10:47:45.847 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.20588506 +0000 UTC m=+0.038277380 container create 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:47:46 np0005634017 systemd[1]: Started libpod-conmon-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope.
Feb 28 05:47:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.265791345 +0000 UTC m=+0.098183675 container init 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.270820578 +0000 UTC m=+0.103212908 container start 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.273566407 +0000 UTC m=+0.105958737 container attach 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 05:47:46 np0005634017 determined_cannon[380981]: 167 167
Feb 28 05:47:46 np0005634017 systemd[1]: libpod-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope: Deactivated successfully.
Feb 28 05:47:46 np0005634017 conmon[380981]: conmon 4dc5d8a17530969eaa99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope/container/memory.events
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.276675805 +0000 UTC m=+0.109068135 container died 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.187231989 +0000 UTC m=+0.019624349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:47:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6bfa9e997f8fe54862bcaee426d0fdc9e367b56d59afbc63292133e1945d77c8-merged.mount: Deactivated successfully.
Feb 28 05:47:46 np0005634017 podman[380964]: 2026-02-28 10:47:46.306656128 +0000 UTC m=+0.139048458 container remove 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:47:46 np0005634017 systemd[1]: libpod-conmon-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope: Deactivated successfully.
Feb 28 05:47:46 np0005634017 podman[381004]: 2026-02-28 10:47:46.493333792 +0000 UTC m=+0.038389664 container create 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:47:46 np0005634017 systemd[1]: Started libpod-conmon-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope.
Feb 28 05:47:46 np0005634017 nova_compute[243452]: 2026-02-28 10:47:46.546 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Successfully updated port: 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 28 05:47:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:46 np0005634017 podman[381004]: 2026-02-28 10:47:46.476912255 +0000 UTC m=+0.021968157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:47:46 np0005634017 podman[381004]: 2026-02-28 10:47:46.584741444 +0000 UTC m=+0.129797316 container init 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:47:46 np0005634017 podman[381004]: 2026-02-28 10:47:46.589310954 +0000 UTC m=+0.134366866 container start 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:47:46 np0005634017 podman[381004]: 2026-02-28 10:47:46.593901595 +0000 UTC m=+0.138957497 container attach 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:47:46 np0005634017 nova_compute[243452]: 2026-02-28 10:47:46.780 243456 DEBUG nova.compute.manager [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:46 np0005634017 nova_compute[243452]: 2026-02-28 10:47:46.780 243456 DEBUG nova.compute.manager [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:47:46 np0005634017 nova_compute[243452]: 2026-02-28 10:47:46.780 243456 DEBUG oslo_concurrency.lockutils [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:47:46 np0005634017 nova_compute[243452]: 2026-02-28 10:47:46.781 243456 DEBUG oslo_concurrency.lockutils [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:47:46 np0005634017 nova_compute[243452]: 2026-02-28 10:47:46.781 243456 DEBUG nova.network.neutron [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:47:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 187 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 MiB/s wr, 5 op/s
Feb 28 05:47:47 np0005634017 nova_compute[243452]: 2026-02-28 10:47:47.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:47 np0005634017 lvm[381098]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:47:47 np0005634017 lvm[381098]: VG ceph_vg0 finished
Feb 28 05:47:47 np0005634017 lvm[381099]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:47:47 np0005634017 lvm[381099]: VG ceph_vg1 finished
Feb 28 05:47:47 np0005634017 lvm[381101]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:47:47 np0005634017 lvm[381101]: VG ceph_vg2 finished
Feb 28 05:47:47 np0005634017 nova_compute[243452]: 2026-02-28 10:47:47.477 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413801720' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413801720' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:47:47 np0005634017 elated_golick[381020]: {}
Feb 28 05:47:47 np0005634017 systemd[1]: libpod-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope: Deactivated successfully.
Feb 28 05:47:47 np0005634017 systemd[1]: libpod-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope: Consumed 1.453s CPU time.
Feb 28 05:47:47 np0005634017 podman[381004]: 2026-02-28 10:47:47.555187135 +0000 UTC m=+1.100243007 container died 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:47:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37-merged.mount: Deactivated successfully.
Feb 28 05:47:47 np0005634017 podman[381004]: 2026-02-28 10:47:47.593876217 +0000 UTC m=+1.138932089 container remove 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:47:47 np0005634017 nova_compute[243452]: 2026-02-28 10:47:47.609 243456 DEBUG nova.network.neutron [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:47:47 np0005634017 systemd[1]: libpod-conmon-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope: Deactivated successfully.
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:47:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:47:48 np0005634017 nova_compute[243452]: 2026-02-28 10:47:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:48 np0005634017 nova_compute[243452]: 2026-02-28 10:47:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:47:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:49 np0005634017 nova_compute[243452]: 2026-02-28 10:47:49.215 243456 DEBUG nova.network.neutron [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:49 np0005634017 nova_compute[243452]: 2026-02-28 10:47:49.229 243456 DEBUG oslo_concurrency.lockutils [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:47:49 np0005634017 nova_compute[243452]: 2026-02-28 10:47:49.229 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:47:49 np0005634017 nova_compute[243452]: 2026-02-28 10:47:49.230 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:47:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:47:49 np0005634017 nova_compute[243452]: 2026-02-28 10:47:49.374 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.518 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.543 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.544 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance network_info: |[{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.548 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start _get_guest_xml network_info=[{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.555 243456 WARNING nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.563 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.565 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.569 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.570 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.570 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.571 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.572 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.572 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.573 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.573 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.573 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.574 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.574 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.575 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.575 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.576 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.580 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:50 np0005634017 nova_compute[243452]: 2026-02-28 10:47:50.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:47:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428197361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.162 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.190 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.195 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:47:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:47:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361919348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.743 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.745 243456 DEBUG nova.virt.libvirt.vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:47:42Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.745 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.746 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.748 243456 DEBUG nova.objects.instance [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'pci_devices' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.765 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <uuid>d47f4919-0816-4363-b2eb-fa6580859e88</uuid>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <name>instance-00000098</name>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestShelveInstance-server-373381639</nova:name>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:47:50</nova:creationTime>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:user uuid="7efc7418904f44aa8c8c9c3e06ac552b">tempest-TestShelveInstance-1186988285-project-member</nova:user>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:project uuid="d7baef4f72e742e8aa7530d7a586ed2b">tempest-TestShelveInstance-1186988285</nova:project>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <nova:port uuid="12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <entry name="serial">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <entry name="uuid">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk.config">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b5:24:ce"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <target dev="tap12bbec3d-25"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log" append="off"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:47:51 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:47:51 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:47:51 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:47:51 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.766 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Preparing to wait for external event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.767 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.767 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.767 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.768 243456 DEBUG nova.virt.libvirt.vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:47:42Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.768 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.769 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.769 243456 DEBUG os_vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.770 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.771 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.775 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12bbec3d-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12bbec3d-25, col_values=(('external_ids', {'iface-id': '12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:24:ce', 'vm-uuid': 'd47f4919-0816-4363-b2eb-fa6580859e88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:51 np0005634017 NetworkManager[49805]: <info>  [1772275671.8210] manager: (tap12bbec3d-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/665)
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.829 243456 INFO os_vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.873 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.873 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.873 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No VIF found with MAC fa:16:3e:b5:24:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.874 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Using config drive#033[00m
Feb 28 05:47:51 np0005634017 nova_compute[243452]: 2026-02-28 10:47:51.895 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.204 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating config drive at /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.207 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp149tadvv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.339 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp149tadvv" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.362 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.365 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.526 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.528 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting local config drive /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config because it was imported into RBD.#033[00m
Feb 28 05:47:52 np0005634017 kernel: tap12bbec3d-25: entered promiscuous mode
Feb 28 05:47:52 np0005634017 NetworkManager[49805]: <info>  [1772275672.5946] manager: (tap12bbec3d-25): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Feb 28 05:47:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:52Z|01589|binding|INFO|Claiming lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for this chassis.
Feb 28 05:47:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:52Z|01590|binding|INFO|12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4: Claiming fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.598 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.610 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.612 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab bound to our chassis#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.614 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 978ebc43-7003-4100-92ba-e083df3fe8ab#033[00m
Feb 28 05:47:52 np0005634017 systemd-udevd[381275]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b11eae-8cff-4001-8790-a4d0b61c5644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.628 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap978ebc43-71 in ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:47:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:52Z|01591|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 ovn-installed in OVS
Feb 28 05:47:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:52Z|01592|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 up in Southbound
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.631 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap978ebc43-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.631 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf64da6f-a892-46f2-b375-031d83101ad2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:52 np0005634017 systemd-machined[209480]: New machine qemu-185-instance-00000098.
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[289e3795-044a-4b01-bce1-3136427791ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 NetworkManager[49805]: <info>  [1772275672.6451] device (tap12bbec3d-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:47:52 np0005634017 NetworkManager[49805]: <info>  [1772275672.6460] device (tap12bbec3d-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.646 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0d14eed3-8ae2-4f84-ba5e-1bf6f8e7bc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 systemd[1]: Started Virtual Machine qemu-185-instance-00000098.
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.660 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec5f052-2e0f-49d0-b7e3-bdffae813bc5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.688 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8b194180-5284-4495-a6a2-5c1bb0f2dbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 systemd-udevd[381279]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04c33958-267c-4841-acb8-5c1d66b9eeb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 NetworkManager[49805]: <info>  [1772275672.6956] manager: (tap978ebc43-70): new Veth device (/org/freedesktop/NetworkManager/Devices/667)
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.723 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[93ac0773-da0d-44f4-bda5-5fea566ee2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.725 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[24bea523-9cb4-4b68-9237-2bee2e347861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 NetworkManager[49805]: <info>  [1772275672.7516] device (tap978ebc43-70): carrier: link connected
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.755 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7a375f-f0af-4c2c-ba16-acfb1c94c15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd41312-028d-414f-bb67-bacd129a0357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718003, 'reachable_time': 15333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381308, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.787 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69cef214-de04-4b1a-af34-e9ff218fa90c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8571'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718003, 'tstamp': 718003}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381309, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.804 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[250c24da-188c-4df4-9376-86ed2e2be69c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718003, 'reachable_time': 15333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381310, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.831 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f9030289-804c-40e3-af5d-0aadc580bf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.860 243456 DEBUG nova.compute.manager [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.861 243456 DEBUG oslo_concurrency.lockutils [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.861 243456 DEBUG oslo_concurrency.lockutils [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.862 243456 DEBUG oslo_concurrency.lockutils [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.862 243456 DEBUG nova.compute.manager [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Processing event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.885 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3201b4d-fc26-426c-ab4a-83e07e220482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.886 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap978ebc43-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:52 np0005634017 kernel: tap978ebc43-70: entered promiscuous mode
Feb 28 05:47:52 np0005634017 NetworkManager[49805]: <info>  [1772275672.9299] manager: (tap978ebc43-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.931 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap978ebc43-70, col_values=(('external_ids', {'iface-id': 'a5bdb09f-93f0-411c-9d75-fc368a22a5f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:47:52 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:52Z|01593|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.934 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:47:52 np0005634017 nova_compute[243452]: 2026-02-28 10:47:52.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.939 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfcd257-50c3-4438-b40f-32c2ea8c6df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.940 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:47:52 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.941 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'env', 'PROCESS_TAG=haproxy-978ebc43-7003-4100-92ba-e083df3fe8ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/978ebc43-7003-4100-92ba-e083df3fe8ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:47:53 np0005634017 podman[381348]: 2026-02-28 10:47:53.281813286 +0000 UTC m=+0.049159630 container create 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:53 np0005634017 systemd[1]: Started libpod-conmon-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope.
Feb 28 05:47:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 05:47:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:47:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818e01895170b08d6e90bb8185ddea8a836b8bce4f18e3c5adc25d8b4adae354/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:47:53 np0005634017 podman[381348]: 2026-02-28 10:47:53.252678397 +0000 UTC m=+0.020024521 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.350 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.350 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275673.3504941, d47f4919-0816-4363-b2eb-fa6580859e88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.351 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Started (Lifecycle Event)#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.354 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:47:53 np0005634017 podman[381348]: 2026-02-28 10:47:53.354504355 +0000 UTC m=+0.121850479 container init 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.357 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance spawned successfully.#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.357 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:47:53 np0005634017 podman[381348]: 2026-02-28 10:47:53.358706635 +0000 UTC m=+0.126052739 container start 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 05:47:53 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : New worker (381402) forked
Feb 28 05:47:53 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : Loading success.
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.389 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.394 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.395 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.395 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.396 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.397 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.397 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.449 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.449 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275673.3530557, d47f4919-0816-4363-b2eb-fa6580859e88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.450 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.473 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.477 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275673.3531463, d47f4919-0816-4363-b2eb-fa6580859e88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.477 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.483 243456 INFO nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 10.89 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.483 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.494 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.498 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.522 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.558 243456 INFO nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 11.95 seconds to build instance.#033[00m
Feb 28 05:47:53 np0005634017 nova_compute[243452]: 2026-02-28 10:47:53.578 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.356 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.951 243456 DEBUG nova.compute.manager [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.951 243456 DEBUG oslo_concurrency.lockutils [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.951 243456 DEBUG oslo_concurrency.lockutils [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.952 243456 DEBUG oslo_concurrency.lockutils [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.952 243456 DEBUG nova.compute.manager [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:47:54 np0005634017 nova_compute[243452]: 2026-02-28 10:47:54.952 243456 WARNING nova.compute.manager [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:47:55 np0005634017 nova_compute[243452]: 2026-02-28 10:47:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 05:47:55 np0005634017 nova_compute[243452]: 2026-02-28 10:47:55.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:56 np0005634017 nova_compute[243452]: 2026-02-28 10:47:56.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:47:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 829 KiB/s wr, 79 op/s
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.355 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.355 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:57Z|01594|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.551 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:57 np0005634017 NetworkManager[49805]: <info>  [1772275677.5532] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:57 np0005634017 NetworkManager[49805]: <info>  [1772275677.5553] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Feb 28 05:47:57 np0005634017 ovn_controller[146846]: 2026-02-28T10:47:57Z|01595|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.749 243456 DEBUG nova.compute.manager [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.750 243456 DEBUG nova.compute.manager [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.750 243456 DEBUG oslo_concurrency.lockutils [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.751 243456 DEBUG oslo_concurrency.lockutils [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.751 243456 DEBUG nova.network.neutron [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:47:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:47:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128345171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:47:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:57.889 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:57.890 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:47:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.905 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.964 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:47:57 np0005634017 nova_compute[243452]: 2026-02-28 10:47:57.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 28 05:47:58 np0005634017 podman[381436]: 2026-02-28 10:47:58.001374611 +0000 UTC m=+0.059861845 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:47:58 np0005634017 podman[381435]: 2026-02-28 10:47:58.020364651 +0000 UTC m=+0.078988069 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.127 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.128 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3369MB free_disk=59.966501395218074GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.128 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.129 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.226 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance d47f4919-0816-4363-b2eb-fa6580859e88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.277 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:47:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:47:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100574336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.817 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.822 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.844 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.867 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:47:58 np0005634017 nova_compute[243452]: 2026-02-28 10:47:58.867 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:47:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:47:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 359 KiB/s wr, 95 op/s
Feb 28 05:47:59 np0005634017 nova_compute[243452]: 2026-02-28 10:47:59.706 243456 DEBUG nova.network.neutron [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:47:59 np0005634017 nova_compute[243452]: 2026-02-28 10:47:59.707 243456 DEBUG nova.network.neutron [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:47:59 np0005634017 nova_compute[243452]: 2026-02-28 10:47:59.727 243456 DEBUG oslo_concurrency.lockutils [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:48:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:48:00 np0005634017 nova_compute[243452]: 2026-02-28 10:48:00.853 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:48:01 np0005634017 nova_compute[243452]: 2026-02-28 10:48:01.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:01 np0005634017 nova_compute[243452]: 2026-02-28 10:48:01.863 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 05:48:03 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Feb 28 05:48:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:04Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 05:48:04 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:04Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 05:48:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 206 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 533 KiB/s wr, 82 op/s
Feb 28 05:48:05 np0005634017 nova_compute[243452]: 2026-02-28 10:48:05.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:06 np0005634017 nova_compute[243452]: 2026-02-28 10:48:06.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.6 MiB/s wr, 71 op/s
Feb 28 05:48:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Feb 28 05:48:10 np0005634017 nova_compute[243452]: 2026-02-28 10:48:10.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:48:11 np0005634017 nova_compute[243452]: 2026-02-28 10:48:11.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:12 np0005634017 ceph-osd[87202]: bluestore.MempoolThread fragmentation_score=0.004223 took=0.000069s
Feb 28 05:48:12 np0005634017 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.003738 took=0.000053s
Feb 28 05:48:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.004393 took=0.000057s
Feb 28 05:48:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 05:48:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:14 np0005634017 nova_compute[243452]: 2026-02-28 10:48:14.184 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:14 np0005634017 nova_compute[243452]: 2026-02-28 10:48:14.184 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:14 np0005634017 nova_compute[243452]: 2026-02-28 10:48:14.185 243456 INFO nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Shelving#033[00m
Feb 28 05:48:14 np0005634017 nova_compute[243452]: 2026-02-28 10:48:14.205 243456 DEBUG nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 28 05:48:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Feb 28 05:48:15 np0005634017 nova_compute[243452]: 2026-02-28 10:48:15.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:16 np0005634017 kernel: tap12bbec3d-25 (unregistering): left promiscuous mode
Feb 28 05:48:16 np0005634017 NetworkManager[49805]: <info>  [1772275696.4965] device (tap12bbec3d-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:48:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:16Z|01596|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=0)
Feb 28 05:48:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:16Z|01597|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down in Southbound
Feb 28 05:48:16 np0005634017 nova_compute[243452]: 2026-02-28 10:48:16.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:16 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:16Z|01598|binding|INFO|Removing iface tap12bbec3d-25 ovn-installed in OVS
Feb 28 05:48:16 np0005634017 nova_compute[243452]: 2026-02-28 10:48:16.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.526 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.527 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.528 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c80f07c6-0eab-4d40-84bc-580f0b0540dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.530 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace which is not needed anymore#033[00m
Feb 28 05:48:16 np0005634017 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Deactivated successfully.
Feb 28 05:48:16 np0005634017 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Consumed 12.411s CPU time.
Feb 28 05:48:16 np0005634017 systemd-machined[209480]: Machine qemu-185-instance-00000098 terminated.
Feb 28 05:48:16 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : haproxy version is 2.8.14-c23fe91
Feb 28 05:48:16 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : path to executable is /usr/sbin/haproxy
Feb 28 05:48:16 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [WARNING]  (381400) : Exiting Master process...
Feb 28 05:48:16 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [WARNING]  (381400) : Exiting Master process...
Feb 28 05:48:16 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [ALERT]    (381400) : Current worker (381402) exited with code 143 (Terminated)
Feb 28 05:48:16 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [WARNING]  (381400) : All workers exited. Exiting... (0)
Feb 28 05:48:16 np0005634017 systemd[1]: libpod-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope: Deactivated successfully.
Feb 28 05:48:16 np0005634017 conmon[381396]: conmon 579179baa4938721d005 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope/container/memory.events
Feb 28 05:48:16 np0005634017 podman[381531]: 2026-02-28 10:48:16.674909733 +0000 UTC m=+0.057006224 container died 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:48:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff-userdata-shm.mount: Deactivated successfully.
Feb 28 05:48:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-818e01895170b08d6e90bb8185ddea8a836b8bce4f18e3c5adc25d8b4adae354-merged.mount: Deactivated successfully.
Feb 28 05:48:16 np0005634017 podman[381531]: 2026-02-28 10:48:16.731660258 +0000 UTC m=+0.113756779 container cleanup 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:48:16 np0005634017 systemd[1]: libpod-conmon-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope: Deactivated successfully.
Feb 28 05:48:16 np0005634017 podman[381574]: 2026-02-28 10:48:16.808820394 +0000 UTC m=+0.054694808 container remove 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.814 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b283b79-967a-457d-a4a2-e2404ce09197]: (4, ('Sat Feb 28 10:48:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff)\n579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff\nSat Feb 28 10:48:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff)\n579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ec96db-2688-42f5-b0f5-92a2e76e3f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:16 np0005634017 nova_compute[243452]: 2026-02-28 10:48:16.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:16 np0005634017 kernel: tap978ebc43-70: left promiscuous mode
Feb 28 05:48:16 np0005634017 nova_compute[243452]: 2026-02-28 10:48:16.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:16 np0005634017 nova_compute[243452]: 2026-02-28 10:48:16.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.831 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d3911a2e-802b-40dd-ae98-dc86194e6e08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8ea97a-a471-4c88-b59f-506119aab627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.846 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6de2a6d-ac32-4785-a371-a9b066468086]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[17ca40af-1bdb-44a5-8c5d-0da45af96151]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717997, 'reachable_time': 27699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381595, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.863 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:48:16 np0005634017 systemd[1]: run-netns-ovnmeta\x2d978ebc43\x2d7003\x2d4100\x2d92ba\x2de083df3fe8ab.mount: Deactivated successfully.
Feb 28 05:48:16 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.863 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[638950ad-f2ec-4fc3-9a79-1fe9eb462ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:16 np0005634017 nova_compute[243452]: 2026-02-28 10:48:16.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.225 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance shutdown successfully after 3 seconds.#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.232 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.232 243456 DEBUG nova.objects.instance [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'numa_topology' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.397 243456 DEBUG nova.compute.manager [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.398 243456 DEBUG oslo_concurrency.lockutils [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.398 243456 DEBUG oslo_concurrency.lockutils [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.399 243456 DEBUG oslo_concurrency.lockutils [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.399 243456 DEBUG nova.compute.manager [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.399 243456 WARNING nova.compute.manager [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state shelving.#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.487 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Beginning cold snapshot process#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.640 243456 DEBUG nova.virt.libvirt.imagebackend [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 28 05:48:17 np0005634017 nova_compute[243452]: 2026-02-28 10:48:17.840 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] creating snapshot(b382f622a0414db797cfdbf5ae588102) on rbd image(d47f4919-0816-4363-b2eb-fa6580859e88_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:48:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb 28 05:48:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb 28 05:48:18 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb 28 05:48:18 np0005634017 nova_compute[243452]: 2026-02-28 10:48:18.585 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] cloning vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk@b382f622a0414db797cfdbf5ae588102 to images/4c20447f-84ff-45ca-86c2-e5ad9d598628 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:48:18 np0005634017 nova_compute[243452]: 2026-02-28 10:48:18.704 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] flattening images/4c20447f-84ff-45ca-86c2-e5ad9d598628 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:48:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.219 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] removing snapshot(b382f622a0414db797cfdbf5ae588102) on rbd image(d47f4919-0816-4363-b2eb-fa6580859e88_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 28 05:48:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 47 KiB/s wr, 12 op/s
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.494 243456 DEBUG nova.compute.manager [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.495 243456 DEBUG oslo_concurrency.lockutils [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.496 243456 DEBUG oslo_concurrency.lockutils [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.496 243456 DEBUG oslo_concurrency.lockutils [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.497 243456 DEBUG nova.compute.manager [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.497 243456 WARNING nova.compute.manager [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Feb 28 05:48:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb 28 05:48:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb 28 05:48:19 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb 28 05:48:19 np0005634017 nova_compute[243452]: 2026-02-28 10:48:19.584 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] creating snapshot(snap) on rbd image(4c20447f-84ff-45ca-86c2-e5ad9d598628) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 28 05:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb 28 05:48:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb 28 05:48:20 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb 28 05:48:20 np0005634017 nova_compute[243452]: 2026-02-28 10:48:20.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 103 op/s
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.811 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Snapshot image upload complete#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.812 243456 DEBUG nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.863 243456 INFO nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Shelve offloading#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.877 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.878 243456 DEBUG nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.882 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.883 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:48:21 np0005634017 nova_compute[243452]: 2026-02-28 10:48:21.883 243456 DEBUG nova.network.neutron [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:48:23 np0005634017 nova_compute[243452]: 2026-02-28 10:48:23.256 243456 DEBUG nova.network.neutron [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:23 np0005634017 nova_compute[243452]: 2026-02-28 10:48:23.278 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 151 op/s
Feb 28 05:48:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.329 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.330 243456 DEBUG nova.objects.instance [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'resources' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.342 243456 DEBUG nova.virt.libvirt.vif [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:47:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member',shelved_at='2026-02-28T10:48:21.812417',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4c20447f-84ff-45ca-86c2-e5ad9d598628'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:48:17Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.343 243456 DEBUG nova.network.os_vif_util [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.344 243456 DEBUG nova.network.os_vif_util [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.344 243456 DEBUG os_vif [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.347 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12bbec3d-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.351 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.354 243456 INFO os_vif [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')#033[00m
Feb 28 05:48:24 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.421 243456 DEBUG nova.compute.manager [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG nova.compute.manager [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG oslo_concurrency.lockutils [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG oslo_concurrency.lockutils [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG nova.network.neutron [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.714 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting instance files /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.715 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deletion of /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del complete#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.813 243456 INFO nova.scheduler.client.report [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Deleted allocations for instance d47f4919-0816-4363-b2eb-fa6580859e88#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.869 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.870 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:24 np0005634017 nova_compute[243452]: 2026-02-28 10:48:24.896 243456 DEBUG oslo_concurrency.processutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:48:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.9 MiB/s wr, 145 op/s
Feb 28 05:48:25 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:48:25 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949162236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.524 243456 DEBUG oslo_concurrency.processutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.531 243456 DEBUG nova.compute.provider_tree [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.554 243456 DEBUG nova.scheduler.client.report [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.581 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.595 243456 DEBUG nova.network.neutron [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.595 243456 DEBUG nova.network.neutron [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": null, "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap12bbec3d-25", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.751 243456 DEBUG oslo_concurrency.lockutils [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.768 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:25 np0005634017 nova_compute[243452]: 2026-02-28 10:48:25.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 251 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 143 op/s
Feb 28 05:48:28 np0005634017 podman[381784]: 2026-02-28 10:48:28.12492181 +0000 UTC m=+0.056458428 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 05:48:28 np0005634017 podman[381783]: 2026-02-28 10:48:28.158978429 +0000 UTC m=+0.091277689 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.500 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.500 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.501 243456 INFO nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Unshelving#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.593 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.593 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.598 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'pci_requests' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.614 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'numa_topology' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.634 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.635 243456 INFO nova.compute.claims [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:48:28 np0005634017 nova_compute[243452]: 2026-02-28 10:48:28.749 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:48:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb 28 05:48:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb 28 05:48:29 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb 28 05:48:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:48:29
Feb 28 05:48:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:48:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:48:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 28 05:48:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:48:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:48:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985976038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:48:29 np0005634017 nova_compute[243452]: 2026-02-28 10:48:29.347 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:48:29 np0005634017 nova_compute[243452]: 2026-02-28 10:48:29.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 232 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 101 op/s
Feb 28 05:48:29 np0005634017 nova_compute[243452]: 2026-02-28 10:48:29.355 243456 DEBUG nova.compute.provider_tree [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:48:29 np0005634017 nova_compute[243452]: 2026-02-28 10:48:29.378 243456 DEBUG nova.scheduler.client.report [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:48:29 np0005634017 nova_compute[243452]: 2026-02-28 10:48:29.406 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:29 np0005634017 nova_compute[243452]: 2026-02-28 10:48:29.652 243456 INFO nova.network.neutron [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.171 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.171 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.171 243456 DEBUG nova.network.neutron [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.265 243456 DEBUG nova.compute.manager [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.265 243456 DEBUG nova.compute.manager [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.266 243456 DEBUG oslo_concurrency.lockutils [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:48:30 np0005634017 nova_compute[243452]: 2026-02-28 10:48:30.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:48:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:48:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 232 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.738 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275696.7374487, d47f4919-0816-4363-b2eb-fa6580859e88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.739 243456 INFO nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.764 243456 DEBUG nova.compute.manager [None req-2e4e9143-6510-4ce0-9570-e6cff86f335f - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.924 243456 DEBUG nova.network.neutron [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.942 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.944 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.945 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating image(s)#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.981 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.987 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'trusted_certs' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.989 243456 DEBUG oslo_concurrency.lockutils [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:48:31 np0005634017 nova_compute[243452]: 2026-02-28 10:48:31.990 243456 DEBUG nova.network.neutron [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.046 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.076 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.081 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "6bf3b1a0a7f7c0688ef1a94c5f209b0729c56432" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.082 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "6bf3b1a0a7f7c0688ef1a94c5f209b0729c56432" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.303 243456 DEBUG nova.virt.libvirt.imagebackend [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/4c20447f-84ff-45ca-86c2-e5ad9d598628/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/4c20447f-84ff-45ca-86c2-e5ad9d598628/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.389 243456 DEBUG nova.virt.libvirt.imagebackend [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/4c20447f-84ff-45ca-86c2-e5ad9d598628/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.390 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] cloning images/4c20447f-84ff-45ca-86c2-e5ad9d598628@snap to None/d47f4919-0816-4363-b2eb-fa6580859e88_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.534 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "6bf3b1a0a7f7c0688ef1a94c5f209b0729c56432" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.688 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'migration_context' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:32 np0005634017 nova_compute[243452]: 2026-02-28 10:48:32.752 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] flattening vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.276 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Image rbd:vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.278 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.278 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Ensure instance console log exists: /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.279 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.280 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.280 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.285 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start _get_guest_xml network_info=[{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:48:14Z,direct_url=<?>,disk_format='raw',id=4c20447f-84ff-45ca-86c2-e5ad9d598628,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-373381639-shelved',owner='d7baef4f72e742e8aa7530d7a586ed2b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:48:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.291 243456 WARNING nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.296 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.297 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.303 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.304 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.305 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.306 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:48:14Z,direct_url=<?>,disk_format='raw',id=4c20447f-84ff-45ca-86c2-e5ad9d598628,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-373381639-shelved',owner='d7baef4f72e742e8aa7530d7a586ed2b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:48:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.307 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.307 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.308 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.308 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.309 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.309 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.310 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.310 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.310 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.311 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.311 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.337 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:48:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 243 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 803 KiB/s wr, 66 op/s
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.400 243456 DEBUG nova.network.neutron [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.402 243456 DEBUG nova.network.neutron [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:33 np0005634017 nova_compute[243452]: 2026-02-28 10:48:33.428 243456 DEBUG oslo_concurrency.lockutils [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:33 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:48:33 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/404527361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.010 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.045 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.051 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:48:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.352 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:48:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669057543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.652 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.656 243456 DEBUG nova.virt.libvirt.vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='4c20447f-84ff-45ca-86c2-e5ad9d598628',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:47:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member',shelved_at='2026-02-28T10:48:21.812417',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4c20447f-84ff-45ca-86c2-e5ad9d598628'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:48:28Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.657 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.659 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.663 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'pci_devices' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.691 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <uuid>d47f4919-0816-4363-b2eb-fa6580859e88</uuid>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <name>instance-00000098</name>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:name>tempest-TestShelveInstance-server-373381639</nova:name>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:48:33</nova:creationTime>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:user uuid="7efc7418904f44aa8c8c9c3e06ac552b">tempest-TestShelveInstance-1186988285-project-member</nova:user>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:project uuid="d7baef4f72e742e8aa7530d7a586ed2b">tempest-TestShelveInstance-1186988285</nova:project>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="4c20447f-84ff-45ca-86c2-e5ad9d598628"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <nova:ports>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <nova:port uuid="12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        </nova:port>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </nova:ports>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <entry name="serial">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <entry name="uuid">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk.config">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <interface type="ethernet">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <mac address="fa:16:3e:b5:24:ce"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <driver name="vhost" rx_queue_size="512"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <mtu size="1442"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <target dev="tap12bbec3d-25"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </interface>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log" append="off"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <input type="keyboard" bus="usb"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:48:34 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:48:34 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:48:34 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:48:34 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.693 243456 DEBUG nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Preparing to wait for external event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.694 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.694 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.695 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.695 243456 DEBUG nova.virt.libvirt.vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='4c20447f-84ff-45ca-86c2-e5ad9d598628',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:47:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member',shelved_at='2026-02-28T10:48:21.812417',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4c20447f-84ff-45ca-86c2-e5ad9d598628'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:48:28Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.696 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.696 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.697 243456 DEBUG os_vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.698 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.698 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.702 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12bbec3d-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.703 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12bbec3d-25, col_values=(('external_ids', {'iface-id': '12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:24:ce', 'vm-uuid': 'd47f4919-0816-4363-b2eb-fa6580859e88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:34 np0005634017 NetworkManager[49805]: <info>  [1772275714.7062] manager: (tap12bbec3d-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.715 243456 INFO os_vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.780 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.781 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.781 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No VIF found with MAC fa:16:3e:b5:24:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.782 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Using config drive#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.817 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.840 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'ec2_ids' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:34 np0005634017 nova_compute[243452]: 2026-02-28 10:48:34.885 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'keypairs' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 265 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 97 op/s
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.373 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating config drive at /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.382 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqjj1bdu5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.533 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqjj1bdu5" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.575 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.580 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.729 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.730 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting local config drive /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config because it was imported into RBD.#033[00m
Feb 28 05:48:35 np0005634017 kernel: tap12bbec3d-25: entered promiscuous mode
Feb 28 05:48:35 np0005634017 NetworkManager[49805]: <info>  [1772275715.7957] manager: (tap12bbec3d-25): new Tun device (/org/freedesktop/NetworkManager/Devices/672)
Feb 28 05:48:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:35Z|01599|binding|INFO|Claiming lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for this chassis.
Feb 28 05:48:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:35Z|01600|binding|INFO|12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4: Claiming fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.796 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.804 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:48:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:35Z|01601|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 ovn-installed in OVS
Feb 28 05:48:35 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:35Z|01602|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 up in Southbound
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.807 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab bound to our chassis#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.809 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 978ebc43-7003-4100-92ba-e083df3fe8ab#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.824 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db86a416-d541-49a9-957c-0a281c684fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.825 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap978ebc43-71 in ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.828 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap978ebc43-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.828 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caf1ee53-78e4-4be5-84c8-4d96701fcef4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.829 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3432e102-7d66-4540-84be-8a628e013c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 systemd-udevd[382202]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.839 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb831f8-e143-4cf2-bf12-753a5b0e6143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 NetworkManager[49805]: <info>  [1772275715.8474] device (tap12bbec3d-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 05:48:35 np0005634017 NetworkManager[49805]: <info>  [1772275715.8484] device (tap12bbec3d-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 05:48:35 np0005634017 systemd-machined[209480]: New machine qemu-186-instance-00000098.
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.852 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e10690a-14d8-4172-9403-59362ef3b9ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.887 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5a48075c-4353-4b06-98c7-1b71c6becfa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.892 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[19e8aceb-0958-4555-920f-1f3d91ea919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 NetworkManager[49805]: <info>  [1772275715.8937] manager: (tap978ebc43-70): new Veth device (/org/freedesktop/NetworkManager/Devices/673)
Feb 28 05:48:35 np0005634017 systemd-udevd[382207]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.931 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7f8d0b-37eb-4375-88f5-7a8fb1af1bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.934 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[58855554-8b3b-4207-8011-a4f188607f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 NetworkManager[49805]: <info>  [1772275715.9588] device (tap978ebc43-70): carrier: link connected
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d963608b-9b2b-4143-ab03-9017e1a3870d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.980 243456 DEBUG nova.compute.manager [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.980 243456 DEBUG oslo_concurrency.lockutils [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.980 243456 DEBUG oslo_concurrency.lockutils [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.981 243456 DEBUG oslo_concurrency.lockutils [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:35 np0005634017 nova_compute[243452]: 2026-02-28 10:48:35.981 243456 DEBUG nova.compute.manager [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Processing event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90cf72f1-ec84-4b9f-b03c-b40f0e035dae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722324, 'reachable_time': 28644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382235, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.998 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6af9bddd-c1b5-414c-934e-f8447056449e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8571'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722324, 'tstamp': 722324}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382236, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.019 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c4438c-a82b-4946-9965-e2e45e1ce9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722324, 'reachable_time': 28644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382237, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba423f4-63c6-4989-8ae6-349ff8d5927b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.065 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f729fd00-51fb-48bd-83c4-77053e28f0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.126 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.126 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.127 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap978ebc43-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:36 np0005634017 NetworkManager[49805]: <info>  [1772275716.1296] manager: (tap978ebc43-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Feb 28 05:48:36 np0005634017 kernel: tap978ebc43-70: entered promiscuous mode
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.131 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap978ebc43-70, col_values=(('external_ids', {'iface-id': 'a5bdb09f-93f0-411c-9d75-fc368a22a5f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:36 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:36Z|01603|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.144 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.146 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaa7f80-e980-450d-be6d-96d77d4db020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.147 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: global
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    log         /dev/log local0 debug
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    log-tag     haproxy-metadata-proxy-978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    user        root
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    group       root
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    maxconn     1024
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    pidfile     /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    daemon
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: defaults
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    log global
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    mode http
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    option httplog
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    option dontlognull
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    option http-server-close
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    option forwardfor
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    retries                 3
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    timeout http-request    30s
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    timeout connect         30s
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    timeout client          32s
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    timeout server          32s
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    timeout http-keep-alive 30s
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: listen listener
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    bind 169.254.169.254:80
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    server metadata /var/lib/neutron/metadata_proxy
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]:    http-request add-header X-OVN-Network-ID 978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.147 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'env', 'PROCESS_TAG=haproxy-978ebc43-7003-4100-92ba-e083df3fe8ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/978ebc43-7003-4100-92ba-e083df3fe8ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 28 05:48:36 np0005634017 podman[382270]: 2026-02-28 10:48:36.537213443 +0000 UTC m=+0.064796415 container create ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 05:48:36 np0005634017 systemd[1]: Started libpod-conmon-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3.scope.
Feb 28 05:48:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85896b15e407a783697a1b7680c2b0fcf7d8c767aa52c496ab5f36da73752f0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:36 np0005634017 podman[382270]: 2026-02-28 10:48:36.497980776 +0000 UTC m=+0.025563758 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 05:48:36 np0005634017 podman[382270]: 2026-02-28 10:48:36.596351386 +0000 UTC m=+0.123934398 container init ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:48:36 np0005634017 podman[382270]: 2026-02-28 10:48:36.603044587 +0000 UTC m=+0.130627549 container start ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:48:36 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : New worker (382332) forked
Feb 28 05:48:36 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : Loading success.
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.647 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275716.646727, d47f4919-0816-4363-b2eb-fa6580859e88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.648 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Started (Lifecycle Event)#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.651 243456 DEBUG nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.657 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.662 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance spawned successfully.#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.674 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.679 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:48:36 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.684 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.706 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.707 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275716.646968, d47f4919-0816-4363-b2eb-fa6580859e88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.707 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Paused (Lifecycle Event)#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.727 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.732 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275716.656777, d47f4919-0816-4363-b2eb-fa6580859e88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.732 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.752 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.757 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:48:36 np0005634017 nova_compute[243452]: 2026-02-28 10:48:36.785 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:48:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 304 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 96 op/s
Feb 28 05:48:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb 28 05:48:37 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb 28 05:48:37 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb 28 05:48:37 np0005634017 nova_compute[243452]: 2026-02-28 10:48:37.788 243456 DEBUG nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:48:37 np0005634017 nova_compute[243452]: 2026-02-28 10:48:37.876 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:38 np0005634017 nova_compute[243452]: 2026-02-28 10:48:38.077 243456 DEBUG nova.compute.manager [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:38 np0005634017 nova_compute[243452]: 2026-02-28 10:48:38.077 243456 DEBUG oslo_concurrency.lockutils [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:38 np0005634017 nova_compute[243452]: 2026-02-28 10:48:38.078 243456 DEBUG oslo_concurrency.lockutils [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:38 np0005634017 nova_compute[243452]: 2026-02-28 10:48:38.078 243456 DEBUG oslo_concurrency.lockutils [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:38 np0005634017 nova_compute[243452]: 2026-02-28 10:48:38.078 243456 DEBUG nova.compute.manager [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:48:38 np0005634017 nova_compute[243452]: 2026-02-28 10:48:38.079 243456 WARNING nova.compute.manager [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state None.#033[00m
Feb 28 05:48:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 115 op/s
Feb 28 05:48:39 np0005634017 nova_compute[243452]: 2026-02-28 10:48:39.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:40 np0005634017 nova_compute[243452]: 2026-02-28 10:48:40.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007758369093071074 of space, bias 1.0, pg target 0.23275107279213222 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493925902529403 of space, bias 1.0, pg target 0.7481777707588209 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:48:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:48:41 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:41.686 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.9 MiB/s wr, 190 op/s
Feb 28 05:48:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb 28 05:48:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb 28 05:48:44 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb 28 05:48:44 np0005634017 nova_compute[243452]: 2026-02-28 10:48:44.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 475 KiB/s wr, 158 op/s
Feb 28 05:48:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:48:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579559209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:48:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:48:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579559209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:48:45 np0005634017 nova_compute[243452]: 2026-02-28 10:48:45.904 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 383 KiB/s wr, 127 op/s
Feb 28 05:48:48 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:48Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:48:48 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:48:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.119541601 +0000 UTC m=+0.021680588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.242260274 +0000 UTC m=+0.144399271 container create 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:48:49 np0005634017 nova_compute[243452]: 2026-02-28 10:48:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 102 op/s
Feb 28 05:48:49 np0005634017 systemd[1]: Started libpod-conmon-5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259.scope.
Feb 28 05:48:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.479800625 +0000 UTC m=+0.381939612 container init 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.487877615 +0000 UTC m=+0.390016582 container start 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.492795725 +0000 UTC m=+0.394934702 container attach 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:48:49 np0005634017 strange_hellman[382499]: 167 167
Feb 28 05:48:49 np0005634017 systemd[1]: libpod-5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259.scope: Deactivated successfully.
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.495603535 +0000 UTC m=+0.397742542 container died 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:48:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:48:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:48:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:48:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-d4e7d616f5ac3bcfd783729476b04fb58542d2506ade7b604faf8b47e270bd3e-merged.mount: Deactivated successfully.
Feb 28 05:48:49 np0005634017 podman[382483]: 2026-02-28 10:48:49.55725013 +0000 UTC m=+0.459389147 container remove 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:48:49 np0005634017 systemd[1]: libpod-conmon-5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259.scope: Deactivated successfully.
Feb 28 05:48:49 np0005634017 nova_compute[243452]: 2026-02-28 10:48:49.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:49 np0005634017 podman[382524]: 2026-02-28 10:48:49.814305789 +0000 UTC m=+0.136806838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:48:49 np0005634017 podman[382524]: 2026-02-28 10:48:49.985472125 +0000 UTC m=+0.307973154 container create 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:48:50 np0005634017 systemd[1]: Started libpod-conmon-8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5.scope.
Feb 28 05:48:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:50 np0005634017 podman[382524]: 2026-02-28 10:48:50.16785116 +0000 UTC m=+0.490352239 container init 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 05:48:50 np0005634017 podman[382524]: 2026-02-28 10:48:50.17876259 +0000 UTC m=+0.501263679 container start 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:48:50 np0005634017 podman[382524]: 2026-02-28 10:48:50.194644322 +0000 UTC m=+0.517145441 container attach 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:48:50 np0005634017 nova_compute[243452]: 2026-02-28 10:48:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:50 np0005634017 nova_compute[243452]: 2026-02-28 10:48:50.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:48:50 np0005634017 upbeat_lehmann[382541]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:48:50 np0005634017 upbeat_lehmann[382541]: --> All data devices are unavailable
Feb 28 05:48:50 np0005634017 systemd[1]: libpod-8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5.scope: Deactivated successfully.
Feb 28 05:48:50 np0005634017 podman[382524]: 2026-02-28 10:48:50.680935634 +0000 UTC m=+1.003436663 container died 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:48:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978-merged.mount: Deactivated successfully.
Feb 28 05:48:50 np0005634017 podman[382524]: 2026-02-28 10:48:50.729863997 +0000 UTC m=+1.052365056 container remove 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:48:50 np0005634017 systemd[1]: libpod-conmon-8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5.scope: Deactivated successfully.
Feb 28 05:48:50 np0005634017 nova_compute[243452]: 2026-02-28 10:48:50.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.261770276 +0000 UTC m=+0.051803826 container create b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:48:51 np0005634017 systemd[1]: Started libpod-conmon-b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511.scope.
Feb 28 05:48:51 np0005634017 nova_compute[243452]: 2026-02-28 10:48:51.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.236235489 +0000 UTC m=+0.026269069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:48:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.351331985 +0000 UTC m=+0.141365575 container init b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.358160979 +0000 UTC m=+0.148194499 container start b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.362247226 +0000 UTC m=+0.152280836 container attach b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:48:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 16 KiB/s wr, 47 op/s
Feb 28 05:48:51 np0005634017 admiring_wing[382652]: 167 167
Feb 28 05:48:51 np0005634017 systemd[1]: libpod-b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511.scope: Deactivated successfully.
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.368294298 +0000 UTC m=+0.158327848 container died b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:48:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bff6934d4aaea5ccbe64d40054ac9883badfd459622af257be30e97683364e2f-merged.mount: Deactivated successfully.
Feb 28 05:48:51 np0005634017 podman[382636]: 2026-02-28 10:48:51.415242784 +0000 UTC m=+0.205276304 container remove b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:48:51 np0005634017 systemd[1]: libpod-conmon-b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511.scope: Deactivated successfully.
Feb 28 05:48:51 np0005634017 podman[382675]: 2026-02-28 10:48:51.582634059 +0000 UTC m=+0.057406165 container create b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:48:51 np0005634017 systemd[1]: Started libpod-conmon-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope.
Feb 28 05:48:51 np0005634017 podman[382675]: 2026-02-28 10:48:51.558716178 +0000 UTC m=+0.033488294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:48:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:51 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:51 np0005634017 podman[382675]: 2026-02-28 10:48:51.701834392 +0000 UTC m=+0.176606538 container init b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:48:51 np0005634017 podman[382675]: 2026-02-28 10:48:51.717248751 +0000 UTC m=+0.192020837 container start b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:48:51 np0005634017 podman[382675]: 2026-02-28 10:48:51.724175508 +0000 UTC m=+0.198947604 container attach b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:48:51 np0005634017 adoring_visvesvaraya[382692]: {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:    "0": [
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:        {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "devices": [
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "/dev/loop3"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            ],
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_name": "ceph_lv0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_size": "21470642176",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "name": "ceph_lv0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "tags": {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cluster_name": "ceph",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.crush_device_class": "",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.encrypted": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.objectstore": "bluestore",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osd_id": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.type": "block",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.vdo": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.with_tpm": "0"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            },
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "type": "block",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "vg_name": "ceph_vg0"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:        }
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:    ],
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:    "1": [
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:        {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "devices": [
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "/dev/loop4"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            ],
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_name": "ceph_lv1",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_size": "21470642176",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "name": "ceph_lv1",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "tags": {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cluster_name": "ceph",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.crush_device_class": "",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.encrypted": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.objectstore": "bluestore",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osd_id": "1",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.type": "block",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.vdo": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.with_tpm": "0"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            },
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "type": "block",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "vg_name": "ceph_vg1"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:        }
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:    ],
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:    "2": [
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:        {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "devices": [
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "/dev/loop5"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            ],
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_name": "ceph_lv2",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_size": "21470642176",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "name": "ceph_lv2",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "tags": {
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.cluster_name": "ceph",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.crush_device_class": "",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.encrypted": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.objectstore": "bluestore",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osd_id": "2",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.type": "block",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.vdo": "0",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:                "ceph.with_tpm": "0"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            },
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "type": "block",
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:            "vg_name": "ceph_vg2"
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:        }
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]:    ]
Feb 28 05:48:52 np0005634017 adoring_visvesvaraya[382692]: }
Feb 28 05:48:52 np0005634017 systemd[1]: libpod-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope: Deactivated successfully.
Feb 28 05:48:52 np0005634017 conmon[382692]: conmon b9546f99a6d14cd5816b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope/container/memory.events
Feb 28 05:48:52 np0005634017 podman[382675]: 2026-02-28 10:48:52.033963365 +0000 UTC m=+0.508735471 container died b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:48:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34-merged.mount: Deactivated successfully.
Feb 28 05:48:52 np0005634017 podman[382675]: 2026-02-28 10:48:52.087274033 +0000 UTC m=+0.562046109 container remove b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:48:52 np0005634017 systemd[1]: libpod-conmon-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope: Deactivated successfully.
Feb 28 05:48:52 np0005634017 nova_compute[243452]: 2026-02-28 10:48:52.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.617611828 +0000 UTC m=+0.063869869 container create 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:48:52 np0005634017 systemd[1]: Started libpod-conmon-8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77.scope.
Feb 28 05:48:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.589348904 +0000 UTC m=+0.035607035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.701851906 +0000 UTC m=+0.148109967 container init 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.712281483 +0000 UTC m=+0.158539514 container start 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.715902726 +0000 UTC m=+0.162160797 container attach 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:48:52 np0005634017 quizzical_liskov[382793]: 167 167
Feb 28 05:48:52 np0005634017 systemd[1]: libpod-8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77.scope: Deactivated successfully.
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.721061183 +0000 UTC m=+0.167319264 container died 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:48:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f9f7f7e67096ab62ba36cbc36e6424fb97f6a816be829582fc2705e66c5c6c07-merged.mount: Deactivated successfully.
Feb 28 05:48:52 np0005634017 podman[382777]: 2026-02-28 10:48:52.766678801 +0000 UTC m=+0.212936842 container remove 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:48:52 np0005634017 systemd[1]: libpod-conmon-8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77.scope: Deactivated successfully.
Feb 28 05:48:52 np0005634017 podman[382817]: 2026-02-28 10:48:52.929919008 +0000 UTC m=+0.039486335 container create 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:48:52 np0005634017 systemd[1]: Started libpod-conmon-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope.
Feb 28 05:48:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:48:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:48:53 np0005634017 podman[382817]: 2026-02-28 10:48:52.914761736 +0000 UTC m=+0.024329083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:48:53 np0005634017 podman[382817]: 2026-02-28 10:48:53.057739116 +0000 UTC m=+0.167306523 container init 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:48:53 np0005634017 podman[382817]: 2026-02-28 10:48:53.066207937 +0000 UTC m=+0.175775304 container start 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:48:53 np0005634017 podman[382817]: 2026-02-28 10:48:53.069886722 +0000 UTC m=+0.179454089 container attach 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:48:53 np0005634017 nova_compute[243452]: 2026-02-28 10:48:53.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 647 KiB/s rd, 18 KiB/s wr, 53 op/s
Feb 28 05:48:53 np0005634017 lvm[382912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:48:53 np0005634017 lvm[382912]: VG ceph_vg1 finished
Feb 28 05:48:53 np0005634017 lvm[382911]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:48:53 np0005634017 lvm[382911]: VG ceph_vg0 finished
Feb 28 05:48:53 np0005634017 lvm[382914]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:48:53 np0005634017 lvm[382914]: VG ceph_vg2 finished
Feb 28 05:48:53 np0005634017 lvm[382915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:48:53 np0005634017 lvm[382915]: VG ceph_vg0 finished
Feb 28 05:48:53 np0005634017 vigorous_mcnulty[382833]: {}
Feb 28 05:48:53 np0005634017 systemd[1]: libpod-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope: Deactivated successfully.
Feb 28 05:48:53 np0005634017 systemd[1]: libpod-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope: Consumed 1.421s CPU time.
Feb 28 05:48:53 np0005634017 podman[382817]: 2026-02-28 10:48:53.960404069 +0000 UTC m=+1.069971396 container died 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:48:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164-merged.mount: Deactivated successfully.
Feb 28 05:48:54 np0005634017 podman[382817]: 2026-02-28 10:48:54.006255384 +0000 UTC m=+1.115822711 container remove 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 05:48:54 np0005634017 systemd[1]: libpod-conmon-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope: Deactivated successfully.
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.084676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734084752, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 2088, "num_deletes": 253, "total_data_size": 3445100, "memory_usage": 3501536, "flush_reason": "Manual Compaction"}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734103086, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 3376007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52026, "largest_seqno": 54113, "table_properties": {"data_size": 3366391, "index_size": 6110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19498, "raw_average_key_size": 20, "raw_value_size": 3347259, "raw_average_value_size": 3494, "num_data_blocks": 270, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275517, "oldest_key_time": 1772275517, "file_creation_time": 1772275734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 19164 microseconds, and 8222 cpu microseconds.
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.103131) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 3376007 bytes OK
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.103882) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.105753) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.105771) EVENT_LOG_v1 {"time_micros": 1772275734105765, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.105798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3436327, prev total WAL file size 3436327, number of live WAL files 2.
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.106776) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(3296KB)], [122(9637KB)]
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734106832, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 13244619, "oldest_snapshot_seqno": -1}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7615 keys, 11507710 bytes, temperature: kUnknown
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734233491, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 11507710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11455461, "index_size": 32105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 197600, "raw_average_key_size": 25, "raw_value_size": 11318445, "raw_average_value_size": 1486, "num_data_blocks": 1259, "num_entries": 7615, "num_filter_entries": 7615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.234974) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 11507710 bytes
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.236578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.6 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8137, records dropped: 522 output_compression: NoCompression
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.236595) EVENT_LOG_v1 {"time_micros": 1772275734236587, "job": 74, "event": "compaction_finished", "compaction_time_micros": 127853, "compaction_time_cpu_micros": 32847, "output_level": 6, "num_output_files": 1, "total_output_size": 11507710, "num_input_records": 8137, "num_output_records": 7615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734237060, "job": 74, "event": "table_file_deletion", "file_number": 124}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734238039, "job": 74, "event": "table_file_deletion", "file_number": 122}
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.106669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:48:54 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:54Z|01604|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:48:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:48:54 np0005634017 nova_compute[243452]: 2026-02-28 10:48:54.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:55 np0005634017 nova_compute[243452]: 2026-02-28 10:48:55.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 635 KiB/s rd, 16 KiB/s wr, 48 op/s
Feb 28 05:48:55 np0005634017 nova_compute[243452]: 2026-02-28 10:48:55.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.382 243456 DEBUG nova.compute.manager [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.383 243456 DEBUG nova.compute.manager [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.384 243456 DEBUG oslo_concurrency.lockutils [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.384 243456 DEBUG oslo_concurrency.lockutils [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.385 243456 DEBUG nova.network.neutron [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.445 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.446 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.447 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.447 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.448 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.450 243456 INFO nova.compute.manager [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Terminating instance#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.451 243456 DEBUG nova.compute.manager [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:48:56 np0005634017 kernel: tap12bbec3d-25 (unregistering): left promiscuous mode
Feb 28 05:48:56 np0005634017 NetworkManager[49805]: <info>  [1772275736.5099] device (tap12bbec3d-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01605|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=0)
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01606|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down in Southbound
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01607|binding|INFO|Removing iface tap12bbec3d-25 ovn-installed in OVS
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.524 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.527 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.528 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.529 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.532 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65dd171d-0ffa-4056-b4be-b0dd2b4ab264]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.533 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace which is not needed anymore#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Feb 28 05:48:56 np0005634017 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 13.530s CPU time.
Feb 28 05:48:56 np0005634017 systemd-machined[209480]: Machine qemu-186-instance-00000098 terminated.
Feb 28 05:48:56 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : haproxy version is 2.8.14-c23fe91
Feb 28 05:48:56 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : path to executable is /usr/sbin/haproxy
Feb 28 05:48:56 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [WARNING]  (382329) : Exiting Master process...
Feb 28 05:48:56 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [WARNING]  (382329) : Exiting Master process...
Feb 28 05:48:56 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [ALERT]    (382329) : Current worker (382332) exited with code 143 (Terminated)
Feb 28 05:48:56 np0005634017 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [WARNING]  (382329) : All workers exited. Exiting... (0)
Feb 28 05:48:56 np0005634017 systemd[1]: libpod-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3.scope: Deactivated successfully.
Feb 28 05:48:56 np0005634017 podman[382977]: 2026-02-28 10:48:56.671589038 +0000 UTC m=+0.049583932 container died ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 05:48:56 np0005634017 NetworkManager[49805]: <info>  [1772275736.6808] manager: (tap12bbec3d-25): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Feb 28 05:48:56 np0005634017 kernel: tap12bbec3d-25: entered promiscuous mode
Feb 28 05:48:56 np0005634017 systemd-udevd[382910]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01608|binding|INFO|Claiming lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for this chassis.
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01609|binding|INFO|12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4: Claiming fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 05:48:56 np0005634017 kernel: tap12bbec3d-25 (unregistering): left promiscuous mode
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.694 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01610|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 ovn-installed in OVS
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01611|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 up in Southbound
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01612|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=1)
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01613|if_status|INFO|Dropped 5 log messages in last 432 seconds (most recently, 432 seconds ago) due to excessive rate
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01614|if_status|INFO|Not setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down as sb is readonly
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01615|binding|INFO|Removing iface tap12bbec3d-25 ovn-installed in OVS
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01616|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=0)
Feb 28 05:48:56 np0005634017 ovn_controller[146846]: 2026-02-28T10:48:56Z|01617|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down in Southbound
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3-userdata-shm.mount: Deactivated successfully.
Feb 28 05:48:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-85896b15e407a783697a1b7680c2b0fcf7d8c767aa52c496ab5f36da73752f0c-merged.mount: Deactivated successfully.
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.720 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.721 243456 DEBUG nova.objects.instance [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'resources' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.723 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:48:56 np0005634017 podman[382977]: 2026-02-28 10:48:56.724870175 +0000 UTC m=+0.102865049 container cleanup ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.735 243456 DEBUG nova.virt.libvirt.vif [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:48:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:48:37Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.736 243456 DEBUG nova.network.os_vif_util [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.737 243456 DEBUG nova.network.os_vif_util [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.737 243456 DEBUG os_vif [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 28 05:48:56 np0005634017 systemd[1]: libpod-conmon-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3.scope: Deactivated successfully.
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.739 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.739 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12bbec3d-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.746 243456 INFO os_vif [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')#033[00m
Feb 28 05:48:56 np0005634017 podman[383011]: 2026-02-28 10:48:56.814500096 +0000 UTC m=+0.060789401 container remove ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.822 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe885648-f48a-4d75-a0c3-27cddebc691c]: (4, ('Sat Feb 28 10:48:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3)\nceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3\nSat Feb 28 10:48:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3)\nceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.825 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb711169-e078-4d5b-9f6b-d47286cdfa05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.827 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:48:56 np0005634017 kernel: tap978ebc43-70: left promiscuous mode
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 nova_compute[243452]: 2026-02-28 10:48:56.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42891087-9c31-43be-baf5-664941958fed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.859 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[47b99e47-7961-49d1-88a9-3b454f44ddef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93571f51-7bde-49bb-b121-bb93f67b90e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58db5f30-56e0-43dc-9833-f847ee195b60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722316, 'reachable_time': 20155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383044, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 systemd[1]: run-netns-ovnmeta\x2d978ebc43\x2d7003\x2d4100\x2d92ba\x2de083df3fe8ab.mount: Deactivated successfully.
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.885 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.886 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0108e0bb-4cbb-4aae-a95f-4f38bd4fc6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.888 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.889 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.890 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d4c63e-a8fd-4322-938e-503175e8ac5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.891 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.892 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 28 05:48:56 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.892 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a091eeb-14bc-4352-94db-0873f14fb216]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 28 05:48:57 np0005634017 nova_compute[243452]: 2026-02-28 10:48:57.039 243456 INFO nova.virt.libvirt.driver [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting instance files /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del#033[00m
Feb 28 05:48:57 np0005634017 nova_compute[243452]: 2026-02-28 10:48:57.040 243456 INFO nova.virt.libvirt.driver [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deletion of /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del complete#033[00m
Feb 28 05:48:57 np0005634017 nova_compute[243452]: 2026-02-28 10:48:57.087 243456 INFO nova.compute.manager [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Feb 28 05:48:57 np0005634017 nova_compute[243452]: 2026-02-28 10:48:57.087 243456 DEBUG oslo.service.loopingcall [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 28 05:48:57 np0005634017 nova_compute[243452]: 2026-02-28 10:48:57.087 243456 DEBUG nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 28 05:48:57 np0005634017 nova_compute[243452]: 2026-02-28 10:48:57.088 243456 DEBUG nova.network.neutron [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 28 05:48:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 23 KiB/s wr, 49 op/s
Feb 28 05:48:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:48:57.892 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.519 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.519 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 WARNING nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.522 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.522 243456 WARNING nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state deleting.#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.649 243456 DEBUG nova.network.neutron [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.725 243456 DEBUG nova.compute.manager [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-deleted-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.726 243456 INFO nova.compute.manager [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Neutron deleted interface 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4; detaching it from the instance and deleting it from the info cache#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.726 243456 DEBUG nova.network.neutron [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.730 243456 INFO nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 1.64 seconds to deallocate network for instance.#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.746 243456 DEBUG nova.network.neutron [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.747 243456 DEBUG nova.network.neutron [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.833 243456 DEBUG nova.compute.manager [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Detach interface failed, port_id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4, reason: Instance d47f4919-0816-4363-b2eb-fa6580859e88 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.865 243456 DEBUG oslo_concurrency.lockutils [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.867 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.867 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.867 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.869 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:48:58 np0005634017 nova_compute[243452]: 2026-02-28 10:48:58.869 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:48:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:48:59 np0005634017 podman[383049]: 2026-02-28 10:48:59.130499637 +0000 UTC m=+0.061492771 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 28 05:48:59 np0005634017 podman[383048]: 2026-02-28 10:48:59.178278837 +0000 UTC m=+0.114223022 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:48:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 215 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 24 KiB/s wr, 63 op/s
Feb 28 05:48:59 np0005634017 nova_compute[243452]: 2026-02-28 10:48:59.604 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:48:59 np0005634017 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:48:59 np0005634017 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 28 05:48:59 np0005634017 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:59 np0005634017 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:48:59 np0005634017 nova_compute[243452]: 2026-02-28 10:48:59.647 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.141 243456 DEBUG oslo_concurrency.processutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:49:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:49:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:49:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678361634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.719 243456 DEBUG oslo_concurrency.processutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.726 243456 DEBUG nova.compute.provider_tree [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.743 243456 DEBUG nova.scheduler.client.report [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.764 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.767 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.767 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.767 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.768 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:49:00 np0005634017 nova_compute[243452]: 2026-02-28 10:49:00.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.178 243456 INFO nova.scheduler.client.report [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Deleted allocations for instance d47f4919-0816-4363-b2eb-fa6580859e88#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.261 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:49:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226271041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.330 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:49:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 24 KiB/s wr, 70 op/s
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.485 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.487 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.95295037794858GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.488 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.489 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.557 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.557 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.586 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:49:01 np0005634017 nova_compute[243452]: 2026-02-28 10:49:01.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:49:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441047511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:49:02 np0005634017 nova_compute[243452]: 2026-02-28 10:49:02.113 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:49:02 np0005634017 nova_compute[243452]: 2026-02-28 10:49:02.119 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:49:02 np0005634017 nova_compute[243452]: 2026-02-28 10:49:02.142 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:49:02 np0005634017 nova_compute[243452]: 2026-02-28 10:49:02.177 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:49:02 np0005634017 nova_compute[243452]: 2026-02-28 10:49:02.178 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 11 KiB/s wr, 38 op/s
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.112621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744112697, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 336, "num_deletes": 250, "total_data_size": 160667, "memory_usage": 167888, "flush_reason": "Manual Compaction"}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744117808, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 158874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54114, "largest_seqno": 54449, "table_properties": {"data_size": 156734, "index_size": 304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5883, "raw_average_key_size": 20, "raw_value_size": 152535, "raw_average_value_size": 525, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275734, "oldest_key_time": 1772275734, "file_creation_time": 1772275744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 5236 microseconds, and 1487 cpu microseconds.
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.117860) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 158874 bytes OK
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.117882) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121368) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121407) EVENT_LOG_v1 {"time_micros": 1772275744121396, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 158353, prev total WAL file size 158353, number of live WAL files 2.
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.122043) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303032' seq:72057594037927935, type:22 .. '6D6772737461740032323533' seq:0, type:0; will stop at (end)
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(155KB)], [125(10MB)]
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744122138, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11666584, "oldest_snapshot_seqno": -1}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7398 keys, 8336840 bytes, temperature: kUnknown
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744187776, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8336840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8290871, "index_size": 26400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 193298, "raw_average_key_size": 26, "raw_value_size": 8162404, "raw_average_value_size": 1103, "num_data_blocks": 1021, "num_entries": 7398, "num_filter_entries": 7398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.188117) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8336840 bytes
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.191033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.5 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(125.9) write-amplify(52.5) OK, records in: 7905, records dropped: 507 output_compression: NoCompression
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.191083) EVENT_LOG_v1 {"time_micros": 1772275744191050, "job": 76, "event": "compaction_finished", "compaction_time_micros": 65727, "compaction_time_cpu_micros": 31470, "output_level": 6, "num_output_files": 1, "total_output_size": 8336840, "num_input_records": 7905, "num_output_records": 7398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744191256, "job": 76, "event": "table_file_deletion", "file_number": 127}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744192706, "job": 76, "event": "table_file_deletion", "file_number": 125}
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:49:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:49:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 9.8 KiB/s wr, 33 op/s
Feb 28 05:49:05 np0005634017 nova_compute[243452]: 2026-02-28 10:49:05.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:06 np0005634017 nova_compute[243452]: 2026-02-28 10:49:06.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:06 np0005634017 nova_compute[243452]: 2026-02-28 10:49:06.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:06 np0005634017 nova_compute[243452]: 2026-02-28 10:49:06.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 9.8 KiB/s wr, 32 op/s
Feb 28 05:49:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 28 05:49:11 np0005634017 nova_compute[243452]: 2026-02-28 10:49:11.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.8 KiB/s rd, 852 B/s wr, 14 op/s
Feb 28 05:49:11 np0005634017 nova_compute[243452]: 2026-02-28 10:49:11.715 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275736.7144413, d47f4919-0816-4363-b2eb-fa6580859e88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:49:11 np0005634017 nova_compute[243452]: 2026-02-28 10:49:11.716 243456 INFO nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:49:11 np0005634017 nova_compute[243452]: 2026-02-28 10:49:11.741 243456 DEBUG nova.compute.manager [None req-68313c09-2d81-41ba-a071-e48edcffaa1f - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:49:11 np0005634017 nova_compute[243452]: 2026-02-28 10:49:11.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:16 np0005634017 nova_compute[243452]: 2026-02-28 10:49:16.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:16 np0005634017 nova_compute[243452]: 2026-02-28 10:49:16.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:21 np0005634017 nova_compute[243452]: 2026-02-28 10:49:21.010 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:21 np0005634017 nova_compute[243452]: 2026-02-28 10:49:21.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:26 np0005634017 nova_compute[243452]: 2026-02-28 10:49:26.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:26 np0005634017 nova_compute[243452]: 2026-02-28 10:49:26.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:49:29
Feb 28 05:49:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:49:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:49:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr']
Feb 28 05:49:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:49:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:30 np0005634017 podman[383161]: 2026-02-28 10:49:30.124584055 +0000 UTC m=+0.060388010 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 05:49:30 np0005634017 podman[383160]: 2026-02-28 10:49:30.154089555 +0000 UTC m=+0.092128034 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223)
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:49:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:49:31 np0005634017 nova_compute[243452]: 2026-02-28 10:49:31.051 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:31 np0005634017 nova_compute[243452]: 2026-02-28 10:49:31.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.281 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.282 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.557 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.673 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.674 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.683 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.683 243456 INFO nova.compute.claims [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 28 05:49:33 np0005634017 nova_compute[243452]: 2026-02-28 10:49:33.792 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:49:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:49:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246775156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.363 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.370 243456 DEBUG nova.compute.provider_tree [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.389 243456 DEBUG nova.scheduler.client.report [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.408 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.409 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.475 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.475 243456 DEBUG nova.network.neutron [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.507 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.532 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.632 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.634 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.635 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Creating image(s)
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.672 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.701 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.732 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.737 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.778 243456 DEBUG nova.network.neutron [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.779 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.818 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.819 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.820 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.820 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.847 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:49:34 np0005634017 nova_compute[243452]: 2026-02-28 10:49:34.853 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.194 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.288 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] resizing rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 05:49:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 163 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 728 KiB/s wr, 10 op/s
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.392 243456 DEBUG nova.objects.instance [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lazy-loading 'migration_context' on Instance uuid 4daf2a6e-e18a-481f-8174-d36bb3f79438 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.410 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.411 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Ensure instance console log exists: /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.412 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.412 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.412 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.413 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.418 243456 WARNING nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.423 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.424 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.427 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.427 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.427 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.430 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 05:49:35 np0005634017 nova_compute[243452]: 2026-02-28 10:49:35.432 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:49:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:49:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921002510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.007 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.040 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.046 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 05:49:36 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685980158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.658 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.660 243456 DEBUG nova.objects.instance [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4daf2a6e-e18a-481f-8174-d36bb3f79438 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.678 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] End _get_guest_xml xml=<domain type="kvm">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <uuid>4daf2a6e-e18a-481f-8174-d36bb3f79438</uuid>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <name>instance-00000099</name>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <memory>131072</memory>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <vcpu>1</vcpu>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <metadata>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:name>tempest-AggregatesAdminTestJSON-server-345626621</nova:name>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:creationTime>2026-02-28 10:49:35</nova:creationTime>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:flavor name="m1.nano">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:memory>128</nova:memory>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:disk>1</nova:disk>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:swap>0</nova:swap>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:ephemeral>0</nova:ephemeral>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:vcpus>1</nova:vcpus>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      </nova:flavor>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:owner>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:user uuid="3b61ee2651c648abacf025ebb79ec599">tempest-AggregatesAdminTestJSON-1550774056-project-member</nova:user>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <nova:project uuid="10cb603dbd5b46848d82ec2c2fad1311">tempest-AggregatesAdminTestJSON-1550774056</nova:project>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      </nova:owner>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <nova:ports/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </nova:instance>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </metadata>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <sysinfo type="smbios">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <system>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <entry name="manufacturer">RDO</entry>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <entry name="product">OpenStack Compute</entry>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <entry name="serial">4daf2a6e-e18a-481f-8174-d36bb3f79438</entry>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <entry name="uuid">4daf2a6e-e18a-481f-8174-d36bb3f79438</entry>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <entry name="family">Virtual Machine</entry>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </system>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </sysinfo>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <os>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <boot dev="hd"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <smbios mode="sysinfo"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </os>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <features>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <acpi/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <apic/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <vmcoreinfo/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </features>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <clock offset="utc">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <timer name="pit" tickpolicy="delay"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <timer name="hpet" present="no"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </clock>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <cpu mode="host-model" match="exact">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <topology sockets="1" cores="1" threads="1"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </cpu>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  <devices>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <disk type="network" device="disk">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4daf2a6e-e18a-481f-8174-d36bb3f79438_disk">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <target dev="vda" bus="virtio"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <disk type="network" device="cdrom">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <driver type="raw" cache="none"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <source protocol="rbd" name="vms/4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <host name="192.168.122.100" port="6789"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      </source>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <auth username="openstack">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:        <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      </auth>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <target dev="sda" bus="sata"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </disk>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <serial type="pty">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <log file="/var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/console.log" append="off"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </serial>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <video>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <model type="virtio"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </video>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <input type="tablet" bus="usb"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <rng model="virtio">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <backend model="random">/dev/urandom</backend>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </rng>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="pci" model="pcie-root-port"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <controller type="usb" index="0"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    <memballoon model="virtio">
Feb 28 05:49:36 np0005634017 nova_compute[243452]:      <stats period="10"/>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:    </memballoon>
Feb 28 05:49:36 np0005634017 nova_compute[243452]:  </devices>
Feb 28 05:49:36 np0005634017 nova_compute[243452]: </domain>
Feb 28 05:49:36 np0005634017 nova_compute[243452]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
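The domain XML logged above by `_get_guest_xml` defines an RBD-backed virtio root disk. For readers tracing the storage config out of such a log, the relevant attributes can be extracted with the stdlib alone. The XML fragment below is copied from the log; `rbd_source` is an editor's illustrative helper, not a Nova or libvirt function.

```python
# Sketch: pull the RBD source details out of a libvirt <disk> element.
# DISK_XML is the disk definition as it appears in the log above.
import xml.etree.ElementTree as ET

DISK_XML = """
<disk type="network" device="disk">
  <driver type="raw" cache="none"/>
  <source protocol="rbd" name="vms/4daf2a6e-e18a-481f-8174-d36bb3f79438_disk">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <target dev="vda" bus="virtio"/>
</disk>
"""

def rbd_source(disk_xml: str) -> dict:
    """Return protocol, image, monitor address, and guest target device."""
    root = ET.fromstring(disk_xml)
    src = root.find("source")
    host = src.find("host")
    return {
        "protocol": src.get("protocol"),
        "image": src.get("name"),                       # pool/image_name
        "monitor": f'{host.get("name")}:{host.get("port")}',
        "target": root.find("target").get("dev"),
    }
```

Running `rbd_source(DISK_XML)` yields the `vms` pool image and the `192.168.122.100:6789` monitor that the `rbd import` later in the log also targets.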
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.734 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.735 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.736 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Using config drive#033[00m
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.772 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:49:36 np0005634017 nova_compute[243452]: 2026-02-28 10:49:36.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.281 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Creating config drive at /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config#033[00m
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.289 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp68cth1hn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:49:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 163 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 728 KiB/s wr, 10 op/s
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.443 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp68cth1hn" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.477 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.483 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.672 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:49:37 np0005634017 nova_compute[243452]: 2026-02-28 10:49:37.674 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deleting local config drive /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config because it was imported into RBD.#033[00m
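The config-drive sequence just logged has three steps: `mkisofs` builds the `config-2` ISO in the instance directory, `rbd import` copies it into the `vms` pool, and the local file is deleted. A sketch of the two command lines Nova shells out to, with flags taken verbatim from the log (the helper names are hypothetical, not Nova APIs):

```python
# Sketch of the config-drive commands visible in the log above.
def mkisofs_cmd(iso_path: str, content_dir: str, publisher: str) -> list[str]:
    """The mkisofs invocation that builds the config-2 ISO."""
    return [
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher, "-quiet", "-J", "-r",
        "-V", "config-2", content_dir,
    ]

def rbd_import_cmd(pool: str, local_path: str, image_name: str) -> list[str]:
    """The rbd import that copies the ISO into the Ceph pool."""
    return [
        "rbd", "import", "--pool", pool, local_path, image_name,
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ]
```

Note the ordering: Nova first checks that `<uuid>_disk.config` does not already exist in RBD (the `rbd_utils` debug lines), then imports, then removes the local copy — so the config drive's authoritative home is the `vms` pool, not the hypervisor filesystem.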
Feb 28 05:49:37 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 05:49:37 np0005634017 systemd-machined[209480]: New machine qemu-187-instance-00000099.
Feb 28 05:49:37 np0005634017 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.361 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275778.3605375, 4daf2a6e-e18a-481f-8174-d36bb3f79438 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.364 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] VM Resumed (Lifecycle Event)#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.369 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.369 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.376 243456 INFO nova.virt.libvirt.driver [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance spawned successfully.#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.377 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.432 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.442 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.443 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.444 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.444 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.445 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.446 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.454 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.512 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.513 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275778.362056, 4daf2a6e-e18a-481f-8174-d36bb3f79438 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.513 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] VM Started (Lifecycle Event)#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.534 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.540 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.550 243456 INFO nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 3.92 seconds to spawn the instance on the hypervisor.#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.550 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.648 243456 INFO nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 5.02 seconds to build instance.#033[00m
Feb 28 05:49:38 np0005634017 nova_compute[243452]: 2026-02-28 10:49:38.712 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 943 KiB/s wr, 24 op/s
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003633719601574964 of space, bias 1.0, pg target 0.10901158804724892 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024940314264652076 of space, bias 1.0, pg target 0.7482094279395622 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:49:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
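The pg_autoscaler lines above all follow one relation: a pool's raw PG target is its capacity ratio times its bias times a PG budget of 300. The budget of 300 is an editor's inference from the numbers; it matches the default `mon_target_pg_per_osd` of 100 times the three OSDs backing this 60 GiB cluster. A minimal check against the logged values:

```python
# Reconstructing the pg_autoscaler arithmetic from the ceph-mgr lines above.
# Assumption: pg_budget = 300 (inferred as mon_target_pg_per_osd 100 x 3 OSDs).
def raw_pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
    """Raw (unquantized) PG target before rounding to a power of two."""
    return capacity_ratio * bias * pg_budget

# Pool '.mgr':               using 7.185749983720779e-06, bias 1.0 -> target 0.0021557...
# Pool 'cephfs.cephfs.meta': using 7.261232899107364e-07, bias 4.0 -> target 0.0008713...
mgr_target = raw_pg_target(7.185749983720779e-06, 1.0)
meta_target = raw_pg_target(7.261232899107364e-07, 4.0)
```

The raw targets are then quantized (every pool here stays at its current `pg_num`, since the targets are far below the existing counts and the autoscaler only adjusts on large deviations).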
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.824 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.825 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.825 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "4daf2a6e-e18a-481f-8174-d36bb3f79438-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.826 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.826 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.829 243456 INFO nova.compute.manager [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Terminating instance#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.831 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "refresh_cache-4daf2a6e-e18a-481f-8174-d36bb3f79438" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.832 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquired lock "refresh_cache-4daf2a6e-e18a-481f-8174-d36bb3f79438" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.832 243456 DEBUG nova.network.neutron [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 28 05:49:41 np0005634017 nova_compute[243452]: 2026-02-28 10:49:41.990 243456 DEBUG nova.network.neutron [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.238 243456 DEBUG nova.network.neutron [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.253 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Releasing lock "refresh_cache-4daf2a6e-e18a-481f-8174-d36bb3f79438" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.254 243456 DEBUG nova.compute.manager [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 28 05:49:42 np0005634017 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Feb 28 05:49:42 np0005634017 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 4.619s CPU time.
Feb 28 05:49:42 np0005634017 systemd-machined[209480]: Machine qemu-187-instance-00000099 terminated.
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.476 243456 INFO nova.virt.libvirt.driver [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance destroyed successfully.
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.477 243456 DEBUG nova.objects.instance [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lazy-loading 'resources' on Instance uuid 4daf2a6e-e18a-481f-8174-d36bb3f79438 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.826 243456 INFO nova.virt.libvirt.driver [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deleting instance files /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438_del
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.828 243456 INFO nova.virt.libvirt.driver [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deletion of /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438_del complete
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.908 243456 INFO nova.compute.manager [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.909 243456 DEBUG oslo.service.loopingcall [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.909 243456 DEBUG nova.compute.manager [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 05:49:42 np0005634017 nova_compute[243452]: 2026-02-28 10:49:42.909 243456 DEBUG nova.network.neutron [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.274 243456 DEBUG nova.network.neutron [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.295 243456 DEBUG nova.network.neutron [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.313 243456 INFO nova.compute.manager [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 0.40 seconds to deallocate network for instance.
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.351 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.351 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 05:49:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.399 243456 DEBUG oslo_concurrency.processutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 05:49:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:49:43 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242019216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.901 243456 DEBUG oslo_concurrency.processutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.910 243456 DEBUG nova.compute.provider_tree [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.939 243456 DEBUG nova.scheduler.client.report [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 05:49:43 np0005634017 nova_compute[243452]: 2026-02-28 10:49:43.989 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:49:44 np0005634017 nova_compute[243452]: 2026-02-28 10:49:44.044 243456 INFO nova.scheduler.client.report [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Deleted allocations for instance 4daf2a6e-e18a-481f-8174-d36bb3f79438
Feb 28 05:49:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:44 np0005634017 nova_compute[243452]: 2026-02-28 10:49:44.118 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 05:49:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 171 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 28 05:49:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:49:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095587030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:49:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:49:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095587030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:49:46 np0005634017 nova_compute[243452]: 2026-02-28 10:49:46.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:46 np0005634017 nova_compute[243452]: 2026-02-28 10:49:46.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 171 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 111 op/s
Feb 28 05:49:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:49 np0005634017 ovn_controller[146846]: 2026-02-28T10:49:49Z|01618|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Feb 28 05:49:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Feb 28 05:49:51 np0005634017 nova_compute[243452]: 2026-02-28 10:49:51.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 885 KiB/s wr, 102 op/s
Feb 28 05:49:51 np0005634017 nova_compute[243452]: 2026-02-28 10:49:51.789 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:51 np0005634017 nova_compute[243452]: 2026-02-28 10:49:51.874 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:51 np0005634017 nova_compute[243452]: 2026-02-28 10:49:51.875 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:51 np0005634017 nova_compute[243452]: 2026-02-28 10:49:51.875 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:51 np0005634017 nova_compute[243452]: 2026-02-28 10:49:51.875 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 05:49:53 np0005634017 nova_compute[243452]: 2026-02-28 10:49:53.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 1.2 KiB/s wr, 43 op/s
Feb 28 05:49:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:49:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:49:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:49:54 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:49:55 np0005634017 nova_compute[243452]: 2026-02-28 10:49:55.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:49:55 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:49:55 np0005634017 podman[383831]: 2026-02-28 10:49:55.931668136 +0000 UTC m=+0.049972024 container create 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:49:55 np0005634017 systemd[1]: Started libpod-conmon-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope.
Feb 28 05:49:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:49:56 np0005634017 podman[383831]: 2026-02-28 10:49:55.908375903 +0000 UTC m=+0.026679801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:49:56 np0005634017 podman[383831]: 2026-02-28 10:49:56.02385435 +0000 UTC m=+0.142158228 container init 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:49:56 np0005634017 podman[383831]: 2026-02-28 10:49:56.035145771 +0000 UTC m=+0.153449619 container start 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 05:49:56 np0005634017 podman[383831]: 2026-02-28 10:49:56.040142533 +0000 UTC m=+0.158446381 container attach 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:49:56 np0005634017 nifty_euler[383848]: 167 167
Feb 28 05:49:56 np0005634017 systemd[1]: libpod-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope: Deactivated successfully.
Feb 28 05:49:56 np0005634017 conmon[383848]: conmon 2573cd9d5674167a96f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope/container/memory.events
Feb 28 05:49:56 np0005634017 podman[383831]: 2026-02-28 10:49:56.043654613 +0000 UTC m=+0.161958461 container died 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:49:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-79bb8f52362afa53bf30ae83f3562ef894ec4f50627471436486492575dc89e7-merged.mount: Deactivated successfully.
Feb 28 05:49:56 np0005634017 podman[383831]: 2026-02-28 10:49:56.092615337 +0000 UTC m=+0.210919215 container remove 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:49:56 np0005634017 systemd[1]: libpod-conmon-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope: Deactivated successfully.
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.316911861 +0000 UTC m=+0.057530978 container create 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 05:49:56 np0005634017 systemd[1]: Started libpod-conmon-26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27.scope.
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.296971754 +0000 UTC m=+0.037590891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:49:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:49:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:56 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.435328202 +0000 UTC m=+0.175947389 container init 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.44196021 +0000 UTC m=+0.182579317 container start 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.447444997 +0000 UTC m=+0.188064254 container attach 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:49:56 np0005634017 nova_compute[243452]: 2026-02-28 10:49:56.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 05:49:56 np0005634017 awesome_carson[383891]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:49:56 np0005634017 awesome_carson[383891]: --> All data devices are unavailable
Feb 28 05:49:56 np0005634017 systemd[1]: libpod-26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27.scope: Deactivated successfully.
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.926831272 +0000 UTC m=+0.667450389 container died 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:49:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b-merged.mount: Deactivated successfully.
Feb 28 05:49:56 np0005634017 podman[383874]: 2026-02-28 10:49:56.979329146 +0000 UTC m=+0.719948253 container remove 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:49:56 np0005634017 systemd[1]: libpod-conmon-26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27.scope: Deactivated successfully.
Feb 28 05:49:57 np0005634017 nova_compute[243452]: 2026-02-28 10:49:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:49:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.449500779 +0000 UTC m=+0.038085395 container create c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:49:57 np0005634017 nova_compute[243452]: 2026-02-28 10:49:57.475 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275782.4739864, 4daf2a6e-e18a-481f-8174-d36bb3f79438 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 28 05:49:57 np0005634017 nova_compute[243452]: 2026-02-28 10:49:57.476 243456 INFO nova.compute.manager [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] VM Stopped (Lifecycle Event)#033[00m
Feb 28 05:49:57 np0005634017 systemd[1]: Started libpod-conmon-c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f.scope.
Feb 28 05:49:57 np0005634017 nova_compute[243452]: 2026-02-28 10:49:57.499 243456 DEBUG nova.compute.manager [None req-2be41c91-7702-4c9b-96f3-20fd2069abb5 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 28 05:49:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.522940619 +0000 UTC m=+0.111525275 container init c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.433507724 +0000 UTC m=+0.022092390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.532144741 +0000 UTC m=+0.120729377 container start c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.535428735 +0000 UTC m=+0.124013411 container attach c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 05:49:57 np0005634017 focused_lalande[384001]: 167 167
Feb 28 05:49:57 np0005634017 systemd[1]: libpod-c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f.scope: Deactivated successfully.
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.539884152 +0000 UTC m=+0.128468778 container died c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:49:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3199dff106b29b9333eb2ffb44987002493b8a1229cf6f8d45a2ca69bac686a3-merged.mount: Deactivated successfully.
Feb 28 05:49:57 np0005634017 podman[383985]: 2026-02-28 10:49:57.584248245 +0000 UTC m=+0.172832861 container remove c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:49:57 np0005634017 systemd[1]: libpod-conmon-c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f.scope: Deactivated successfully.
Feb 28 05:49:57 np0005634017 podman[384026]: 2026-02-28 10:49:57.775861729 +0000 UTC m=+0.056944522 container create 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:49:57 np0005634017 systemd[1]: Started libpod-conmon-74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3.scope.
Feb 28 05:49:57 np0005634017 podman[384026]: 2026-02-28 10:49:57.751034822 +0000 UTC m=+0.032117615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:49:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:49:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:57 np0005634017 podman[384026]: 2026-02-28 10:49:57.887442545 +0000 UTC m=+0.168525398 container init 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 05:49:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:49:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:49:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:49:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:49:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:49:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:49:57 np0005634017 podman[384026]: 2026-02-28 10:49:57.897784979 +0000 UTC m=+0.178867732 container start 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:49:57 np0005634017 podman[384026]: 2026-02-28 10:49:57.901461884 +0000 UTC m=+0.182544677 container attach 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]: {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:    "0": [
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:        {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "devices": [
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "/dev/loop3"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            ],
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_name": "ceph_lv0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_size": "21470642176",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "name": "ceph_lv0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "tags": {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cluster_name": "ceph",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.crush_device_class": "",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.encrypted": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.objectstore": "bluestore",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osd_id": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.type": "block",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.vdo": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.with_tpm": "0"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            },
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "type": "block",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "vg_name": "ceph_vg0"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:        }
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:    ],
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:    "1": [
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:        {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "devices": [
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "/dev/loop4"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            ],
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_name": "ceph_lv1",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_size": "21470642176",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "name": "ceph_lv1",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "tags": {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cluster_name": "ceph",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.crush_device_class": "",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.encrypted": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.objectstore": "bluestore",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osd_id": "1",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.type": "block",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.vdo": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.with_tpm": "0"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            },
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "type": "block",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "vg_name": "ceph_vg1"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:        }
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:    ],
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:    "2": [
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:        {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "devices": [
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "/dev/loop5"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            ],
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_name": "ceph_lv2",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_size": "21470642176",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "name": "ceph_lv2",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "tags": {
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.cluster_name": "ceph",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.crush_device_class": "",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.encrypted": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.objectstore": "bluestore",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osd_id": "2",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.type": "block",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.vdo": "0",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:                "ceph.with_tpm": "0"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            },
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "type": "block",
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:            "vg_name": "ceph_vg2"
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:        }
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]:    ]
Feb 28 05:49:58 np0005634017 peaceful_dhawan[384042]: }
Feb 28 05:49:58 np0005634017 systemd[1]: libpod-74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3.scope: Deactivated successfully.
Feb 28 05:49:58 np0005634017 podman[384026]: 2026-02-28 10:49:58.223599713 +0000 UTC m=+0.504682476 container died 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:49:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b-merged.mount: Deactivated successfully.
Feb 28 05:49:58 np0005634017 podman[384026]: 2026-02-28 10:49:58.283091366 +0000 UTC m=+0.564174119 container remove 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:49:58 np0005634017 systemd[1]: libpod-conmon-74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3.scope: Deactivated successfully.
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.717685466 +0000 UTC m=+0.055310965 container create f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:49:58 np0005634017 systemd[1]: Started libpod-conmon-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope.
Feb 28 05:49:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.694179277 +0000 UTC m=+0.031804796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.810817187 +0000 UTC m=+0.148442706 container init f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.819138044 +0000 UTC m=+0.156763553 container start f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.828150801 +0000 UTC m=+0.165776290 container attach f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:49:58 np0005634017 systemd[1]: libpod-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope: Deactivated successfully.
Feb 28 05:49:58 np0005634017 frosty_murdock[384140]: 167 167
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.830020364 +0000 UTC m=+0.167645833 container died f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:49:58 np0005634017 conmon[384140]: conmon f64a58eaa696dc2a43a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope/container/memory.events
Feb 28 05:49:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8504375ece0231ec830d1f9392e599e43a804b8bd28c9b12b04a52d52c0d756c-merged.mount: Deactivated successfully.
Feb 28 05:49:58 np0005634017 podman[384124]: 2026-02-28 10:49:58.8723786 +0000 UTC m=+0.210004089 container remove f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:49:58 np0005634017 systemd[1]: libpod-conmon-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope: Deactivated successfully.
Feb 28 05:49:59 np0005634017 podman[384164]: 2026-02-28 10:49:59.016491422 +0000 UTC m=+0.040437662 container create 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:49:59 np0005634017 systemd[1]: Started libpod-conmon-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope.
Feb 28 05:49:59 np0005634017 podman[384164]: 2026-02-28 10:49:58.996885704 +0000 UTC m=+0.020831984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:49:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:49:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:49:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:49:59 np0005634017 podman[384164]: 2026-02-28 10:49:59.132289548 +0000 UTC m=+0.156235838 container init 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:49:59 np0005634017 podman[384164]: 2026-02-28 10:49:59.140411339 +0000 UTC m=+0.164357579 container start 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 05:49:59 np0005634017 podman[384164]: 2026-02-28 10:49:59.143694242 +0000 UTC m=+0.167640482 container attach 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:49:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 05:49:59 np0005634017 lvm[384260]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:49:59 np0005634017 lvm[384261]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:49:59 np0005634017 lvm[384261]: VG ceph_vg0 finished
Feb 28 05:49:59 np0005634017 lvm[384260]: VG ceph_vg1 finished
Feb 28 05:49:59 np0005634017 lvm[384263]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:49:59 np0005634017 lvm[384263]: VG ceph_vg2 finished
Feb 28 05:49:59 np0005634017 confident_einstein[384181]: {}
Feb 28 05:49:59 np0005634017 systemd[1]: libpod-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope: Deactivated successfully.
Feb 28 05:49:59 np0005634017 systemd[1]: libpod-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope: Consumed 1.201s CPU time.
Feb 28 05:49:59 np0005634017 podman[384164]: 2026-02-28 10:49:59.984947767 +0000 UTC m=+1.008894007 container died 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 05:50:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd-merged.mount: Deactivated successfully.
Feb 28 05:50:00 np0005634017 podman[384164]: 2026-02-28 10:50:00.028211478 +0000 UTC m=+1.052157728 container remove 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 05:50:00 np0005634017 systemd[1]: libpod-conmon-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope: Deactivated successfully.
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:50:00 np0005634017 podman[384301]: 2026-02-28 10:50:00.248679103 +0000 UTC m=+0.060791881 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 05:50:00 np0005634017 podman[384302]: 2026-02-28 10:50:00.310001799 +0000 UTC m=+0.112196165 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:50:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:50:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991673712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:50:00 np0005634017 nova_compute[243452]: 2026-02-28 10:50:00.909 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.106 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.108 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3502MB free_disk=59.98738141171634GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.108 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.109 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.171 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.172 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.186 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.203 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.204 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.218 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.243 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.262 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:50:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:50:01.776 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:01 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:50:01.779 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:50:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:50:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267053798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.793 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.808 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.815 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.841 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.865 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:50:01 np0005634017 nova_compute[243452]: 2026-02-28 10:50:01.866 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:50:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:04 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:50:04.781 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:50:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:06 np0005634017 nova_compute[243452]: 2026-02-28 10:50:06.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:06 np0005634017 nova_compute[243452]: 2026-02-28 10:50:06.795 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:06 np0005634017 nova_compute[243452]: 2026-02-28 10:50:06.861 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:11 np0005634017 nova_compute[243452]: 2026-02-28 10:50:11.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:11 np0005634017 nova_compute[243452]: 2026-02-28 10:50:11.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 05:50:16 np0005634017 nova_compute[243452]: 2026-02-28 10:50:16.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:16 np0005634017 nova_compute[243452]: 2026-02-28 10:50:16.799 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 05:50:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 05:50:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 05:50:21 np0005634017 nova_compute[243452]: 2026-02-28 10:50:21.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:21 np0005634017 nova_compute[243452]: 2026-02-28 10:50:21.802 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 05:50:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 05:50:26 np0005634017 nova_compute[243452]: 2026-02-28 10:50:26.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:26 np0005634017 nova_compute[243452]: 2026-02-28 10:50:26.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 05:50:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:50:29
Feb 28 05:50:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:50:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:50:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups']
Feb 28 05:50:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:50:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:50:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:50:31 np0005634017 podman[384395]: 2026-02-28 10:50:31.130384218 +0000 UTC m=+0.052151676 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:50:31 np0005634017 podman[384394]: 2026-02-28 10:50:31.261811238 +0000 UTC m=+0.194437405 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 28 05:50:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:31 np0005634017 nova_compute[243452]: 2026-02-28 10:50:31.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:31 np0005634017 nova_compute[243452]: 2026-02-28 10:50:31.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:36 np0005634017 nova_compute[243452]: 2026-02-28 10:50:36.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:36 np0005634017 nova_compute[243452]: 2026-02-28 10:50:36.809 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:41 np0005634017 nova_compute[243452]: 2026-02-28 10:50:41.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5000234460926945e-05 of space, bias 1.0, pg target 0.004500070338278084 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494020776263654 of space, bias 1.0, pg target 0.7482062328790963 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:50:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:50:41 np0005634017 nova_compute[243452]: 2026-02-28 10:50:41.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:50:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb 28 05:50:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb 28 05:50:44 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb 28 05:50:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 0 B/s wr, 1 op/s
Feb 28 05:50:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:50:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085572625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:50:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:50:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085572625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:50:46 np0005634017 nova_compute[243452]: 2026-02-28 10:50:46.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:46 np0005634017 nova_compute[243452]: 2026-02-28 10:50:46.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 05:50:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:49 np0005634017 nova_compute[243452]: 2026-02-28 10:50:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 05:50:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 05:50:51 np0005634017 nova_compute[243452]: 2026-02-28 10:50:51.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:51 np0005634017 nova_compute[243452]: 2026-02-28 10:50:51.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:52 np0005634017 nova_compute[243452]: 2026-02-28 10:50:52.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:52 np0005634017 nova_compute[243452]: 2026-02-28 10:50:52.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:52 np0005634017 nova_compute[243452]: 2026-02-28 10:50:52.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:50:53 np0005634017 nova_compute[243452]: 2026-02-28 10:50:53.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 05:50:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb 28 05:50:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb 28 05:50:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb 28 05:50:55 np0005634017 nova_compute[243452]: 2026-02-28 10:50:55.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 05:50:56 np0005634017 nova_compute[243452]: 2026-02-28 10:50:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:56 np0005634017 nova_compute[243452]: 2026-02-28 10:50:56.550 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:56 np0005634017 nova_compute[243452]: 2026-02-28 10:50:56.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:50:57 np0005634017 nova_compute[243452]: 2026-02-28 10:50:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:57 np0005634017 nova_compute[243452]: 2026-02-28 10:50:57.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 1 op/s
Feb 28 05:50:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:50:57.892 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:50:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:50:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:50:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:50:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:50:58 np0005634017 nova_compute[243452]: 2026-02-28 10:50:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:50:58 np0005634017 nova_compute[243452]: 2026-02-28 10:50:58.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:50:58 np0005634017 nova_compute[243452]: 2026-02-28 10:50:58.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:50:58 np0005634017 nova_compute[243452]: 2026-02-28 10:50:58.420 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:50:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:50:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 1 op/s
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:51:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.385 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:51:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130511990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:51:00 np0005634017 nova_compute[243452]: 2026-02-28 10:51:00.990 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.223 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.224 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.98738014232367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.224 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.225 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.360325914 +0000 UTC m=+0.056060437 container create d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:51:01 np0005634017 systemd[1]: Started libpod-conmon-d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d.scope.
Feb 28 05:51:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.410 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.337837434 +0000 UTC m=+0.033571987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.437316245 +0000 UTC m=+0.133050818 container init d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:51:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.451481949 +0000 UTC m=+0.147216482 container start d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.457097918 +0000 UTC m=+0.152832451 container attach d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:51:01 np0005634017 pedantic_goodall[384633]: 167 167
Feb 28 05:51:01 np0005634017 systemd[1]: libpod-d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d.scope: Deactivated successfully.
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.458624472 +0000 UTC m=+0.154359015 container died d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:51:01 np0005634017 systemd[1]: var-lib-containers-storage-overlay-70744b1ea8b0bc9bc71b9766eb8c8300c7b9d4ed2adc96cc690e67523bcbaba8-merged.mount: Deactivated successfully.
Feb 28 05:51:01 np0005634017 podman[384624]: 2026-02-28 10:51:01.501398919 +0000 UTC m=+0.098479094 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 28 05:51:01 np0005634017 podman[384606]: 2026-02-28 10:51:01.515858001 +0000 UTC m=+0.211592524 container remove d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:51:01 np0005634017 podman[384621]: 2026-02-28 10:51:01.516418427 +0000 UTC m=+0.111603958 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 28 05:51:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 05:51:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:51:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:51:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:51:01 np0005634017 systemd[1]: libpod-conmon-d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d.scope: Deactivated successfully.
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:01 np0005634017 podman[384710]: 2026-02-28 10:51:01.68972003 +0000 UTC m=+0.049877401 container create 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:51:01 np0005634017 systemd[1]: Started libpod-conmon-9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed.scope.
Feb 28 05:51:01 np0005634017 podman[384710]: 2026-02-28 10:51:01.665400348 +0000 UTC m=+0.025557709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:51:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:51:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:01 np0005634017 podman[384710]: 2026-02-28 10:51:01.784767245 +0000 UTC m=+0.144924626 container init 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:51:01 np0005634017 podman[384710]: 2026-02-28 10:51:01.796628693 +0000 UTC m=+0.156786034 container start 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:51:01 np0005634017 podman[384710]: 2026-02-28 10:51:01.801365788 +0000 UTC m=+0.161523209 container attach 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:51:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266926937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.990 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:51:01 np0005634017 nova_compute[243452]: 2026-02-28 10:51:01.997 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:51:02 np0005634017 nova_compute[243452]: 2026-02-28 10:51:02.067 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:51:02 np0005634017 nova_compute[243452]: 2026-02-28 10:51:02.068 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:51:02 np0005634017 nova_compute[243452]: 2026-02-28 10:51:02.068 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:51:02 np0005634017 dazzling_hamilton[384727]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:51:02 np0005634017 dazzling_hamilton[384727]: --> All data devices are unavailable
Feb 28 05:51:02 np0005634017 systemd[1]: libpod-9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed.scope: Deactivated successfully.
Feb 28 05:51:02 np0005634017 podman[384710]: 2026-02-28 10:51:02.26414977 +0000 UTC m=+0.624307121 container died 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:51:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22-merged.mount: Deactivated successfully.
Feb 28 05:51:02 np0005634017 podman[384710]: 2026-02-28 10:51:02.310603493 +0000 UTC m=+0.670760854 container remove 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:51:02 np0005634017 systemd[1]: libpod-conmon-9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed.scope: Deactivated successfully.
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.805778987 +0000 UTC m=+0.045221388 container create 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 05:51:02 np0005634017 systemd[1]: Started libpod-conmon-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope.
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.791215793 +0000 UTC m=+0.030658204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:51:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.905936748 +0000 UTC m=+0.145379169 container init 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.914487631 +0000 UTC m=+0.153930032 container start 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.918415183 +0000 UTC m=+0.157857584 container attach 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:51:02 np0005634017 stoic_wiles[384839]: 167 167
Feb 28 05:51:02 np0005634017 systemd[1]: libpod-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope: Deactivated successfully.
Feb 28 05:51:02 np0005634017 conmon[384839]: conmon 441766aeb07440128770 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope/container/memory.events
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.921258844 +0000 UTC m=+0.160701265 container died 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:51:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f78a3bdc11e5f1927c896a98679f55a83a367e4370f29917b3d2e88d988ecb98-merged.mount: Deactivated successfully.
Feb 28 05:51:02 np0005634017 podman[384822]: 2026-02-28 10:51:02.96360987 +0000 UTC m=+0.203052261 container remove 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:51:02 np0005634017 systemd[1]: libpod-conmon-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope: Deactivated successfully.
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.099384234 +0000 UTC m=+0.044143297 container create e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:51:03 np0005634017 systemd[1]: Started libpod-conmon-e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc.scope.
Feb 28 05:51:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:51:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.081790953 +0000 UTC m=+0.026550026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.188280425 +0000 UTC m=+0.133039498 container init e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.194935044 +0000 UTC m=+0.139694087 container start e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.198565107 +0000 UTC m=+0.143324150 container attach e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:51:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:03 np0005634017 sharp_borg[384880]: {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:    "0": [
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:        {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "devices": [
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "/dev/loop3"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            ],
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_name": "ceph_lv0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_size": "21470642176",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "name": "ceph_lv0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "tags": {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cluster_name": "ceph",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.crush_device_class": "",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.encrypted": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.objectstore": "bluestore",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osd_id": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.type": "block",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.vdo": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.with_tpm": "0"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            },
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "type": "block",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "vg_name": "ceph_vg0"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:        }
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:    ],
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:    "1": [
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:        {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "devices": [
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "/dev/loop4"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            ],
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_name": "ceph_lv1",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_size": "21470642176",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "name": "ceph_lv1",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "tags": {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cluster_name": "ceph",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.crush_device_class": "",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.encrypted": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.objectstore": "bluestore",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osd_id": "1",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.type": "block",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.vdo": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.with_tpm": "0"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            },
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "type": "block",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "vg_name": "ceph_vg1"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:        }
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:    ],
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:    "2": [
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:        {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "devices": [
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "/dev/loop5"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            ],
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_name": "ceph_lv2",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_size": "21470642176",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "name": "ceph_lv2",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "tags": {
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.cluster_name": "ceph",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.crush_device_class": "",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.encrypted": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.objectstore": "bluestore",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osd_id": "2",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.type": "block",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.vdo": "0",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:                "ceph.with_tpm": "0"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            },
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "type": "block",
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:            "vg_name": "ceph_vg2"
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:        }
Feb 28 05:51:03 np0005634017 sharp_borg[384880]:    ]
Feb 28 05:51:03 np0005634017 sharp_borg[384880]: }
Feb 28 05:51:03 np0005634017 systemd[1]: libpod-e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc.scope: Deactivated successfully.
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.510692492 +0000 UTC m=+0.455451585 container died e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 05:51:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5-merged.mount: Deactivated successfully.
Feb 28 05:51:03 np0005634017 podman[384863]: 2026-02-28 10:51:03.552885683 +0000 UTC m=+0.497644726 container remove e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:51:03 np0005634017 systemd[1]: libpod-conmon-e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc.scope: Deactivated successfully.
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.045575205 +0000 UTC m=+0.064709183 container create 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 05:51:04 np0005634017 systemd[1]: Started libpod-conmon-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope.
Feb 28 05:51:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.01835018 +0000 UTC m=+0.037484178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.122134184 +0000 UTC m=+0.141268182 container init 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.128346241 +0000 UTC m=+0.147480179 container start 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.131679806 +0000 UTC m=+0.150813764 container attach 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 28 05:51:04 np0005634017 serene_lamport[384980]: 167 167
Feb 28 05:51:04 np0005634017 systemd[1]: libpod-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope: Deactivated successfully.
Feb 28 05:51:04 np0005634017 conmon[384980]: conmon 5779fd0bca8c95411368 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope/container/memory.events
Feb 28 05:51:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.135770282 +0000 UTC m=+0.154904340 container died 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:51:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-aeddb8d30aa43e29dce9ac7cebd27422e9e453fd4c104c9142a1c2cfd8697cd8-merged.mount: Deactivated successfully.
Feb 28 05:51:04 np0005634017 podman[384964]: 2026-02-28 10:51:04.181550115 +0000 UTC m=+0.200684053 container remove 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:51:04 np0005634017 systemd[1]: libpod-conmon-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope: Deactivated successfully.
Feb 28 05:51:04 np0005634017 podman[385004]: 2026-02-28 10:51:04.386422606 +0000 UTC m=+0.065016291 container create 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:51:04 np0005634017 systemd[1]: Started libpod-conmon-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope.
Feb 28 05:51:04 np0005634017 podman[385004]: 2026-02-28 10:51:04.359187091 +0000 UTC m=+0.037780836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:51:04 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:51:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:04 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:51:04 np0005634017 podman[385004]: 2026-02-28 10:51:04.495578843 +0000 UTC m=+0.174172578 container init 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:51:04 np0005634017 podman[385004]: 2026-02-28 10:51:04.504776545 +0000 UTC m=+0.183370200 container start 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:51:04 np0005634017 podman[385004]: 2026-02-28 10:51:04.511742733 +0000 UTC m=+0.190336438 container attach 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:51:05 np0005634017 lvm[385100]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:51:05 np0005634017 lvm[385100]: VG ceph_vg1 finished
Feb 28 05:51:05 np0005634017 lvm[385097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:51:05 np0005634017 lvm[385097]: VG ceph_vg0 finished
Feb 28 05:51:05 np0005634017 lvm[385102]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:51:05 np0005634017 lvm[385102]: VG ceph_vg2 finished
Feb 28 05:51:05 np0005634017 flamboyant_keldysh[385021]: {}
Feb 28 05:51:05 np0005634017 systemd[1]: libpod-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope: Deactivated successfully.
Feb 28 05:51:05 np0005634017 systemd[1]: libpod-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope: Consumed 1.332s CPU time.
Feb 28 05:51:05 np0005634017 podman[385004]: 2026-02-28 10:51:05.380979815 +0000 UTC m=+1.059573470 container died 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:51:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419-merged.mount: Deactivated successfully.
Feb 28 05:51:05 np0005634017 podman[385004]: 2026-02-28 10:51:05.421538529 +0000 UTC m=+1.100132204 container remove 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:51:05 np0005634017 systemd[1]: libpod-conmon-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope: Deactivated successfully.
Feb 28 05:51:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:51:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:51:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:51:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:51:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:51:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:51:06 np0005634017 nova_compute[243452]: 2026-02-28 10:51:06.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:06 np0005634017 nova_compute[243452]: 2026-02-28 10:51:06.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:11 np0005634017 nova_compute[243452]: 2026-02-28 10:51:11.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:11 np0005634017 nova_compute[243452]: 2026-02-28 10:51:11.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:16 np0005634017 nova_compute[243452]: 2026-02-28 10:51:16.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:16 np0005634017 nova_compute[243452]: 2026-02-28 10:51:16.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:21 np0005634017 nova_compute[243452]: 2026-02-28 10:51:21.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:21 np0005634017 nova_compute[243452]: 2026-02-28 10:51:21.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:26 np0005634017 nova_compute[243452]: 2026-02-28 10:51:26.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:26 np0005634017 nova_compute[243452]: 2026-02-28 10:51:26.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:51:29
Feb 28 05:51:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:51:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:51:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.rgw.root', 'vms', 'volumes', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Feb 28 05:51:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:51:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:51:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:51:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:31 np0005634017 nova_compute[243452]: 2026-02-28 10:51:31.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:31 np0005634017 nova_compute[243452]: 2026-02-28 10:51:31.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:32 np0005634017 podman[385146]: 2026-02-28 10:51:32.141879123 +0000 UTC m=+0.076721285 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 05:51:32 np0005634017 podman[385145]: 2026-02-28 10:51:32.201129579 +0000 UTC m=+0.136104765 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:51:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:36 np0005634017 nova_compute[243452]: 2026-02-28 10:51:36.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:36 np0005634017 nova_compute[243452]: 2026-02-28 10:51:36.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:37 np0005634017 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 28 05:51:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5021395138358437e-05 of space, bias 1.0, pg target 0.004506418541507531 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000670922907129849 of space, bias 1.0, pg target 0.2012768721389547 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:51:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:51:41 np0005634017 nova_compute[243452]: 2026-02-28 10:51:41.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:41 np0005634017 nova_compute[243452]: 2026-02-28 10:51:41.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:51:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915553886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:51:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:51:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915553886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:51:46 np0005634017 nova_compute[243452]: 2026-02-28 10:51:46.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:46 np0005634017 nova_compute[243452]: 2026-02-28 10:51:46.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:51 np0005634017 nova_compute[243452]: 2026-02-28 10:51:51.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:51 np0005634017 nova_compute[243452]: 2026-02-28 10:51:51.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:52 np0005634017 nova_compute[243452]: 2026-02-28 10:51:52.070 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:52 np0005634017 nova_compute[243452]: 2026-02-28 10:51:52.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:52 np0005634017 nova_compute[243452]: 2026-02-28 10:51:52.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:52 np0005634017 nova_compute[243452]: 2026-02-28 10:51:52.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:51:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:54 np0005634017 nova_compute[243452]: 2026-02-28 10:51:54.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:56 np0005634017 nova_compute[243452]: 2026-02-28 10:51:56.577 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:56 np0005634017 nova_compute[243452]: 2026-02-28 10:51:56.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:51:57 np0005634017 nova_compute[243452]: 2026-02-28 10:51:57.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:57 np0005634017 nova_compute[243452]: 2026-02-28 10:51:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:57 np0005634017 nova_compute[243452]: 2026-02-28 10:51:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:51:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:51:57.894 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:51:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:51:57.894 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:51:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:51:57.895 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:51:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:51:59 np0005634017 nova_compute[243452]: 2026-02-28 10:51:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:51:59 np0005634017 nova_compute[243452]: 2026-02-28 10:51:59.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:51:59 np0005634017 nova_compute[243452]: 2026-02-28 10:51:59.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:51:59 np0005634017 nova_compute[243452]: 2026-02-28 10:51:59.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:51:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:52:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:52:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:52:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:52:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:52:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:52:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:01 np0005634017 nova_compute[243452]: 2026-02-28 10:52:01.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:01 np0005634017 nova_compute[243452]: 2026-02-28 10:52:01.845 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.345 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:52:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:52:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151074578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:52:02 np0005634017 nova_compute[243452]: 2026-02-28 10:52:02.867 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.012 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.013 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.98738014232367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.014 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.014 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.092 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.093 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.118 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:52:03 np0005634017 podman[385218]: 2026-02-28 10:52:03.123205629 +0000 UTC m=+0.057268331 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 28 05:52:03 np0005634017 podman[385217]: 2026-02-28 10:52:03.145580546 +0000 UTC m=+0.081853981 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:52:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:52:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075238627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.635 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.642 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.662 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.665 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:52:03 np0005634017 nova_compute[243452]: 2026-02-28 10:52:03.665 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:52:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:06 np0005634017 podman[385376]: 2026-02-28 10:52:06.208682632 +0000 UTC m=+0.088108708 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:52:06 np0005634017 podman[385376]: 2026-02-28 10:52:06.318123318 +0000 UTC m=+0.197549344 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:52:06 np0005634017 nova_compute[243452]: 2026-02-28 10:52:06.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:06 np0005634017 nova_compute[243452]: 2026-02-28 10:52:06.847 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:52:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:52:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:52:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.165900691 +0000 UTC m=+0.050191509 container create 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:52:08 np0005634017 systemd[1]: Started libpod-conmon-15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae.scope.
Feb 28 05:52:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.237101488 +0000 UTC m=+0.121392296 container init 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.142444794 +0000 UTC m=+0.026735702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.242390799 +0000 UTC m=+0.126681647 container start 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.245931729 +0000 UTC m=+0.130222557 container attach 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:52:08 np0005634017 cranky_curran[385724]: 167 167
Feb 28 05:52:08 np0005634017 systemd[1]: libpod-15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae.scope: Deactivated successfully.
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.24769139 +0000 UTC m=+0.131982238 container died 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 05:52:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-821d3bad9b9f80400923a10234d51b979d7ff0c54b40cbb3c5c9572aa0b78188-merged.mount: Deactivated successfully.
Feb 28 05:52:08 np0005634017 podman[385708]: 2026-02-28 10:52:08.293194405 +0000 UTC m=+0.177485253 container remove 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:52:08 np0005634017 systemd[1]: libpod-conmon-15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae.scope: Deactivated successfully.
Feb 28 05:52:08 np0005634017 podman[385748]: 2026-02-28 10:52:08.463672977 +0000 UTC m=+0.059867765 container create 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 05:52:08 np0005634017 systemd[1]: Started libpod-conmon-3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046.scope.
Feb 28 05:52:08 np0005634017 podman[385748]: 2026-02-28 10:52:08.439600122 +0000 UTC m=+0.035794960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:52:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:52:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:08 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:08 np0005634017 podman[385748]: 2026-02-28 10:52:08.574711978 +0000 UTC m=+0.170906766 container init 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 05:52:08 np0005634017 podman[385748]: 2026-02-28 10:52:08.589577531 +0000 UTC m=+0.185772319 container start 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:52:08 np0005634017 podman[385748]: 2026-02-28 10:52:08.594297045 +0000 UTC m=+0.190491843 container attach 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:52:08 np0005634017 nova_compute[243452]: 2026-02-28 10:52:08.663 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:09 np0005634017 boring_wescoff[385765]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:52:09 np0005634017 boring_wescoff[385765]: --> All data devices are unavailable
Feb 28 05:52:09 np0005634017 systemd[1]: libpod-3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046.scope: Deactivated successfully.
Feb 28 05:52:09 np0005634017 podman[385748]: 2026-02-28 10:52:09.069410549 +0000 UTC m=+0.665605307 container died 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:52:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d-merged.mount: Deactivated successfully.
Feb 28 05:52:09 np0005634017 podman[385748]: 2026-02-28 10:52:09.116320764 +0000 UTC m=+0.712515522 container remove 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 05:52:09 np0005634017 systemd[1]: libpod-conmon-3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046.scope: Deactivated successfully.
Feb 28 05:52:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.61257912 +0000 UTC m=+0.057783006 container create b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:52:09 np0005634017 systemd[1]: Started libpod-conmon-b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655.scope.
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.583598625 +0000 UTC m=+0.028802541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:52:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.688812299 +0000 UTC m=+0.134016165 container init b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.697035834 +0000 UTC m=+0.142239720 container start b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:52:09 np0005634017 festive_lewin[385873]: 167 167
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.700968255 +0000 UTC m=+0.146172151 container attach b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.701611694 +0000 UTC m=+0.146815560 container died b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 05:52:09 np0005634017 systemd[1]: libpod-b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655.scope: Deactivated successfully.
Feb 28 05:52:09 np0005634017 systemd[1]: var-lib-containers-storage-overlay-554477ad0fff0bd02d12df5dbd787b68ae9a429f10ccf7dfca645d3b5766b3f0-merged.mount: Deactivated successfully.
Feb 28 05:52:09 np0005634017 podman[385857]: 2026-02-28 10:52:09.745568005 +0000 UTC m=+0.190771861 container remove b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:52:09 np0005634017 systemd[1]: libpod-conmon-b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655.scope: Deactivated successfully.
Feb 28 05:52:09 np0005634017 podman[385896]: 2026-02-28 10:52:09.902034619 +0000 UTC m=+0.043305564 container create da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:52:09 np0005634017 systemd[1]: Started libpod-conmon-da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966.scope.
Feb 28 05:52:09 np0005634017 podman[385896]: 2026-02-28 10:52:09.884393206 +0000 UTC m=+0.025664161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:52:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:52:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:10 np0005634017 podman[385896]: 2026-02-28 10:52:10.021726575 +0000 UTC m=+0.162997540 container init da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:52:10 np0005634017 podman[385896]: 2026-02-28 10:52:10.033728167 +0000 UTC m=+0.174999142 container start da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:52:10 np0005634017 podman[385896]: 2026-02-28 10:52:10.037809423 +0000 UTC m=+0.179080408 container attach da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]: {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:    "0": [
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:        {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "devices": [
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "/dev/loop3"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            ],
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_name": "ceph_lv0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_size": "21470642176",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "name": "ceph_lv0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "tags": {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cluster_name": "ceph",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.crush_device_class": "",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.encrypted": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.objectstore": "bluestore",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osd_id": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.type": "block",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.vdo": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.with_tpm": "0"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            },
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "type": "block",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "vg_name": "ceph_vg0"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:        }
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:    ],
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:    "1": [
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:        {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "devices": [
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "/dev/loop4"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            ],
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_name": "ceph_lv1",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_size": "21470642176",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "name": "ceph_lv1",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "tags": {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cluster_name": "ceph",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.crush_device_class": "",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.encrypted": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.objectstore": "bluestore",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osd_id": "1",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.type": "block",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.vdo": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.with_tpm": "0"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            },
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "type": "block",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "vg_name": "ceph_vg1"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:        }
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:    ],
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:    "2": [
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:        {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "devices": [
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "/dev/loop5"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            ],
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_name": "ceph_lv2",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_size": "21470642176",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "name": "ceph_lv2",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "tags": {
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.cluster_name": "ceph",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.crush_device_class": "",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.encrypted": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.objectstore": "bluestore",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osd_id": "2",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.type": "block",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.vdo": "0",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:                "ceph.with_tpm": "0"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            },
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "type": "block",
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:            "vg_name": "ceph_vg2"
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:        }
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]:    ]
Feb 28 05:52:10 np0005634017 jovial_hoover[385913]: }
Feb 28 05:52:10 np0005634017 systemd[1]: libpod-da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966.scope: Deactivated successfully.
Feb 28 05:52:10 np0005634017 podman[385896]: 2026-02-28 10:52:10.369218496 +0000 UTC m=+0.510489481 container died da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:52:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1-merged.mount: Deactivated successfully.
Feb 28 05:52:10 np0005634017 podman[385896]: 2026-02-28 10:52:10.42343783 +0000 UTC m=+0.564708795 container remove da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:52:10 np0005634017 systemd[1]: libpod-conmon-da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966.scope: Deactivated successfully.
Feb 28 05:52:10 np0005634017 podman[386000]: 2026-02-28 10:52:10.970485981 +0000 UTC m=+0.058623870 container create b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:52:11 np0005634017 systemd[1]: Started libpod-conmon-b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef.scope.
Feb 28 05:52:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:52:11 np0005634017 podman[386000]: 2026-02-28 10:52:10.948113744 +0000 UTC m=+0.036251733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:52:11 np0005634017 podman[386000]: 2026-02-28 10:52:11.05968136 +0000 UTC m=+0.147819289 container init b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:52:11 np0005634017 podman[386000]: 2026-02-28 10:52:11.066144334 +0000 UTC m=+0.154282223 container start b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 05:52:11 np0005634017 podman[386000]: 2026-02-28 10:52:11.070641092 +0000 UTC m=+0.158779021 container attach b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:52:11 np0005634017 charming_lichterman[386016]: 167 167
Feb 28 05:52:11 np0005634017 systemd[1]: libpod-b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef.scope: Deactivated successfully.
Feb 28 05:52:11 np0005634017 podman[386000]: 2026-02-28 10:52:11.07374165 +0000 UTC m=+0.161879539 container died b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:52:11 np0005634017 systemd[1]: var-lib-containers-storage-overlay-71a6cfd8f9c614f0b635e1582e15089515240a3e763b606f33304ad3e5e8e714-merged.mount: Deactivated successfully.
Feb 28 05:52:11 np0005634017 podman[386000]: 2026-02-28 10:52:11.113298106 +0000 UTC m=+0.201436035 container remove b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:52:11 np0005634017 systemd[1]: libpod-conmon-b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef.scope: Deactivated successfully.
Feb 28 05:52:11 np0005634017 podman[386040]: 2026-02-28 10:52:11.293216907 +0000 UTC m=+0.059439933 container create 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 28 05:52:11 np0005634017 nova_compute[243452]: 2026-02-28 10:52:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:11 np0005634017 systemd[1]: Started libpod-conmon-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope.
Feb 28 05:52:11 np0005634017 podman[386040]: 2026-02-28 10:52:11.269846132 +0000 UTC m=+0.036069238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:52:11 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:52:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:11 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:52:11 np0005634017 podman[386040]: 2026-02-28 10:52:11.395494928 +0000 UTC m=+0.161718044 container init 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:52:11 np0005634017 podman[386040]: 2026-02-28 10:52:11.405915995 +0000 UTC m=+0.172139061 container start 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:52:11 np0005634017 podman[386040]: 2026-02-28 10:52:11.409701733 +0000 UTC m=+0.175924859 container attach 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 05:52:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:11 np0005634017 nova_compute[243452]: 2026-02-28 10:52:11.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:11 np0005634017 nova_compute[243452]: 2026-02-28 10:52:11.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:12 np0005634017 lvm[386135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:52:12 np0005634017 lvm[386135]: VG ceph_vg0 finished
Feb 28 05:52:12 np0005634017 lvm[386136]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:52:12 np0005634017 lvm[386136]: VG ceph_vg1 finished
Feb 28 05:52:12 np0005634017 lvm[386138]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:52:12 np0005634017 lvm[386138]: VG ceph_vg2 finished
Feb 28 05:52:12 np0005634017 priceless_black[386057]: {}
Feb 28 05:52:12 np0005634017 systemd[1]: libpod-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope: Deactivated successfully.
Feb 28 05:52:12 np0005634017 podman[386040]: 2026-02-28 10:52:12.260957192 +0000 UTC m=+1.027180258 container died 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:52:12 np0005634017 systemd[1]: libpod-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope: Consumed 1.214s CPU time.
Feb 28 05:52:12 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a-merged.mount: Deactivated successfully.
Feb 28 05:52:12 np0005634017 podman[386040]: 2026-02-28 10:52:12.313844167 +0000 UTC m=+1.080067203 container remove 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:52:12 np0005634017 systemd[1]: libpod-conmon-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope: Deactivated successfully.
Feb 28 05:52:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:52:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:52:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:52:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:16 np0005634017 nova_compute[243452]: 2026-02-28 10:52:16.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:16 np0005634017 nova_compute[243452]: 2026-02-28 10:52:16.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.157730) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939157815, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1832, "num_deletes": 256, "total_data_size": 3081956, "memory_usage": 3119680, "flush_reason": "Manual Compaction"}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939172806, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3006658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54450, "largest_seqno": 56281, "table_properties": {"data_size": 2998183, "index_size": 5224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17062, "raw_average_key_size": 19, "raw_value_size": 2981230, "raw_average_value_size": 3474, "num_data_blocks": 233, "num_entries": 858, "num_filter_entries": 858, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275744, "oldest_key_time": 1772275744, "file_creation_time": 1772275939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15164 microseconds, and 7499 cpu microseconds.
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.172895) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3006658 bytes OK
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.172935) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.174585) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.174608) EVENT_LOG_v1 {"time_micros": 1772275939174600, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.174639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3074132, prev total WAL file size 3074132, number of live WAL files 2.
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.175524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323536' seq:72057594037927935, type:22 .. '6C6F676D0032353037' seq:0, type:0; will stop at (end)
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2936KB)], [128(8141KB)]
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939175595, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11343498, "oldest_snapshot_seqno": -1}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7728 keys, 11229095 bytes, temperature: kUnknown
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939237988, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11229095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11177467, "index_size": 31227, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19333, "raw_key_size": 201048, "raw_average_key_size": 26, "raw_value_size": 11039720, "raw_average_value_size": 1428, "num_data_blocks": 1228, "num_entries": 7728, "num_filter_entries": 7728, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.238586) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11229095 bytes
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.240155) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.7 rd, 178.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.7) OK, records in: 8256, records dropped: 528 output_compression: NoCompression
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.240186) EVENT_LOG_v1 {"time_micros": 1772275939240171, "job": 78, "event": "compaction_finished", "compaction_time_micros": 62762, "compaction_time_cpu_micros": 35832, "output_level": 6, "num_output_files": 1, "total_output_size": 11229095, "num_input_records": 8256, "num_output_records": 7728, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939241295, "job": 78, "event": "table_file_deletion", "file_number": 130}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939242818, "job": 78, "event": "table_file_deletion", "file_number": 128}
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.175417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:21 np0005634017 nova_compute[243452]: 2026-02-28 10:52:21.095 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:21 np0005634017 nova_compute[243452]: 2026-02-28 10:52:21.590 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:21 np0005634017 nova_compute[243452]: 2026-02-28 10:52:21.853 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:26 np0005634017 nova_compute[243452]: 2026-02-28 10:52:26.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:26 np0005634017 nova_compute[243452]: 2026-02-28 10:52:26.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:52:29
Feb 28 05:52:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:52:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:52:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'images']
Feb 28 05:52:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:52:29 np0005634017 nova_compute[243452]: 2026-02-28 10:52:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:29 np0005634017 nova_compute[243452]: 2026-02-28 10:52:29.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:52:29 np0005634017 nova_compute[243452]: 2026-02-28 10:52:29.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:52:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:30 np0005634017 nova_compute[243452]: 2026-02-28 10:52:30.269 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:52:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:52:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:31 np0005634017 nova_compute[243452]: 2026-02-28 10:52:31.596 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:31 np0005634017 nova_compute[243452]: 2026-02-28 10:52:31.859 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:33 np0005634017 nova_compute[243452]: 2026-02-28 10:52:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:33 np0005634017 nova_compute[243452]: 2026-02-28 10:52:33.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:52:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:34 np0005634017 podman[386184]: 2026-02-28 10:52:34.164360479 +0000 UTC m=+0.093567525 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 28 05:52:34 np0005634017 podman[386183]: 2026-02-28 10:52:34.19532807 +0000 UTC m=+0.124574477 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:52:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:36 np0005634017 nova_compute[243452]: 2026-02-28 10:52:36.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:36 np0005634017 nova_compute[243452]: 2026-02-28 10:52:36.862 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.623705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958623780, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 398, "num_deletes": 251, "total_data_size": 296459, "memory_usage": 303792, "flush_reason": "Manual Compaction"}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958628610, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 294051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56282, "largest_seqno": 56679, "table_properties": {"data_size": 291613, "index_size": 537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5785, "raw_average_key_size": 18, "raw_value_size": 286901, "raw_average_value_size": 922, "num_data_blocks": 24, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275940, "oldest_key_time": 1772275940, "file_creation_time": 1772275958, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 4939 microseconds, and 2084 cpu microseconds.
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.628656) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 294051 bytes OK
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.628678) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630367) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630391) EVENT_LOG_v1 {"time_micros": 1772275958630384, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630418) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 293926, prev total WAL file size 293926, number of live WAL files 2.
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630970) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(287KB)], [131(10MB)]
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958631026, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11523146, "oldest_snapshot_seqno": -1}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7529 keys, 9824285 bytes, temperature: kUnknown
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958675552, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9824285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9775221, "index_size": 29111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 197590, "raw_average_key_size": 26, "raw_value_size": 9642145, "raw_average_value_size": 1280, "num_data_blocks": 1130, "num_entries": 7529, "num_filter_entries": 7529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275958, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.675954) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9824285 bytes
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.678149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.8 rd, 219.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(72.6) write-amplify(33.4) OK, records in: 8039, records dropped: 510 output_compression: NoCompression
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.678174) EVENT_LOG_v1 {"time_micros": 1772275958678162, "job": 80, "event": "compaction_finished", "compaction_time_micros": 44703, "compaction_time_cpu_micros": 22845, "output_level": 6, "num_output_files": 1, "total_output_size": 9824285, "num_input_records": 8039, "num_output_records": 7529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958678361, "job": 80, "event": "table_file_deletion", "file_number": 133}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958679445, "job": 80, "event": "table_file_deletion", "file_number": 131}
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:38 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:52:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5021395138358437e-05 of space, bias 1.0, pg target 0.004506418541507531 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000670922907129849 of space, bias 1.0, pg target 0.2012768721389547 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:52:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:52:41 np0005634017 nova_compute[243452]: 2026-02-28 10:52:41.600 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:41 np0005634017 nova_compute[243452]: 2026-02-28 10:52:41.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:52:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 05:52:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:52:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149900555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:52:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:52:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149900555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:52:46 np0005634017 nova_compute[243452]: 2026-02-28 10:52:46.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:46 np0005634017 nova_compute[243452]: 2026-02-28 10:52:46.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 05:52:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 05:52:51 np0005634017 nova_compute[243452]: 2026-02-28 10:52:51.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 05:52:51 np0005634017 nova_compute[243452]: 2026-02-28 10:52:51.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:51 np0005634017 nova_compute[243452]: 2026-02-28 10:52:51.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb 28 05:52:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb 28 05:52:53 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb 28 05:52:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Feb 28 05:52:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:54 np0005634017 nova_compute[243452]: 2026-02-28 10:52:54.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:54 np0005634017 nova_compute[243452]: 2026-02-28 10:52:54.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:54 np0005634017 nova_compute[243452]: 2026-02-28 10:52:54.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:52:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 716 B/s wr, 20 op/s
Feb 28 05:52:56 np0005634017 nova_compute[243452]: 2026-02-28 10:52:56.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb 28 05:52:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb 28 05:52:56 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb 28 05:52:56 np0005634017 nova_compute[243452]: 2026-02-28 10:52:56.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:56 np0005634017 nova_compute[243452]: 2026-02-28 10:52:56.872 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:52:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 895 B/s wr, 25 op/s
Feb 28 05:52:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:52:57.895 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:52:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:52:57.896 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:52:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:52:57.896 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:52:58 np0005634017 nova_compute[243452]: 2026-02-28 10:52:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:52:59 np0005634017 nova_compute[243452]: 2026-02-28 10:52:59.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:59 np0005634017 nova_compute[243452]: 2026-02-28 10:52:59.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:52:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 54 op/s
Feb 28 05:53:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:53:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:53:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:53:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:53:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:53:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:53:01 np0005634017 nova_compute[243452]: 2026-02-28 10:53:01.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:01 np0005634017 nova_compute[243452]: 2026-02-28 10:53:01.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:53:01 np0005634017 nova_compute[243452]: 2026-02-28 10:53:01.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:53:01 np0005634017 nova_compute[243452]: 2026-02-28 10:53:01.331 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:53:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 58 op/s
Feb 28 05:53:01 np0005634017 nova_compute[243452]: 2026-02-28 10:53:01.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:01 np0005634017 nova_compute[243452]: 2026-02-28 10:53:01.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 49 op/s
Feb 28 05:53:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb 28 05:53:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb 28 05:53:04 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:53:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:53:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649519656' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:53:04 np0005634017 nova_compute[243452]: 2026-02-28 10:53:04.908 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.147 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:53:05 np0005634017 podman[386250]: 2026-02-28 10:53:05.14862071 +0000 UTC m=+0.079817003 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.149 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98737560398877GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.149 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.149 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:53:05 np0005634017 podman[386249]: 2026-02-28 10:53:05.180852207 +0000 UTC m=+0.115058056 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.217 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.217 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.240 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:53:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 33 op/s
Feb 28 05:53:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:53:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462219130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.790 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.796 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.817 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.818 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:53:05 np0005634017 nova_compute[243452]: 2026-02-28 10:53:05.819 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:53:06 np0005634017 nova_compute[243452]: 2026-02-28 10:53:06.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:06 np0005634017 nova_compute[243452]: 2026-02-28 10:53:06.876 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Feb 28 05:53:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 6 op/s
Feb 28 05:53:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:53:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 56K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1377 writes, 6469 keys, 1377 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s#012Interval WAL: 1377 writes, 1377 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     81.0      0.85              0.18        40    0.021       0      0       0.0       0.0#012  L6      1/0    9.37 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    163.0    138.8      2.45              0.93        39    0.063    244K    21K       0.0       0.0#012 Sum      1/0    9.37 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9    121.1    123.9      3.30              1.11        79    0.042    244K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    142.8    145.7      0.47              0.22        12    0.039     47K   3100       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    163.0    138.8      2.45              0.93        39    0.063    244K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     81.5      0.84              0.18        39    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.067, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.40 GB write, 0.09 MB/s write, 0.39 GB read, 0.08 MB/s read, 3.3 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 44.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000477 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2812,42.95 MB,14.1284%) FilterBlock(80,687.30 KB,0.220786%) IndexBlock(80,1.13 MB,0.370151%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 05:53:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 64 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 6.4 MiB/s wr, 21 op/s
Feb 28 05:53:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb 28 05:53:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb 28 05:53:11 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb 28 05:53:11 np0005634017 nova_compute[243452]: 2026-02-28 10:53:11.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:53:11 np0005634017 nova_compute[243452]: 2026-02-28 10:53:11.879 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:53:11 np0005634017 nova_compute[243452]: 2026-02-28 10:53:11.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:53:11 np0005634017 nova_compute[243452]: 2026-02-28 10:53:11.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:53:12 np0005634017 nova_compute[243452]: 2026-02-28 10:53:12.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:12 np0005634017 nova_compute[243452]: 2026-02-28 10:53:12.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:53:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 9.4 MiB/s wr, 24 op/s
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:53:13 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:14.009244534 +0000 UTC m=+0.060794851 container create 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:53:14 np0005634017 systemd[1]: Started libpod-conmon-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope.
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:13.986110466 +0000 UTC m=+0.037660833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:53:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:14.101288364 +0000 UTC m=+0.152838711 container init 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:14.112174354 +0000 UTC m=+0.163724681 container start 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:14.115597282 +0000 UTC m=+0.167147629 container attach 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:53:14 np0005634017 distracted_easley[386481]: 167 167
Feb 28 05:53:14 np0005634017 systemd[1]: libpod-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope: Deactivated successfully.
Feb 28 05:53:14 np0005634017 conmon[386481]: conmon 12bb7dea7785adb32ea6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope/container/memory.events
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:14.123807205 +0000 UTC m=+0.175357552 container died 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:53:14 np0005634017 systemd[1]: var-lib-containers-storage-overlay-37b085b6d3c3d884053068a0a68a5fa90f04f1c8e24717b306301de02e209303-merged.mount: Deactivated successfully.
Feb 28 05:53:14 np0005634017 podman[386465]: 2026-02-28 10:53:14.160191811 +0000 UTC m=+0.211742128 container remove 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:53:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:14 np0005634017 systemd[1]: libpod-conmon-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope: Deactivated successfully.
Feb 28 05:53:14 np0005634017 podman[386507]: 2026-02-28 10:53:14.362234882 +0000 UTC m=+0.071739513 container create 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:53:14 np0005634017 systemd[1]: Started libpod-conmon-8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a.scope.
Feb 28 05:53:14 np0005634017 podman[386507]: 2026-02-28 10:53:14.33653824 +0000 UTC m=+0.046042871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:53:14 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:53:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:14 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:14 np0005634017 podman[386507]: 2026-02-28 10:53:14.473271162 +0000 UTC m=+0.182775773 container init 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:53:14 np0005634017 podman[386507]: 2026-02-28 10:53:14.482739572 +0000 UTC m=+0.192244163 container start 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:53:14 np0005634017 podman[386507]: 2026-02-28 10:53:14.486810308 +0000 UTC m=+0.196314959 container attach 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:53:14 np0005634017 busy_lewin[386523]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:53:14 np0005634017 busy_lewin[386523]: --> All data devices are unavailable
Feb 28 05:53:14 np0005634017 systemd[1]: libpod-8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a.scope: Deactivated successfully.
Feb 28 05:53:14 np0005634017 podman[386507]: 2026-02-28 10:53:14.993212042 +0000 UTC m=+0.702716653 container died 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:53:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023-merged.mount: Deactivated successfully.
Feb 28 05:53:15 np0005634017 podman[386507]: 2026-02-28 10:53:15.042051782 +0000 UTC m=+0.751556383 container remove 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:53:15 np0005634017 systemd[1]: libpod-conmon-8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a.scope: Deactivated successfully.
Feb 28 05:53:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.550940157 +0000 UTC m=+0.051736704 container create 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 05:53:15 np0005634017 systemd[1]: Started libpod-conmon-62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0.scope.
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.526039438 +0000 UTC m=+0.026836065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:53:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.638525 +0000 UTC m=+0.139321577 container init 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.647138765 +0000 UTC m=+0.147935312 container start 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.651018716 +0000 UTC m=+0.151815263 container attach 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:53:15 np0005634017 dazzling_curran[386633]: 167 167
Feb 28 05:53:15 np0005634017 systemd[1]: libpod-62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0.scope: Deactivated successfully.
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.653326261 +0000 UTC m=+0.154122808 container died 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:53:15 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a6a434642f36f9b64fa643f5b8471722acd448b277ef86cffa5f59bc1a7b7c17-merged.mount: Deactivated successfully.
Feb 28 05:53:15 np0005634017 podman[386617]: 2026-02-28 10:53:15.703257023 +0000 UTC m=+0.204053600 container remove 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:53:15 np0005634017 systemd[1]: libpod-conmon-62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0.scope: Deactivated successfully.
Feb 28 05:53:15 np0005634017 podman[386657]: 2026-02-28 10:53:15.90235863 +0000 UTC m=+0.066937996 container create 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:53:15 np0005634017 systemd[1]: Started libpod-conmon-8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad.scope.
Feb 28 05:53:15 np0005634017 podman[386657]: 2026-02-28 10:53:15.873429246 +0000 UTC m=+0.038008682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:53:15 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:53:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:15 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:16 np0005634017 podman[386657]: 2026-02-28 10:53:16.013655088 +0000 UTC m=+0.178234454 container init 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:53:16 np0005634017 podman[386657]: 2026-02-28 10:53:16.028097518 +0000 UTC m=+0.192676884 container start 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:53:16 np0005634017 podman[386657]: 2026-02-28 10:53:16.031961208 +0000 UTC m=+0.196540584 container attach 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]: {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:    "0": [
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:        {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "devices": [
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "/dev/loop3"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            ],
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_name": "ceph_lv0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_size": "21470642176",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "name": "ceph_lv0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "tags": {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cluster_name": "ceph",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.crush_device_class": "",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.encrypted": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.objectstore": "bluestore",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osd_id": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.type": "block",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.vdo": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.with_tpm": "0"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            },
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "type": "block",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "vg_name": "ceph_vg0"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:        }
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:    ],
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:    "1": [
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:        {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "devices": [
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "/dev/loop4"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            ],
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_name": "ceph_lv1",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_size": "21470642176",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "name": "ceph_lv1",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "tags": {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cluster_name": "ceph",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.crush_device_class": "",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.encrypted": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.objectstore": "bluestore",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osd_id": "1",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.type": "block",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.vdo": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.with_tpm": "0"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            },
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "type": "block",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "vg_name": "ceph_vg1"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:        }
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:    ],
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:    "2": [
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:        {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "devices": [
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "/dev/loop5"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            ],
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_name": "ceph_lv2",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_size": "21470642176",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "name": "ceph_lv2",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "tags": {
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.cluster_name": "ceph",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.crush_device_class": "",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.encrypted": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.objectstore": "bluestore",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osd_id": "2",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.type": "block",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.vdo": "0",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:                "ceph.with_tpm": "0"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            },
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "type": "block",
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:            "vg_name": "ceph_vg2"
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:        }
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]:    ]
Feb 28 05:53:16 np0005634017 confident_hodgkin[386673]: }
Feb 28 05:53:16 np0005634017 systemd[1]: libpod-8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad.scope: Deactivated successfully.
Feb 28 05:53:16 np0005634017 podman[386657]: 2026-02-28 10:53:16.363617408 +0000 UTC m=+0.528196784 container died 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:53:16 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04-merged.mount: Deactivated successfully.
Feb 28 05:53:16 np0005634017 podman[386657]: 2026-02-28 10:53:16.428053082 +0000 UTC m=+0.592632458 container remove 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:53:16 np0005634017 systemd[1]: libpod-conmon-8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad.scope: Deactivated successfully.
Feb 28 05:53:16 np0005634017 podman[386755]: 2026-02-28 10:53:16.940670443 +0000 UTC m=+0.067022909 container create 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:53:16 np0005634017 systemd[1]: Started libpod-conmon-5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a.scope.
Feb 28 05:53:17 np0005634017 podman[386755]: 2026-02-28 10:53:16.913246313 +0000 UTC m=+0.039598829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:53:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:53:17 np0005634017 podman[386755]: 2026-02-28 10:53:17.026566538 +0000 UTC m=+0.152919044 container init 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:53:17 np0005634017 podman[386755]: 2026-02-28 10:53:17.036180082 +0000 UTC m=+0.162532508 container start 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 05:53:17 np0005634017 gallant_maxwell[386772]: 167 167
Feb 28 05:53:17 np0005634017 podman[386755]: 2026-02-28 10:53:17.040904366 +0000 UTC m=+0.167256882 container attach 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 05:53:17 np0005634017 systemd[1]: libpod-5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a.scope: Deactivated successfully.
Feb 28 05:53:17 np0005634017 podman[386755]: 2026-02-28 10:53:17.041604036 +0000 UTC m=+0.167956472 container died 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:53:17 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ca182b08646cfea93be199e60fd2ecfe981d31640e3ec949c030a119d3e0bfd5-merged.mount: Deactivated successfully.
Feb 28 05:53:17 np0005634017 podman[386755]: 2026-02-28 10:53:17.081630755 +0000 UTC m=+0.207983211 container remove 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:53:17 np0005634017 systemd[1]: libpod-conmon-5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a.scope: Deactivated successfully.
Feb 28 05:53:17 np0005634017 podman[386796]: 2026-02-28 10:53:17.263456281 +0000 UTC m=+0.050187050 container create f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:53:17 np0005634017 systemd[1]: Started libpod-conmon-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope.
Feb 28 05:53:17 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:53:17 np0005634017 podman[386796]: 2026-02-28 10:53:17.243516963 +0000 UTC m=+0.030247762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:53:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:17 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:53:17 np0005634017 podman[386796]: 2026-02-28 10:53:17.360600896 +0000 UTC m=+0.147331695 container init f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:53:17 np0005634017 podman[386796]: 2026-02-28 10:53:17.370795986 +0000 UTC m=+0.157526785 container start f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 05:53:17 np0005634017 podman[386796]: 2026-02-28 10:53:17.375243053 +0000 UTC m=+0.161973852 container attach f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:53:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 05:53:17 np0005634017 nova_compute[243452]: 2026-02-28 10:53:17.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:18 np0005634017 lvm[386893]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:53:18 np0005634017 lvm[386890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:53:18 np0005634017 lvm[386893]: VG ceph_vg1 finished
Feb 28 05:53:18 np0005634017 lvm[386890]: VG ceph_vg0 finished
Feb 28 05:53:18 np0005634017 lvm[386895]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:53:18 np0005634017 lvm[386895]: VG ceph_vg2 finished
Feb 28 05:53:18 np0005634017 lvm[386896]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:53:18 np0005634017 lvm[386896]: VG ceph_vg1 finished
Feb 28 05:53:18 np0005634017 lvm[386898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:53:18 np0005634017 lvm[386898]: VG ceph_vg1 finished
Feb 28 05:53:18 np0005634017 cranky_easley[386813]: {}
Feb 28 05:53:18 np0005634017 systemd[1]: libpod-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope: Deactivated successfully.
Feb 28 05:53:18 np0005634017 systemd[1]: libpod-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope: Consumed 1.336s CPU time.
Feb 28 05:53:18 np0005634017 podman[386796]: 2026-02-28 10:53:18.304906065 +0000 UTC m=+1.091636834 container died f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:53:18 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d-merged.mount: Deactivated successfully.
Feb 28 05:53:18 np0005634017 podman[386796]: 2026-02-28 10:53:18.3627096 +0000 UTC m=+1.149440389 container remove f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:53:18 np0005634017 systemd[1]: libpod-conmon-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope: Deactivated successfully.
Feb 28 05:53:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:53:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:53:18 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:53:18 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:53:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:53:18 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:53:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 05:53:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 4.8 MiB/s wr, 11 op/s
Feb 28 05:53:22 np0005634017 nova_compute[243452]: 2026-02-28 10:53:22.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:22 np0005634017 nova_compute[243452]: 2026-02-28 10:53:22.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 4.1 MiB/s wr, 10 op/s
Feb 28 05:53:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 MiB/s wr, 8 op/s
Feb 28 05:53:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:27 np0005634017 nova_compute[243452]: 2026-02-28 10:53:27.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:53:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:53:29
Feb 28 05:53:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:53:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:53:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta']
Feb 28 05:53:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:53:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:53:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:53:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:32 np0005634017 nova_compute[243452]: 2026-02-28 10:53:32.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:36 np0005634017 podman[386939]: 2026-02-28 10:53:36.171944844 +0000 UTC m=+0.100157992 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 05:53:36 np0005634017 podman[386938]: 2026-02-28 10:53:36.1960434 +0000 UTC m=+0.123614109 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:53:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:37 np0005634017 nova_compute[243452]: 2026-02-28 10:53:37.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5158900728975258e-05 of space, bias 1.0, pg target 0.004547670218692577 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018285931247344293 of space, bias 1.0, pg target 0.5485779374203288 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.268684935180083e-07 of space, bias 4.0, pg target 0.00087224219222161 quantized to 16 (current 16)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:53:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:53:42 np0005634017 nova_compute[243452]: 2026-02-28 10:53:42.904 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:53:42 np0005634017 nova_compute[243452]: 2026-02-28 10:53:42.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:42 np0005634017 nova_compute[243452]: 2026-02-28 10:53:42.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:53:42 np0005634017 nova_compute[243452]: 2026-02-28 10:53:42.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:53:42 np0005634017 nova_compute[243452]: 2026-02-28 10:53:42.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:53:42 np0005634017 nova_compute[243452]: 2026-02-28 10:53:42.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:53:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:53:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2965535958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:53:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:53:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2965535958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:53:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:47 np0005634017 nova_compute[243452]: 2026-02-28 10:53:47.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:52 np0005634017 nova_compute[243452]: 2026-02-28 10:53:52.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:54 np0005634017 nova_compute[243452]: 2026-02-28 10:53:54.819 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:55 np0005634017 nova_compute[243452]: 2026-02-28 10:53:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:55 np0005634017 nova_compute[243452]: 2026-02-28 10:53:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:55 np0005634017 nova_compute[243452]: 2026-02-28 10:53:55.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:53:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:53:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:53:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:53:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:53:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:53:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:53:57.898 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:53:57 np0005634017 nova_compute[243452]: 2026-02-28 10:53:57.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:57 np0005634017 nova_compute[243452]: 2026-02-28 10:53:57.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:53:58 np0005634017 nova_compute[243452]: 2026-02-28 10:53:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:53:59 np0005634017 nova_compute[243452]: 2026-02-28 10:53:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:53:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:00 np0005634017 nova_compute[243452]: 2026-02-28 10:54:00.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:00 np0005634017 nova_compute[243452]: 2026-02-28 10:54:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:54:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:54:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:54:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:54:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:54:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:54:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:02 np0005634017 nova_compute[243452]: 2026-02-28 10:54:02.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:02 np0005634017 nova_compute[243452]: 2026-02-28 10:54:02.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:03 np0005634017 nova_compute[243452]: 2026-02-28 10:54:03.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:03 np0005634017 nova_compute[243452]: 2026-02-28 10:54:03.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:54:03 np0005634017 nova_compute[243452]: 2026-02-28 10:54:03.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:54:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.272 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.361 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.361 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.361 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.362 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:54:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:54:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464655689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:54:04 np0005634017 nova_compute[243452]: 2026-02-28 10:54:04.875 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.048 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.050 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.98737189359963GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.050 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.050 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.319 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.319 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.386 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:54:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:54:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003571345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.984 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:54:05 np0005634017 nova_compute[243452]: 2026-02-28 10:54:05.991 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:54:06 np0005634017 nova_compute[243452]: 2026-02-28 10:54:06.026 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:54:06 np0005634017 nova_compute[243452]: 2026-02-28 10:54:06.028 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:54:06 np0005634017 nova_compute[243452]: 2026-02-28 10:54:06.028 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:54:07 np0005634017 podman[387030]: 2026-02-28 10:54:07.126392236 +0000 UTC m=+0.060682048 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:54:07 np0005634017 podman[387029]: 2026-02-28 10:54:07.153043725 +0000 UTC m=+0.092085213 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 05:54:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:07 np0005634017 nova_compute[243452]: 2026-02-28 10:54:07.912 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:07 np0005634017 nova_compute[243452]: 2026-02-28 10:54:07.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:12 np0005634017 nova_compute[243452]: 2026-02-28 10:54:12.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:12 np0005634017 nova_compute[243452]: 2026-02-28 10:54:12.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:13 np0005634017 nova_compute[243452]: 2026-02-28 10:54:13.025 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:54:16 np0005634017 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 45K writes, 17K syncs, 2.69 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2352 writes, 8930 keys, 2352 commit groups, 1.0 writes per commit group, ingest: 9.76 MB, 0.02 MB/s#012Interval WAL: 2353 writes, 958 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:54:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:17 np0005634017 nova_compute[243452]: 2026-02-28 10:54:17.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:54:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:54:19 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.741102674 +0000 UTC m=+0.057609820 container create b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:54:19 np0005634017 systemd[1]: Started libpod-conmon-b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1.scope.
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.711318217 +0000 UTC m=+0.027825413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:54:19 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.828484012 +0000 UTC m=+0.144991158 container init b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.838641051 +0000 UTC m=+0.155148197 container start b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.842704636 +0000 UTC m=+0.159211782 container attach b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:54:19 np0005634017 charming_bouman[387235]: 167 167
Feb 28 05:54:19 np0005634017 systemd[1]: libpod-b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1.scope: Deactivated successfully.
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.845327701 +0000 UTC m=+0.161834837 container died b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:54:19 np0005634017 systemd[1]: var-lib-containers-storage-overlay-293d51b4e8aa71b7b63902f8d278219eb8e47c851260dbee0731f6b86670217f-merged.mount: Deactivated successfully.
Feb 28 05:54:19 np0005634017 podman[387218]: 2026-02-28 10:54:19.885946447 +0000 UTC m=+0.202453553 container remove b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:54:19 np0005634017 systemd[1]: libpod-conmon-b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1.scope: Deactivated successfully.
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.075571945 +0000 UTC m=+0.057702234 container create f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:54:20 np0005634017 systemd[1]: Started libpod-conmon-f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4.scope.
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.055271987 +0000 UTC m=+0.037402246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:54:20 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:54:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:20 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.181790698 +0000 UTC m=+0.163920997 container init f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.188700155 +0000 UTC m=+0.170830414 container start f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.191644068 +0000 UTC m=+0.173774327 container attach f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:54:20 np0005634017 serene_shaw[387278]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:54:20 np0005634017 serene_shaw[387278]: --> All data devices are unavailable
Feb 28 05:54:20 np0005634017 systemd[1]: libpod-f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4.scope: Deactivated successfully.
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.680547194 +0000 UTC m=+0.662677453 container died f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:54:20 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19-merged.mount: Deactivated successfully.
Feb 28 05:54:20 np0005634017 podman[387261]: 2026-02-28 10:54:20.734307214 +0000 UTC m=+0.716437503 container remove f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:54:20 np0005634017 systemd[1]: libpod-conmon-f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4.scope: Deactivated successfully.
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.212639689 +0000 UTC m=+0.030107118 container create 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:54:21 np0005634017 systemd[1]: Started libpod-conmon-8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d.scope.
Feb 28 05:54:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.292520783 +0000 UTC m=+0.109988242 container init 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.198903638 +0000 UTC m=+0.016371087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.30192723 +0000 UTC m=+0.119394659 container start 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.305359628 +0000 UTC m=+0.122827057 container attach 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:54:21 np0005634017 happy_swartz[387389]: 167 167
Feb 28 05:54:21 np0005634017 systemd[1]: libpod-8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d.scope: Deactivated successfully.
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.306737887 +0000 UTC m=+0.124205356 container died 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 05:54:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9ddce51e65ad4867c45a79dc40035f4d7116f7563491c6d2bc3de5cea330eb3a-merged.mount: Deactivated successfully.
Feb 28 05:54:21 np0005634017 podman[387372]: 2026-02-28 10:54:21.345957074 +0000 UTC m=+0.163424503 container remove 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 28 05:54:21 np0005634017 systemd[1]: libpod-conmon-8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d.scope: Deactivated successfully.
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.525859844 +0000 UTC m=+0.056270842 container create aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:54:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:21 np0005634017 systemd[1]: Started libpod-conmon-aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a.scope.
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.503767266 +0000 UTC m=+0.034178264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:54:21 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:54:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:21 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.644010077 +0000 UTC m=+0.174421075 container init aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.65076359 +0000 UTC m=+0.181174568 container start aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.655187466 +0000 UTC m=+0.185598524 container attach aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:54:21 np0005634017 distracted_pike[387428]: {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:    "0": [
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:        {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "devices": [
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "/dev/loop3"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            ],
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_name": "ceph_lv0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_size": "21470642176",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "name": "ceph_lv0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "tags": {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cluster_name": "ceph",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.crush_device_class": "",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.encrypted": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.objectstore": "bluestore",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osd_id": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.type": "block",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.vdo": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.with_tpm": "0"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            },
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "type": "block",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "vg_name": "ceph_vg0"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:        }
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:    ],
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:    "1": [
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:        {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "devices": [
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "/dev/loop4"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            ],
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_name": "ceph_lv1",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_size": "21470642176",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "name": "ceph_lv1",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "tags": {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cluster_name": "ceph",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.crush_device_class": "",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.encrypted": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.objectstore": "bluestore",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osd_id": "1",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.type": "block",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.vdo": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.with_tpm": "0"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            },
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "type": "block",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "vg_name": "ceph_vg1"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:        }
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:    ],
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:    "2": [
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:        {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "devices": [
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "/dev/loop5"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            ],
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_name": "ceph_lv2",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_size": "21470642176",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "name": "ceph_lv2",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "tags": {
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.cluster_name": "ceph",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.crush_device_class": "",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.encrypted": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.objectstore": "bluestore",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osd_id": "2",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.type": "block",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.vdo": "0",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:                "ceph.with_tpm": "0"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            },
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "type": "block",
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:            "vg_name": "ceph_vg2"
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:        }
Feb 28 05:54:21 np0005634017 distracted_pike[387428]:    ]
Feb 28 05:54:21 np0005634017 distracted_pike[387428]: }
Feb 28 05:54:21 np0005634017 systemd[1]: libpod-aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a.scope: Deactivated successfully.
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.950690757 +0000 UTC m=+0.481101755 container died aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:54:21 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d-merged.mount: Deactivated successfully.
Feb 28 05:54:21 np0005634017 podman[387412]: 2026-02-28 10:54:21.996022077 +0000 UTC m=+0.526433035 container remove aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:54:22 np0005634017 systemd[1]: libpod-conmon-aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a.scope: Deactivated successfully.
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.514997999 +0000 UTC m=+0.057511638 container create badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:54:22 np0005634017 systemd[1]: Started libpod-conmon-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope.
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.49043537 +0000 UTC m=+0.032949049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:54:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.61234795 +0000 UTC m=+0.154861619 container init badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.617659761 +0000 UTC m=+0.160173400 container start badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.620601165 +0000 UTC m=+0.163114804 container attach badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:54:22 np0005634017 systemd[1]: libpod-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope: Deactivated successfully.
Feb 28 05:54:22 np0005634017 distracted_leavitt[387528]: 167 167
Feb 28 05:54:22 np0005634017 conmon[387528]: conmon badf7ffe13c9b2e8e0f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope/container/memory.events
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.623477727 +0000 UTC m=+0.165991356 container died badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:54:22 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c3220366f4075fd2beea108d08eedf8cf702a041f1c136951a6b7348d2f1fb95-merged.mount: Deactivated successfully.
Feb 28 05:54:22 np0005634017 podman[387511]: 2026-02-28 10:54:22.659712798 +0000 UTC m=+0.202226437 container remove badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:54:22 np0005634017 systemd[1]: libpod-conmon-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope: Deactivated successfully.
Feb 28 05:54:22 np0005634017 podman[387552]: 2026-02-28 10:54:22.823382647 +0000 UTC m=+0.041811101 container create 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 05:54:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:54:22 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.6 total, 600.0 interval#012Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 48K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2185 writes, 8173 keys, 2185 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s#012Interval WAL: 2185 writes, 915 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:54:22 np0005634017 systemd[1]: Started libpod-conmon-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope.
Feb 28 05:54:22 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:54:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:22 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:54:22 np0005634017 podman[387552]: 2026-02-28 10:54:22.80663823 +0000 UTC m=+0.025066734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:54:22 np0005634017 podman[387552]: 2026-02-28 10:54:22.914307875 +0000 UTC m=+0.132736359 container init 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 05:54:22 np0005634017 nova_compute[243452]: 2026-02-28 10:54:22.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:22 np0005634017 podman[387552]: 2026-02-28 10:54:22.926140872 +0000 UTC m=+0.144569346 container start 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:54:22 np0005634017 podman[387552]: 2026-02-28 10:54:22.932148933 +0000 UTC m=+0.150577467 container attach 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:54:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:23 np0005634017 lvm[387644]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:54:23 np0005634017 lvm[387644]: VG ceph_vg0 finished
Feb 28 05:54:23 np0005634017 lvm[387647]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:54:23 np0005634017 lvm[387647]: VG ceph_vg1 finished
Feb 28 05:54:23 np0005634017 lvm[387649]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:54:23 np0005634017 lvm[387649]: VG ceph_vg2 finished
Feb 28 05:54:23 np0005634017 lvm[387650]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:54:23 np0005634017 lvm[387650]: VG ceph_vg1 finished
Feb 28 05:54:23 np0005634017 dreamy_thompson[387568]: {}
Feb 28 05:54:23 np0005634017 systemd[1]: libpod-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope: Deactivated successfully.
Feb 28 05:54:23 np0005634017 systemd[1]: libpod-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope: Consumed 1.260s CPU time.
Feb 28 05:54:23 np0005634017 podman[387653]: 2026-02-28 10:54:23.845041807 +0000 UTC m=+0.020063752 container died 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:54:23 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7-merged.mount: Deactivated successfully.
Feb 28 05:54:23 np0005634017 podman[387653]: 2026-02-28 10:54:23.887231778 +0000 UTC m=+0.062253723 container remove 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:54:23 np0005634017 systemd[1]: libpod-conmon-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope: Deactivated successfully.
Feb 28 05:54:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:54:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:54:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:54:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:54:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:54:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:54:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:27 np0005634017 nova_compute[243452]: 2026-02-28 10:54:27.921 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:27 np0005634017 nova_compute[243452]: 2026-02-28 10:54:27.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 05:54:28 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1533 writes, 6281 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s#012Interval WAL: 1533 writes, 657 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 05:54:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:54:29
Feb 28 05:54:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:54:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:54:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 28 05:54:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:54:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:54:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:54:31 np0005634017 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 05:54:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:32 np0005634017 nova_compute[243452]: 2026-02-28 10:54:32.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:32 np0005634017 nova_compute[243452]: 2026-02-28 10:54:32.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:37 np0005634017 nova_compute[243452]: 2026-02-28 10:54:37.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:38 np0005634017 podman[387694]: 2026-02-28 10:54:38.137518015 +0000 UTC m=+0.070610071 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:54:38 np0005634017 podman[387693]: 2026-02-28 10:54:38.165710227 +0000 UTC m=+0.099506763 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:54:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5158900728975258e-05 of space, bias 1.0, pg target 0.004547670218692577 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018285931247344293 of space, bias 1.0, pg target 0.5485779374203288 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.268684935180083e-07 of space, bias 4.0, pg target 0.00087224219222161 quantized to 16 (current 16)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:54:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:54:42 np0005634017 nova_compute[243452]: 2026-02-28 10:54:42.927 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:54:42 np0005634017 nova_compute[243452]: 2026-02-28 10:54:42.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:42 np0005634017 nova_compute[243452]: 2026-02-28 10:54:42.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:54:42 np0005634017 nova_compute[243452]: 2026-02-28 10:54:42.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:54:42 np0005634017 nova_compute[243452]: 2026-02-28 10:54:42.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:54:42 np0005634017 nova_compute[243452]: 2026-02-28 10:54:42.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:54:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809831218' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:54:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:54:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809831218' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:54:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:47 np0005634017 nova_compute[243452]: 2026-02-28 10:54:47.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:47 np0005634017 nova_compute[243452]: 2026-02-28 10:54:47.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:52 np0005634017 nova_compute[243452]: 2026-02-28 10:54:52.931 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:53 np0005634017 nova_compute[243452]: 2026-02-28 10:54:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:55 np0005634017 nova_compute[243452]: 2026-02-28 10:54:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:55 np0005634017 nova_compute[243452]: 2026-02-28 10:54:55.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:54:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:57 np0005634017 nova_compute[243452]: 2026-02-28 10:54:57.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:54:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:54:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:54:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:54:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:54:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:54:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:54:57 np0005634017 nova_compute[243452]: 2026-02-28 10:54:57.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:54:58 np0005634017 nova_compute[243452]: 2026-02-28 10:54:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:54:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:54:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:55:00 np0005634017 nova_compute[243452]: 2026-02-28 10:55:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:00 np0005634017 nova_compute[243452]: 2026-02-28 10:55:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:55:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:55:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:55:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:55:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:55:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:55:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 05:55:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb 28 05:55:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb 28 05:55:01 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb 28 05:55:02 np0005634017 nova_compute[243452]: 2026-02-28 10:55:02.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:02 np0005634017 nova_compute[243452]: 2026-02-28 10:55:02.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:55:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.4 MiB/s wr, 10 op/s
Feb 28 05:55:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb 28 05:55:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb 28 05:55:03 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb 28 05:55:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:04 np0005634017 nova_compute[243452]: 2026-02-28 10:55:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:04 np0005634017 nova_compute[243452]: 2026-02-28 10:55:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:55:04 np0005634017 nova_compute[243452]: 2026-02-28 10:55:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:55:04 np0005634017 nova_compute[243452]: 2026-02-28 10:55:04.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:55:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.342 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.343 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:55:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:55:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825881743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:55:06 np0005634017 nova_compute[243452]: 2026-02-28 10:55:06.970 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.137 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.139 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.987370564602315GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.139 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.140 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.214 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.269 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.270 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.290 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.309 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.338 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:55:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 28 05:55:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:55:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244069651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.951 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.959 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.977 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.979 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:55:07 np0005634017 nova_compute[243452]: 2026-02-28 10:55:07.980 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:55:09 np0005634017 podman[387785]: 2026-02-28 10:55:09.189277208 +0000 UTC m=+0.053326669 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 05:55:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:09 np0005634017 podman[387784]: 2026-02-28 10:55:09.322830191 +0000 UTC m=+0.187645205 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:55:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 40 op/s
Feb 28 05:55:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Feb 28 05:55:12 np0005634017 nova_compute[243452]: 2026-02-28 10:55:12.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Feb 28 05:55:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Feb 28 05:55:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 3 op/s
Feb 28 05:55:17 np0005634017 nova_compute[243452]: 2026-02-28 10:55:17.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 B/s wr, 20 op/s
Feb 28 05:55:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Feb 28 05:55:22 np0005634017 nova_compute[243452]: 2026-02-28 10:55:22.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:55:24 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.093252525 +0000 UTC m=+0.058487247 container create c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:55:25 np0005634017 systemd[1]: Started libpod-conmon-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope.
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.068138474 +0000 UTC m=+0.033373276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:55:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.201808899 +0000 UTC m=+0.167043701 container init c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.210448064 +0000 UTC m=+0.175682826 container start c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.215197969 +0000 UTC m=+0.180432801 container attach c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 05:55:25 np0005634017 friendly_hellman[387984]: 167 167
Feb 28 05:55:25 np0005634017 systemd[1]: libpod-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope: Deactivated successfully.
Feb 28 05:55:25 np0005634017 conmon[387984]: conmon c9ae557c8427eb434a95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope/container/memory.events
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.220596101 +0000 UTC m=+0.185830863 container died c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:55:25 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8bbf8f160e9b71d381363a797f37c702638ab133a0e27cff813a112423d7c1ce-merged.mount: Deactivated successfully.
Feb 28 05:55:25 np0005634017 podman[387968]: 2026-02-28 10:55:25.266886082 +0000 UTC m=+0.232120814 container remove c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 05:55:25 np0005634017 systemd[1]: libpod-conmon-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope: Deactivated successfully.
Feb 28 05:55:25 np0005634017 podman[388009]: 2026-02-28 10:55:25.451915883 +0000 UTC m=+0.062209023 container create 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:55:25 np0005634017 systemd[1]: Started libpod-conmon-22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463.scope.
Feb 28 05:55:25 np0005634017 podman[388009]: 2026-02-28 10:55:25.425897786 +0000 UTC m=+0.036190946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:55:25 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:55:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:25 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:25 np0005634017 podman[388009]: 2026-02-28 10:55:25.539872944 +0000 UTC m=+0.150166084 container init 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:55:25 np0005634017 podman[388009]: 2026-02-28 10:55:25.548115717 +0000 UTC m=+0.158408847 container start 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:55:25 np0005634017 podman[388009]: 2026-02-28 10:55:25.551617936 +0000 UTC m=+0.161911066 container attach 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:55:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 05:55:26 np0005634017 keen_khorana[388026]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:55:26 np0005634017 keen_khorana[388026]: --> All data devices are unavailable
Feb 28 05:55:26 np0005634017 systemd[1]: libpod-22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463.scope: Deactivated successfully.
Feb 28 05:55:26 np0005634017 podman[388009]: 2026-02-28 10:55:26.069197325 +0000 UTC m=+0.679490535 container died 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:55:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234-merged.mount: Deactivated successfully.
Feb 28 05:55:26 np0005634017 podman[388009]: 2026-02-28 10:55:26.118653155 +0000 UTC m=+0.728946305 container remove 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:55:26 np0005634017 systemd[1]: libpod-conmon-22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463.scope: Deactivated successfully.
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.655402187 +0000 UTC m=+0.042704311 container create 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:55:26 np0005634017 systemd[1]: Started libpod-conmon-54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97.scope.
Feb 28 05:55:26 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.6311507 +0000 UTC m=+0.018452844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.733122018 +0000 UTC m=+0.120424222 container init 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.741484365 +0000 UTC m=+0.128786489 container start 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.745317833 +0000 UTC m=+0.132620007 container attach 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 05:55:26 np0005634017 distracted_buck[388136]: 167 167
Feb 28 05:55:26 np0005634017 systemd[1]: libpod-54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97.scope: Deactivated successfully.
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.747974179 +0000 UTC m=+0.135276353 container died 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:55:26 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ee98b954bc9473f5054931cdffad1f550672fc10910c3fef57181b024a1d4848-merged.mount: Deactivated successfully.
Feb 28 05:55:26 np0005634017 podman[388120]: 2026-02-28 10:55:26.785131281 +0000 UTC m=+0.172433435 container remove 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:55:26 np0005634017 systemd[1]: libpod-conmon-54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97.scope: Deactivated successfully.
Feb 28 05:55:26 np0005634017 podman[388159]: 2026-02-28 10:55:26.981198214 +0000 UTC m=+0.063921872 container create 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:55:27 np0005634017 systemd[1]: Started libpod-conmon-52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c.scope.
Feb 28 05:55:27 np0005634017 podman[388159]: 2026-02-28 10:55:26.955260419 +0000 UTC m=+0.037984137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:55:27 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:55:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:27 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:27 np0005634017 podman[388159]: 2026-02-28 10:55:27.093771172 +0000 UTC m=+0.176494850 container init 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 05:55:27 np0005634017 podman[388159]: 2026-02-28 10:55:27.104954319 +0000 UTC m=+0.187677957 container start 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 05:55:27 np0005634017 podman[388159]: 2026-02-28 10:55:27.108496949 +0000 UTC m=+0.191220627 container attach 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]: {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:    "0": [
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:        {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "devices": [
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "/dev/loop3"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            ],
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_name": "ceph_lv0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_size": "21470642176",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "name": "ceph_lv0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "tags": {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cluster_name": "ceph",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.crush_device_class": "",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.encrypted": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.objectstore": "bluestore",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osd_id": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.type": "block",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.vdo": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.with_tpm": "0"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            },
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "type": "block",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "vg_name": "ceph_vg0"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:        }
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:    ],
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:    "1": [
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:        {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "devices": [
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "/dev/loop4"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            ],
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_name": "ceph_lv1",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_size": "21470642176",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "name": "ceph_lv1",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "tags": {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cluster_name": "ceph",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.crush_device_class": "",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.encrypted": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.objectstore": "bluestore",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osd_id": "1",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.type": "block",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.vdo": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.with_tpm": "0"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            },
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "type": "block",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "vg_name": "ceph_vg1"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:        }
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:    ],
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:    "2": [
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:        {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "devices": [
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "/dev/loop5"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            ],
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_name": "ceph_lv2",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_size": "21470642176",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "name": "ceph_lv2",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "tags": {
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.cluster_name": "ceph",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.crush_device_class": "",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.encrypted": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.objectstore": "bluestore",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osd_id": "2",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.type": "block",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.vdo": "0",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:                "ceph.with_tpm": "0"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            },
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "type": "block",
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:            "vg_name": "ceph_vg2"
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:        }
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]:    ]
Feb 28 05:55:27 np0005634017 gallant_keldysh[388175]: }
Feb 28 05:55:27 np0005634017 systemd[1]: libpod-52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c.scope: Deactivated successfully.
Feb 28 05:55:27 np0005634017 podman[388184]: 2026-02-28 10:55:27.449692192 +0000 UTC m=+0.041630700 container died 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 05:55:27 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed-merged.mount: Deactivated successfully.
Feb 28 05:55:27 np0005634017 podman[388184]: 2026-02-28 10:55:27.489098518 +0000 UTC m=+0.081036936 container remove 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 05:55:27 np0005634017 systemd[1]: libpod-conmon-52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c.scope: Deactivated successfully.
Feb 28 05:55:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 28 05:55:27 np0005634017 nova_compute[243452]: 2026-02-28 10:55:27.944 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:28.015207818 +0000 UTC m=+0.042262838 container create bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:55:28 np0005634017 systemd[1]: Started libpod-conmon-bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74.scope.
Feb 28 05:55:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:27.992837785 +0000 UTC m=+0.019892825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:28.10072621 +0000 UTC m=+0.127781290 container init bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:28.111704401 +0000 UTC m=+0.138759421 container start bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:28.11554976 +0000 UTC m=+0.142604870 container attach bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:55:28 np0005634017 vibrant_shaw[388277]: 167 167
Feb 28 05:55:28 np0005634017 systemd[1]: libpod-bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74.scope: Deactivated successfully.
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:28.116840136 +0000 UTC m=+0.143895186 container died bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 05:55:28 np0005634017 systemd[1]: var-lib-containers-storage-overlay-6e0820f8f1f9f0cf9ed159786b8c6661599f39480004de9097b9a3348a1700aa-merged.mount: Deactivated successfully.
Feb 28 05:55:28 np0005634017 podman[388261]: 2026-02-28 10:55:28.156327715 +0000 UTC m=+0.183382765 container remove bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:55:28 np0005634017 systemd[1]: libpod-conmon-bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74.scope: Deactivated successfully.
Feb 28 05:55:28 np0005634017 podman[388301]: 2026-02-28 10:55:28.315955016 +0000 UTC m=+0.053775784 container create 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:55:28 np0005634017 systemd[1]: Started libpod-conmon-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope.
Feb 28 05:55:28 np0005634017 podman[388301]: 2026-02-28 10:55:28.293021166 +0000 UTC m=+0.030842004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:55:28 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:55:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:28 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:55:28 np0005634017 podman[388301]: 2026-02-28 10:55:28.426827686 +0000 UTC m=+0.164648464 container init 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 05:55:28 np0005634017 podman[388301]: 2026-02-28 10:55:28.433836794 +0000 UTC m=+0.171657542 container start 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 05:55:28 np0005634017 podman[388301]: 2026-02-28 10:55:28.438156596 +0000 UTC m=+0.175977424 container attach 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:55:29 np0005634017 lvm[388395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:55:29 np0005634017 lvm[388395]: VG ceph_vg0 finished
Feb 28 05:55:29 np0005634017 lvm[388398]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:55:29 np0005634017 lvm[388398]: VG ceph_vg1 finished
Feb 28 05:55:29 np0005634017 lvm[388400]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:55:29 np0005634017 lvm[388400]: VG ceph_vg2 finished
Feb 28 05:55:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:29 np0005634017 nifty_keller[388319]: {}
Feb 28 05:55:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:55:29
Feb 28 05:55:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:55:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:55:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'vms']
Feb 28 05:55:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:55:29 np0005634017 systemd[1]: libpod-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope: Deactivated successfully.
Feb 28 05:55:29 np0005634017 podman[388301]: 2026-02-28 10:55:29.250131072 +0000 UTC m=+0.987951830 container died 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:55:29 np0005634017 systemd[1]: libpod-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope: Consumed 1.183s CPU time.
Feb 28 05:55:29 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625-merged.mount: Deactivated successfully.
Feb 28 05:55:29 np0005634017 podman[388301]: 2026-02-28 10:55:29.295227519 +0000 UTC m=+1.033048277 container remove 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:55:29 np0005634017 systemd[1]: libpod-conmon-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope: Deactivated successfully.
Feb 28 05:55:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:55:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:55:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:55:29 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:55:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 28 05:55:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:55:29.708 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:55:29 np0005634017 nova_compute[243452]: 2026-02-28 10:55:29.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:29 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:55:29.710 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:55:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:55:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:55:30 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:55:30.712 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:55:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:55:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Feb 28 05:55:32 np0005634017 nova_compute[243452]: 2026-02-28 10:55:32.946 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Feb 28 05:55:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Feb 28 05:55:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb 28 05:55:36 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb 28 05:55:36 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb 28 05:55:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:55:37 np0005634017 nova_compute[243452]: 2026-02-28 10:55:37.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 97 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 920 B/s wr, 20 op/s
Feb 28 05:55:40 np0005634017 podman[388444]: 2026-02-28 10:55:40.130193364 +0000 UTC m=+0.063172170 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 05:55:40 np0005634017 podman[388443]: 2026-02-28 10:55:40.175751414 +0000 UTC m=+0.108730370 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5174751830704939e-05 of space, bias 1.0, pg target 0.0045524255492114815 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0015822959425952849 of space, bias 1.0, pg target 0.47468878277858545 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:55:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 05:55:42 np0005634017 nova_compute[243452]: 2026-02-28 10:55:42.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 05:55:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Feb 28 05:55:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Feb 28 05:55:44 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Feb 28 05:55:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 28 05:55:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:55:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3913974501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:55:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:55:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3913974501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:55:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 05:55:47 np0005634017 nova_compute[243452]: 2026-02-28 10:55:47.953 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:55:47 np0005634017 nova_compute[243452]: 2026-02-28 10:55:47.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:55:47 np0005634017 nova_compute[243452]: 2026-02-28 10:55:47.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:55:47 np0005634017 nova_compute[243452]: 2026-02-28 10:55:47.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:55:47 np0005634017 nova_compute[243452]: 2026-02-28 10:55:47.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:47 np0005634017 nova_compute[243452]: 2026-02-28 10:55:47.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:55:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 511 B/s wr, 5 op/s
Feb 28 05:55:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:55:52 np0005634017 nova_compute[243452]: 2026-02-28 10:55:52.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:55:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:55:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:55:55 np0005634017 nova_compute[243452]: 2026-02-28 10:55:55.982 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:57 np0005634017 nova_compute[243452]: 2026-02-28 10:55:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:57 np0005634017 nova_compute[243452]: 2026-02-28 10:55:57.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:55:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:55:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:55:57.898 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:55:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:55:57.899 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:55:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:55:57.900 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:55:57 np0005634017 nova_compute[243452]: 2026-02-28 10:55:57.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:55:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:55:59 np0005634017 nova_compute[243452]: 2026-02-28 10:55:59.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:59 np0005634017 nova_compute[243452]: 2026-02-28 10:55:59.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:55:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:00 np0005634017 nova_compute[243452]: 2026-02-28 10:56:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:56:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:56:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:56:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:56:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:56:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:56:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:02 np0005634017 nova_compute[243452]: 2026-02-28 10:56:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:02 np0005634017 nova_compute[243452]: 2026-02-28 10:56:02.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:02 np0005634017 nova_compute[243452]: 2026-02-28 10:56:02.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:56:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:04 np0005634017 nova_compute[243452]: 2026-02-28 10:56:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:04 np0005634017 nova_compute[243452]: 2026-02-28 10:56:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:56:04 np0005634017 nova_compute[243452]: 2026-02-28 10:56:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:56:04 np0005634017 nova_compute[243452]: 2026-02-28 10:56:04.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:56:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:07 np0005634017 nova_compute[243452]: 2026-02-28 10:56:07.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:56:07 np0005634017 nova_compute[243452]: 2026-02-28 10:56:07.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:07 np0005634017 nova_compute[243452]: 2026-02-28 10:56:07.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:56:07 np0005634017 nova_compute[243452]: 2026-02-28 10:56:07.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:56:07 np0005634017 nova_compute[243452]: 2026-02-28 10:56:07.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:56:07 np0005634017 nova_compute[243452]: 2026-02-28 10:56:07.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.352 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:56:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:56:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842658606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:56:08 np0005634017 nova_compute[243452]: 2026-02-28 10:56:08.890 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.066 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.067 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.068 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.068 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.163 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:56:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:56:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347580519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.736 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.742 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.781 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.783 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:56:09 np0005634017 nova_compute[243452]: 2026-02-28 10:56:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:56:11 np0005634017 podman[388535]: 2026-02-28 10:56:11.117907993 +0000 UTC m=+0.050145282 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 05:56:11 np0005634017 podman[388534]: 2026-02-28 10:56:11.179792035 +0000 UTC m=+0.108759231 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:56:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:12 np0005634017 nova_compute[243452]: 2026-02-28 10:56:12.779 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:12 np0005634017 nova_compute[243452]: 2026-02-28 10:56:12.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:17 np0005634017 nova_compute[243452]: 2026-02-28 10:56:17.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:56:17 np0005634017 nova_compute[243452]: 2026-02-28 10:56:17.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:17 np0005634017 nova_compute[243452]: 2026-02-28 10:56:17.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:56:17 np0005634017 nova_compute[243452]: 2026-02-28 10:56:17.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:56:17 np0005634017 nova_compute[243452]: 2026-02-28 10:56:17.998 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:56:18 np0005634017 nova_compute[243452]: 2026-02-28 10:56:18.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.930132) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276179930197, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2076, "num_deletes": 253, "total_data_size": 3452590, "memory_usage": 3505664, "flush_reason": "Manual Compaction"}
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276179947788, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3394975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56680, "largest_seqno": 58755, "table_properties": {"data_size": 3385415, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19046, "raw_average_key_size": 20, "raw_value_size": 3366463, "raw_average_value_size": 3577, "num_data_blocks": 270, "num_entries": 941, "num_filter_entries": 941, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275959, "oldest_key_time": 1772275959, "file_creation_time": 1772276179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 17713 microseconds, and 9640 cpu microseconds.
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.947852) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3394975 bytes OK
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.947879) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.950547) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.950561) EVENT_LOG_v1 {"time_micros": 1772276179950556, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.950584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3443925, prev total WAL file size 3443925, number of live WAL files 2.
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.951274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3315KB)], [134(9594KB)]
Feb 28 05:56:19 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276179951333, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13219260, "oldest_snapshot_seqno": -1}
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7948 keys, 11476341 bytes, temperature: kUnknown
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276180010856, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11476341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11422758, "index_size": 32606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19909, "raw_key_size": 206913, "raw_average_key_size": 26, "raw_value_size": 11280609, "raw_average_value_size": 1419, "num_data_blocks": 1275, "num_entries": 7948, "num_filter_entries": 7948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.011143) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11476341 bytes
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.012659) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.9 rd, 192.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.4 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8470, records dropped: 522 output_compression: NoCompression
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.012675) EVENT_LOG_v1 {"time_micros": 1772276180012667, "job": 82, "event": "compaction_finished", "compaction_time_micros": 59578, "compaction_time_cpu_micros": 36964, "output_level": 6, "num_output_files": 1, "total_output_size": 11476341, "num_input_records": 8470, "num_output_records": 7948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276180013141, "job": 82, "event": "table_file_deletion", "file_number": 136}
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276180014331, "job": 82, "event": "table_file_deletion", "file_number": 134}
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.951183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:56:20 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:56:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:23 np0005634017 nova_compute[243452]: 2026-02-28 10:56:22.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:28 np0005634017 nova_compute[243452]: 2026-02-28 10:56:28.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:56:29
Feb 28 05:56:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:56:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:56:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', 'volumes', 'images', '.mgr', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 28 05:56:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:56:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.564981249 +0000 UTC m=+0.054570776 container create 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:56:30 np0005634017 systemd[1]: Started libpod-conmon-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope.
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.54487869 +0000 UTC m=+0.034468227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:56:30 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.663546841 +0000 UTC m=+0.153136408 container init 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.673508823 +0000 UTC m=+0.163098370 container start 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.6780031 +0000 UTC m=+0.167592657 container attach 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:56:30 np0005634017 youthful_turing[388738]: 167 167
Feb 28 05:56:30 np0005634017 systemd[1]: libpod-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope: Deactivated successfully.
Feb 28 05:56:30 np0005634017 conmon[388738]: conmon 4a1553391339b31f1d16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope/container/memory.events
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.681142999 +0000 UTC m=+0.170732546 container died 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 05:56:30 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a446289184b907c44eeadd2549a8672c7318bd3d86cd11bfea418c60b0859ecd-merged.mount: Deactivated successfully.
Feb 28 05:56:30 np0005634017 podman[388721]: 2026-02-28 10:56:30.737004041 +0000 UTC m=+0.226593568 container remove 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:56:30 np0005634017 systemd[1]: libpod-conmon-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope: Deactivated successfully.
Feb 28 05:56:30 np0005634017 podman[388762]: 2026-02-28 10:56:30.942352957 +0000 UTC m=+0.063571901 container create 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:56:30 np0005634017 systemd[1]: Started libpod-conmon-904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c.scope.
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:56:30 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:56:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:56:31 np0005634017 podman[388762]: 2026-02-28 10:56:30.922047432 +0000 UTC m=+0.043266286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:56:31 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:56:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:31 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:31 np0005634017 podman[388762]: 2026-02-28 10:56:31.064040123 +0000 UTC m=+0.185258947 container init 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:56:31 np0005634017 podman[388762]: 2026-02-28 10:56:31.072296127 +0000 UTC m=+0.193514971 container start 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:56:31 np0005634017 podman[388762]: 2026-02-28 10:56:31.0769923 +0000 UTC m=+0.198211144 container attach 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 05:56:31 np0005634017 cool_satoshi[388780]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:56:31 np0005634017 cool_satoshi[388780]: --> All data devices are unavailable
Feb 28 05:56:31 np0005634017 systemd[1]: libpod-904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c.scope: Deactivated successfully.
Feb 28 05:56:31 np0005634017 podman[388762]: 2026-02-28 10:56:31.55929611 +0000 UTC m=+0.680514974 container died 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:56:31 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b-merged.mount: Deactivated successfully.
Feb 28 05:56:31 np0005634017 podman[388762]: 2026-02-28 10:56:31.614430251 +0000 UTC m=+0.735649055 container remove 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:56:31 np0005634017 systemd[1]: libpod-conmon-904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c.scope: Deactivated successfully.
Feb 28 05:56:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.170378226 +0000 UTC m=+0.062662226 container create a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:56:32 np0005634017 systemd[1]: Started libpod-conmon-a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4.scope.
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.140587452 +0000 UTC m=+0.032871262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:56:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.256552937 +0000 UTC m=+0.148836707 container init a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.264885553 +0000 UTC m=+0.157169293 container start a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.268434283 +0000 UTC m=+0.160718023 container attach a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:56:32 np0005634017 fervent_saha[388892]: 167 167
Feb 28 05:56:32 np0005634017 systemd[1]: libpod-a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4.scope: Deactivated successfully.
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.271820289 +0000 UTC m=+0.164104029 container died a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:56:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-899ecc0b39653fd37c7955daf1c7def7cd475c2873ae07d1f3769e01be6434c5-merged.mount: Deactivated successfully.
Feb 28 05:56:32 np0005634017 podman[388876]: 2026-02-28 10:56:32.317330858 +0000 UTC m=+0.209614608 container remove a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:56:32 np0005634017 systemd[1]: libpod-conmon-a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4.scope: Deactivated successfully.
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.502408039 +0000 UTC m=+0.060578626 container create 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 05:56:32 np0005634017 systemd[1]: Started libpod-conmon-7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44.scope.
Feb 28 05:56:32 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:56:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:32 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.482384212 +0000 UTC m=+0.040554839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.587998704 +0000 UTC m=+0.146169281 container init 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.595115975 +0000 UTC m=+0.153286552 container start 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.600025964 +0000 UTC m=+0.158196541 container attach 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]: {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:    "0": [
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:        {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "devices": [
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "/dev/loop3"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            ],
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_name": "ceph_lv0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_size": "21470642176",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "name": "ceph_lv0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "tags": {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cluster_name": "ceph",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.crush_device_class": "",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.encrypted": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.objectstore": "bluestore",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osd_id": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.type": "block",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.vdo": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.with_tpm": "0"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            },
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "type": "block",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "vg_name": "ceph_vg0"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:        }
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:    ],
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:    "1": [
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:        {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "devices": [
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "/dev/loop4"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            ],
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_name": "ceph_lv1",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_size": "21470642176",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "name": "ceph_lv1",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "tags": {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cluster_name": "ceph",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.crush_device_class": "",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.encrypted": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.objectstore": "bluestore",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osd_id": "1",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.type": "block",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.vdo": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.with_tpm": "0"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            },
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "type": "block",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "vg_name": "ceph_vg1"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:        }
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:    ],
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:    "2": [
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:        {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "devices": [
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "/dev/loop5"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            ],
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_name": "ceph_lv2",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_size": "21470642176",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "name": "ceph_lv2",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "tags": {
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.cluster_name": "ceph",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.crush_device_class": "",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.encrypted": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.objectstore": "bluestore",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osd_id": "2",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.type": "block",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.vdo": "0",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:                "ceph.with_tpm": "0"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            },
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "type": "block",
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:            "vg_name": "ceph_vg2"
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:        }
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]:    ]
Feb 28 05:56:32 np0005634017 pedantic_gates[388934]: }
Feb 28 05:56:32 np0005634017 systemd[1]: libpod-7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44.scope: Deactivated successfully.
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.866528942 +0000 UTC m=+0.424699519 container died 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:56:32 np0005634017 systemd[1]: var-lib-containers-storage-overlay-663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7-merged.mount: Deactivated successfully.
Feb 28 05:56:32 np0005634017 podman[388916]: 2026-02-28 10:56:32.914951822 +0000 UTC m=+0.473122399 container remove 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 05:56:32 np0005634017 systemd[1]: libpod-conmon-7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44.scope: Deactivated successfully.
Feb 28 05:56:33 np0005634017 nova_compute[243452]: 2026-02-28 10:56:33.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.342319756 +0000 UTC m=+0.054611288 container create 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:56:33 np0005634017 systemd[1]: Started libpod-conmon-84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d.scope.
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.317156133 +0000 UTC m=+0.029447715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:56:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.434870317 +0000 UTC m=+0.147161849 container init 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.443949024 +0000 UTC m=+0.156240556 container start 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.449439599 +0000 UTC m=+0.161731181 container attach 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:56:33 np0005634017 nice_faraday[389036]: 167 167
Feb 28 05:56:33 np0005634017 systemd[1]: libpod-84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d.scope: Deactivated successfully.
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.450781127 +0000 UTC m=+0.163072659 container died 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 05:56:33 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b464e3b8cfa4b93c3a384f0617ab2ade8818fcc54a140571a88ef4e22f59063b-merged.mount: Deactivated successfully.
Feb 28 05:56:33 np0005634017 podman[389020]: 2026-02-28 10:56:33.49888543 +0000 UTC m=+0.211176932 container remove 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 05:56:33 np0005634017 systemd[1]: libpod-conmon-84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d.scope: Deactivated successfully.
Feb 28 05:56:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:33 np0005634017 podman[389060]: 2026-02-28 10:56:33.65637725 +0000 UTC m=+0.047419934 container create 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 05:56:33 np0005634017 systemd[1]: Started libpod-conmon-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope.
Feb 28 05:56:33 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:56:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:33 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:56:33 np0005634017 podman[389060]: 2026-02-28 10:56:33.639130312 +0000 UTC m=+0.030173016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:56:33 np0005634017 podman[389060]: 2026-02-28 10:56:33.751163425 +0000 UTC m=+0.142206129 container init 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 05:56:33 np0005634017 podman[389060]: 2026-02-28 10:56:33.765453819 +0000 UTC m=+0.156496513 container start 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 05:56:33 np0005634017 podman[389060]: 2026-02-28 10:56:33.772733205 +0000 UTC m=+0.163775879 container attach 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 05:56:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:34 np0005634017 lvm[389153]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:56:34 np0005634017 lvm[389153]: VG ceph_vg0 finished
Feb 28 05:56:34 np0005634017 lvm[389156]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:56:34 np0005634017 lvm[389156]: VG ceph_vg1 finished
Feb 28 05:56:34 np0005634017 lvm[389158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:56:34 np0005634017 lvm[389158]: VG ceph_vg2 finished
Feb 28 05:56:34 np0005634017 relaxed_einstein[389076]: {}
Feb 28 05:56:34 np0005634017 systemd[1]: libpod-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope: Deactivated successfully.
Feb 28 05:56:34 np0005634017 systemd[1]: libpod-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope: Consumed 1.190s CPU time.
Feb 28 05:56:34 np0005634017 podman[389060]: 2026-02-28 10:56:34.61892015 +0000 UTC m=+1.009962834 container died 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:56:34 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc-merged.mount: Deactivated successfully.
Feb 28 05:56:34 np0005634017 podman[389060]: 2026-02-28 10:56:34.672333933 +0000 UTC m=+1.063376657 container remove 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 05:56:34 np0005634017 systemd[1]: libpod-conmon-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope: Deactivated successfully.
Feb 28 05:56:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:56:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:56:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:56:34 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:56:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:56:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:56:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:38 np0005634017 nova_compute[243452]: 2026-02-28 10:56:38.004 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:56:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:42 np0005634017 podman[389201]: 2026-02-28 10:56:42.14590092 +0000 UTC m=+0.074214163 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:56:42 np0005634017 podman[389200]: 2026-02-28 10:56:42.18051049 +0000 UTC m=+0.113273009 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, container_name=ovn_controller)
Feb 28 05:56:43 np0005634017 nova_compute[243452]: 2026-02-28 10:56:43.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:56:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:56:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117919498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:56:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:56:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117919498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:56:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:48 np0005634017 nova_compute[243452]: 2026-02-28 10:56:48.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:56:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:53 np0005634017 nova_compute[243452]: 2026-02-28 10:56:53.013 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:56:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:56 np0005634017 nova_compute[243452]: 2026-02-28 10:56:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:56:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:56:57.900 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:56:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:56:57.901 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:56:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:56:57.901 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:56:58 np0005634017 nova_compute[243452]: 2026-02-28 10:56:58.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:56:58 np0005634017 nova_compute[243452]: 2026-02-28 10:56:58.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:58 np0005634017 nova_compute[243452]: 2026-02-28 10:56:58.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:56:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:56:59 np0005634017 nova_compute[243452]: 2026-02-28 10:56:59.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:59 np0005634017 nova_compute[243452]: 2026-02-28 10:56:59.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:56:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:00 np0005634017 nova_compute[243452]: 2026-02-28 10:57:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:57:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:57:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:57:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:57:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:57:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:57:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:02 np0005634017 nova_compute[243452]: 2026-02-28 10:57:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:03 np0005634017 nova_compute[243452]: 2026-02-28 10:57:03.018 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:04 np0005634017 nova_compute[243452]: 2026-02-28 10:57:04.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:05 np0005634017 nova_compute[243452]: 2026-02-28 10:57:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:05 np0005634017 nova_compute[243452]: 2026-02-28 10:57:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:57:05 np0005634017 nova_compute[243452]: 2026-02-28 10:57:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:57:05 np0005634017 nova_compute[243452]: 2026-02-28 10:57:05.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:57:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.020 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.352 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:57:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:57:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969889939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:57:08 np0005634017 nova_compute[243452]: 2026-02-28 10:57:08.928 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.134 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.136 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.136 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.137 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.221 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:57:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.248 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:57:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:57:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209798470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.817 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.825 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.844 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.845 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:57:09 np0005634017 nova_compute[243452]: 2026-02-28 10:57:09.845 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:57:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:13 np0005634017 podman[389290]: 2026-02-28 10:57:13.150054323 +0000 UTC m=+0.076498477 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:57:13 np0005634017 podman[389289]: 2026-02-28 10:57:13.170664637 +0000 UTC m=+0.103078630 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:57:13 np0005634017 nova_compute[243452]: 2026-02-28 10:57:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:18 np0005634017 nova_compute[243452]: 2026-02-28 10:57:18.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:23 np0005634017 nova_compute[243452]: 2026-02-28 10:57:23.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.231404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244231469, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 250, "total_data_size": 975700, "memory_usage": 988336, "flush_reason": "Manual Compaction"}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244237922, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 620764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58756, "largest_seqno": 59505, "table_properties": {"data_size": 617488, "index_size": 1119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8667, "raw_average_key_size": 20, "raw_value_size": 610599, "raw_average_value_size": 1453, "num_data_blocks": 50, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276180, "oldest_key_time": 1772276180, "file_creation_time": 1772276244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 6582 microseconds, and 3414 cpu microseconds.
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.237987) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 620764 bytes OK
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.238013) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239503) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239524) EVENT_LOG_v1 {"time_micros": 1772276244239517, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239552) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 971880, prev total WAL file size 971880, number of live WAL files 2.
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.240038) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323532' seq:72057594037927935, type:22 .. '6D6772737461740032353033' seq:0, type:0; will stop at (end)
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(606KB)], [137(10MB)]
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244240119, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12097105, "oldest_snapshot_seqno": -1}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7882 keys, 9093214 bytes, temperature: kUnknown
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244295266, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9093214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9044108, "index_size": 28321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 205707, "raw_average_key_size": 26, "raw_value_size": 8906972, "raw_average_value_size": 1130, "num_data_blocks": 1098, "num_entries": 7882, "num_filter_entries": 7882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.295531) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9093214 bytes
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.296888) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.0 rd, 164.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(34.1) write-amplify(14.6) OK, records in: 8368, records dropped: 486 output_compression: NoCompression
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.296914) EVENT_LOG_v1 {"time_micros": 1772276244296901, "job": 84, "event": "compaction_finished", "compaction_time_micros": 55227, "compaction_time_cpu_micros": 33162, "output_level": 6, "num_output_files": 1, "total_output_size": 9093214, "num_input_records": 8368, "num_output_records": 7882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244297178, "job": 84, "event": "table_file_deletion", "file_number": 139}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244298630, "job": 84, "event": "table_file_deletion", "file_number": 137}
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:57:24 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:57:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:28 np0005634017 nova_compute[243452]: 2026-02-28 10:57:28.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:57:29
Feb 28 05:57:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:57:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:57:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', '.mgr', 'images', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 28 05:57:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:57:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:57:30 np0005634017 systemd[1]: Starting dnf makecache...
Feb 28 05:57:30 np0005634017 systemd-logind[815]: New session 51 of user zuul.
Feb 28 05:57:30 np0005634017 systemd[1]: Started Session 51 of User zuul.
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:57:30 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:57:31 np0005634017 dnf[389337]: Metadata cache refreshed recently.
Feb 28 05:57:31 np0005634017 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 28 05:57:31 np0005634017 systemd[1]: Finished dnf makecache.
Feb 28 05:57:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:57:33 np0005634017 nova_compute[243452]: 2026-02-28 10:57:33.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:57:33.443 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:57:33 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:57:33.444 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:57:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:57:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:57:35 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.074602823 +0000 UTC m=+0.083777124 container create 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:57:36 np0005634017 systemd[1]: Started libpod-conmon-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope.
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.043570854 +0000 UTC m=+0.052745205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:57:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.173521754 +0000 UTC m=+0.182696065 container init 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.183455165 +0000 UTC m=+0.192629476 container start 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.187563172 +0000 UTC m=+0.196737473 container attach 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 05:57:36 np0005634017 confident_wilbur[389730]: 167 167
Feb 28 05:57:36 np0005634017 systemd[1]: libpod-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope: Deactivated successfully.
Feb 28 05:57:36 np0005634017 conmon[389730]: conmon 2a3198a839c22a6a31b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope/container/memory.events
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.19278464 +0000 UTC m=+0.201958941 container died 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 05:57:36 np0005634017 systemd[1]: var-lib-containers-storage-overlay-69c8cd28fa32f3d27fc75b45c98577cec41f31d80d8ff511e3d26d91cfde3190-merged.mount: Deactivated successfully.
Feb 28 05:57:36 np0005634017 podman[389714]: 2026-02-28 10:57:36.244102863 +0000 UTC m=+0.253277174 container remove 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:57:36 np0005634017 systemd[1]: libpod-conmon-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope: Deactivated successfully.
Feb 28 05:57:36 np0005634017 nova_compute[243452]: 2026-02-28 10:57:36.421 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:36 np0005634017 nova_compute[243452]: 2026-02-28 10:57:36.423 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 05:57:36 np0005634017 podman[389754]: 2026-02-28 10:57:36.436945504 +0000 UTC m=+0.054246917 container create 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:57:36 np0005634017 systemd[1]: Started libpod-conmon-611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e.scope.
Feb 28 05:57:36 np0005634017 podman[389754]: 2026-02-28 10:57:36.415732904 +0000 UTC m=+0.033034397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:57:36 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:57:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:36 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:36 np0005634017 podman[389754]: 2026-02-28 10:57:36.536734731 +0000 UTC m=+0.154036164 container init 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 05:57:36 np0005634017 podman[389754]: 2026-02-28 10:57:36.546993871 +0000 UTC m=+0.164295304 container start 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:57:36 np0005634017 podman[389754]: 2026-02-28 10:57:36.551087397 +0000 UTC m=+0.168388830 container attach 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:57:36 np0005634017 nova_compute[243452]: 2026-02-28 10:57:36.555 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 05:57:37 np0005634017 nifty_gagarin[389771]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:57:37 np0005634017 nifty_gagarin[389771]: --> All data devices are unavailable
Feb 28 05:57:37 np0005634017 systemd[1]: libpod-611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e.scope: Deactivated successfully.
Feb 28 05:57:37 np0005634017 podman[389754]: 2026-02-28 10:57:37.101361382 +0000 UTC m=+0.718662825 container died 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 05:57:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec-merged.mount: Deactivated successfully.
Feb 28 05:57:37 np0005634017 podman[389754]: 2026-02-28 10:57:37.146329995 +0000 UTC m=+0.763631398 container remove 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:57:37 np0005634017 systemd[1]: libpod-conmon-611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e.scope: Deactivated successfully.
Feb 28 05:57:37 np0005634017 systemd[1]: session-51.scope: Deactivated successfully.
Feb 28 05:57:37 np0005634017 systemd-logind[815]: Session 51 logged out. Waiting for processes to exit.
Feb 28 05:57:37 np0005634017 systemd-logind[815]: Removed session 51.
Feb 28 05:57:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.675364987 +0000 UTC m=+0.056126431 container create b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 05:57:37 np0005634017 systemd[1]: Started libpod-conmon-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope.
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.649280628 +0000 UTC m=+0.030042122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:57:37 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.783843939 +0000 UTC m=+0.164605393 container init b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.792847964 +0000 UTC m=+0.173609408 container start b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.796831267 +0000 UTC m=+0.177592721 container attach b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:57:37 np0005634017 sleepy_nash[389903]: 167 167
Feb 28 05:57:37 np0005634017 systemd[1]: libpod-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope: Deactivated successfully.
Feb 28 05:57:37 np0005634017 conmon[389903]: conmon b835a65abf4a923e35e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope/container/memory.events
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.800773209 +0000 UTC m=+0.181534643 container died b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:57:37 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dafef8885ab8f7e94fa74354201abb948bf3f00c32cae2c7372f7e528a4502de-merged.mount: Deactivated successfully.
Feb 28 05:57:37 np0005634017 podman[389887]: 2026-02-28 10:57:37.849337794 +0000 UTC m=+0.230099208 container remove b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 05:57:37 np0005634017 systemd[1]: libpod-conmon-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope: Deactivated successfully.
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.039682115 +0000 UTC m=+0.055984757 container create d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 05:57:38 np0005634017 nova_compute[243452]: 2026-02-28 10:57:38.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:38 np0005634017 systemd[1]: Started libpod-conmon-d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d.scope.
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.013452802 +0000 UTC m=+0.029755534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:57:38 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:57:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:38 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.13485982 +0000 UTC m=+0.151162502 container init d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.146419417 +0000 UTC m=+0.162722089 container start d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.151232234 +0000 UTC m=+0.167534876 container attach d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]: {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:    "0": [
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:        {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "devices": [
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "/dev/loop3"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            ],
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_name": "ceph_lv0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_size": "21470642176",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "name": "ceph_lv0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "tags": {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cluster_name": "ceph",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.crush_device_class": "",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.encrypted": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.objectstore": "bluestore",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osd_id": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.type": "block",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.vdo": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.with_tpm": "0"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            },
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "type": "block",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "vg_name": "ceph_vg0"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:        }
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:    ],
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:    "1": [
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:        {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "devices": [
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "/dev/loop4"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            ],
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_name": "ceph_lv1",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_size": "21470642176",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "name": "ceph_lv1",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "tags": {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cluster_name": "ceph",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.crush_device_class": "",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.encrypted": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.objectstore": "bluestore",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osd_id": "1",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.type": "block",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.vdo": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.with_tpm": "0"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            },
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "type": "block",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "vg_name": "ceph_vg1"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:        }
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:    ],
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:    "2": [
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:        {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "devices": [
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "/dev/loop5"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            ],
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_name": "ceph_lv2",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_size": "21470642176",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "name": "ceph_lv2",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "tags": {
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.cluster_name": "ceph",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.crush_device_class": "",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.encrypted": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.objectstore": "bluestore",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osd_id": "2",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.type": "block",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.vdo": "0",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:                "ceph.with_tpm": "0"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            },
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "type": "block",
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:            "vg_name": "ceph_vg2"
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:        }
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]:    ]
Feb 28 05:57:38 np0005634017 compassionate_taussig[389946]: }
Feb 28 05:57:38 np0005634017 systemd[1]: libpod-d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d.scope: Deactivated successfully.
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.526870502 +0000 UTC m=+0.543173184 container died d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:57:38 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c-merged.mount: Deactivated successfully.
Feb 28 05:57:38 np0005634017 podman[389929]: 2026-02-28 10:57:38.594017774 +0000 UTC m=+0.610320436 container remove d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:57:38 np0005634017 systemd[1]: libpod-conmon-d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d.scope: Deactivated successfully.
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.067213735 +0000 UTC m=+0.053779085 container create 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:57:39 np0005634017 systemd[1]: Started libpod-conmon-1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39.scope.
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.041835027 +0000 UTC m=+0.028400416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:57:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.154416505 +0000 UTC m=+0.140981884 container init 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.161496566 +0000 UTC m=+0.148061905 container start 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 05:57:39 np0005634017 relaxed_bouman[390048]: 167 167
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.165265402 +0000 UTC m=+0.151830801 container attach 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:57:39 np0005634017 systemd[1]: libpod-1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39.scope: Deactivated successfully.
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.16660013 +0000 UTC m=+0.153165469 container died 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:57:39 np0005634017 systemd[1]: var-lib-containers-storage-overlay-aeb1cb3e75daac96fc9658a98a45def6c202cbc89700f4d024e019053e299215-merged.mount: Deactivated successfully.
Feb 28 05:57:39 np0005634017 podman[390031]: 2026-02-28 10:57:39.208211789 +0000 UTC m=+0.194777138 container remove 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:57:39 np0005634017 systemd[1]: libpod-conmon-1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39.scope: Deactivated successfully.
Feb 28 05:57:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:39 np0005634017 podman[390073]: 2026-02-28 10:57:39.393421844 +0000 UTC m=+0.059922488 container create 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:57:39 np0005634017 systemd[1]: Started libpod-conmon-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope.
Feb 28 05:57:39 np0005634017 podman[390073]: 2026-02-28 10:57:39.368878169 +0000 UTC m=+0.035378873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:57:39 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:57:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:39 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:57:39 np0005634017 podman[390073]: 2026-02-28 10:57:39.508476632 +0000 UTC m=+0.174977327 container init 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:57:39 np0005634017 podman[390073]: 2026-02-28 10:57:39.518756224 +0000 UTC m=+0.185256868 container start 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:57:39 np0005634017 podman[390073]: 2026-02-28 10:57:39.522184451 +0000 UTC m=+0.188685095 container attach 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 05:57:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:40 np0005634017 lvm[390166]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:57:40 np0005634017 lvm[390166]: VG ceph_vg0 finished
Feb 28 05:57:40 np0005634017 lvm[390169]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:57:40 np0005634017 lvm[390169]: VG ceph_vg1 finished
Feb 28 05:57:40 np0005634017 lvm[390171]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:57:40 np0005634017 lvm[390171]: VG ceph_vg2 finished
Feb 28 05:57:40 np0005634017 boring_cohen[390090]: {}
Feb 28 05:57:40 np0005634017 systemd[1]: libpod-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope: Deactivated successfully.
Feb 28 05:57:40 np0005634017 systemd[1]: libpod-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope: Consumed 1.279s CPU time.
Feb 28 05:57:40 np0005634017 podman[390073]: 2026-02-28 10:57:40.385624374 +0000 UTC m=+1.052124988 container died 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:57:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16-merged.mount: Deactivated successfully.
Feb 28 05:57:40 np0005634017 podman[390073]: 2026-02-28 10:57:40.436846705 +0000 UTC m=+1.103347359 container remove 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 05:57:40 np0005634017 systemd[1]: libpod-conmon-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope: Deactivated successfully.
Feb 28 05:57:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:57:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:57:40 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:57:40 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:57:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:57:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:57:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:43 np0005634017 nova_compute[243452]: 2026-02-28 10:57:43.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:43 np0005634017 nova_compute[243452]: 2026-02-28 10:57:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:43 np0005634017 nova_compute[243452]: 2026-02-28 10:57:43.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 05:57:43 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:57:43.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:57:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:44 np0005634017 podman[390213]: 2026-02-28 10:57:44.122829234 +0000 UTC m=+0.049718349 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 05:57:44 np0005634017 podman[390212]: 2026-02-28 10:57:44.167933412 +0000 UTC m=+0.092669136 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:57:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:57:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1548389974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:57:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:57:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1548389974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:57:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:48 np0005634017 nova_compute[243452]: 2026-02-28 10:57:48.069 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:53 np0005634017 nova_compute[243452]: 2026-02-28 10:57:53.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:56 np0005634017 nova_compute[243452]: 2026-02-28 10:57:56.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:57:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:57:57.901 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:57:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:57:57.902 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:57:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:57:57.902 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:57:58 np0005634017 nova_compute[243452]: 2026-02-28 10:57:58.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:57:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:57:59 np0005634017 nova_compute[243452]: 2026-02-28 10:57:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:57:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:00 np0005634017 nova_compute[243452]: 2026-02-28 10:58:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:00 np0005634017 nova_compute[243452]: 2026-02-28 10:58:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:00 np0005634017 nova_compute[243452]: 2026-02-28 10:58:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:00 np0005634017 nova_compute[243452]: 2026-02-28 10:58:00.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:58:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:58:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:58:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:58:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:58:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:58:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:58:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:02 np0005634017 nova_compute[243452]: 2026-02-28 10:58:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:03 np0005634017 nova_compute[243452]: 2026-02-28 10:58:03.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:06 np0005634017 nova_compute[243452]: 2026-02-28 10:58:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:07 np0005634017 nova_compute[243452]: 2026-02-28 10:58:07.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:07 np0005634017 nova_compute[243452]: 2026-02-28 10:58:07.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:58:07 np0005634017 nova_compute[243452]: 2026-02-28 10:58:07.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:58:07 np0005634017 nova_compute[243452]: 2026-02-28 10:58:07.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:58:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:08 np0005634017 nova_compute[243452]: 2026-02-28 10:58:08.078 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.348 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:58:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:58:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125471960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:58:10 np0005634017 nova_compute[243452]: 2026-02-28 10:58:10.917 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.091 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.092 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.093 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.093 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.160 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.161 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.185 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:58:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:58:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596854906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.805 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.814 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.926 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.931 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:58:11 np0005634017 nova_compute[243452]: 2026-02-28 10:58:11.931 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:58:13 np0005634017 nova_compute[243452]: 2026-02-28 10:58:13.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:58:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:14 np0005634017 nova_compute[243452]: 2026-02-28 10:58:14.927 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:15 np0005634017 podman[390303]: 2026-02-28 10:58:15.148983811 +0000 UTC m=+0.074938683 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 05:58:15 np0005634017 podman[390302]: 2026-02-28 10:58:15.187714898 +0000 UTC m=+0.115136512 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 28 05:58:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:18 np0005634017 nova_compute[243452]: 2026-02-28 10:58:18.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:23 np0005634017 nova_compute[243452]: 2026-02-28 10:58:23.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:58:23 np0005634017 nova_compute[243452]: 2026-02-28 10:58:23.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:23 np0005634017 nova_compute[243452]: 2026-02-28 10:58:23.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:58:23 np0005634017 nova_compute[243452]: 2026-02-28 10:58:23.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:58:23 np0005634017 nova_compute[243452]: 2026-02-28 10:58:23.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:58:23 np0005634017 nova_compute[243452]: 2026-02-28 10:58:23.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:28 np0005634017 nova_compute[243452]: 2026-02-28 10:58:28.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:58:29
Feb 28 05:58:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:58:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:58:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms']
Feb 28 05:58:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:58:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:58:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:58:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:58:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:58:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:58:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:58:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:33 np0005634017 nova_compute[243452]: 2026-02-28 10:58:33.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:34 np0005634017 systemd-logind[815]: New session 52 of user zuul.
Feb 28 05:58:34 np0005634017 systemd[1]: Started Session 52 of User zuul.
Feb 28 05:58:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:58:35.578 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 28 05:58:35 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:58:35.578 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 28 05:58:35 np0005634017 nova_compute[243452]: 2026-02-28 10:58:35.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:38 np0005634017 nova_compute[243452]: 2026-02-28 10:58:38.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:38 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:58:38.580 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 28 05:58:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:39 np0005634017 systemd-logind[815]: New session 53 of user zuul.
Feb 28 05:58:39 np0005634017 systemd[1]: Started Session 53 of User zuul.
Feb 28 05:58:40 np0005634017 systemd[1]: Reloading.
Feb 28 05:58:40 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 05:58:40 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 05:58:41 np0005634017 systemd[1]: Reloading.
Feb 28 05:58:41 np0005634017 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 05:58:41 np0005634017 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:58:41 np0005634017 systemd[1]: Starting Podman API Socket...
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:58:41 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:58:41 np0005634017 systemd[1]: Listening on Podman API Socket.
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:58:41 np0005634017 dbus-broker-launch[804]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Feb 28 05:58:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:41 np0005634017 systemd[1]: podman.socket: Deactivated successfully.
Feb 28 05:58:41 np0005634017 systemd[1]: Closed Podman API Socket.
Feb 28 05:58:41 np0005634017 systemd[1]: Stopping Podman API Socket...
Feb 28 05:58:41 np0005634017 systemd[1]: Starting Podman API Socket...
Feb 28 05:58:41 np0005634017 systemd[1]: Listening on Podman API Socket.
Feb 28 05:58:41 np0005634017 systemd-logind[815]: New session 54 of user zuul.
Feb 28 05:58:41 np0005634017 systemd[1]: Started Session 54 of User zuul.
Feb 28 05:58:41 np0005634017 systemd[1]: Starting Podman API Service...
Feb 28 05:58:41 np0005634017 systemd[1]: Started Podman API Service.
Feb 28 05:58:42 np0005634017 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 28 05:58:42 np0005634017 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Setting parallel job count to 25"
Feb 28 05:58:42 np0005634017 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Using sqlite as database backend"
Feb 28 05:58:42 np0005634017 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 28 05:58:42 np0005634017 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 28 05:58:42 np0005634017 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Feb 28 05:58:42 np0005634017 podman[390909]: @ - - [28/Feb/2026:10:58:42 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 28 05:58:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:58:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:58:42 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.051800621 +0000 UTC m=+0.056795119 container create b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 05:58:42 np0005634017 systemd[1]: Started libpod-conmon-b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980.scope.
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.025595179 +0000 UTC m=+0.030589667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:58:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.143861279 +0000 UTC m=+0.148855837 container init b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.155810957 +0000 UTC m=+0.160805465 container start b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.162871087 +0000 UTC m=+0.167865595 container attach b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:58:42 np0005634017 naughty_carson[390940]: 167 167
Feb 28 05:58:42 np0005634017 systemd[1]: libpod-b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980.scope: Deactivated successfully.
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.165330297 +0000 UTC m=+0.170324805 container died b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 05:58:42 np0005634017 podman[390909]: @ - - [28/Feb/2026:10:58:42 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24282 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 28 05:58:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-4caa9937293a3253e26272d648dbdf90a724686170dd8928852a42fe590f0b1c-merged.mount: Deactivated successfully.
Feb 28 05:58:42 np0005634017 podman[390912]: 2026-02-28 10:58:42.217128214 +0000 UTC m=+0.222122692 container remove b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:58:42 np0005634017 systemd[1]: libpod-conmon-b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980.scope: Deactivated successfully.
Feb 28 05:58:42 np0005634017 podman[390966]: 2026-02-28 10:58:42.376666242 +0000 UTC m=+0.039743436 container create d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 05:58:42 np0005634017 systemd[1]: Started libpod-conmon-d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6.scope.
Feb 28 05:58:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:58:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:42 np0005634017 podman[390966]: 2026-02-28 10:58:42.359902478 +0000 UTC m=+0.022979452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:58:42 np0005634017 podman[390966]: 2026-02-28 10:58:42.469058389 +0000 UTC m=+0.132135343 container init d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 05:58:42 np0005634017 podman[390966]: 2026-02-28 10:58:42.474975247 +0000 UTC m=+0.138052211 container start d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 05:58:42 np0005634017 podman[390966]: 2026-02-28 10:58:42.480538224 +0000 UTC m=+0.143615218 container attach d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:58:42 np0005634017 romantic_swirles[390982]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:58:42 np0005634017 romantic_swirles[390982]: --> All data devices are unavailable
Feb 28 05:58:43 np0005634017 systemd[1]: libpod-d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6.scope: Deactivated successfully.
Feb 28 05:58:43 np0005634017 podman[390966]: 2026-02-28 10:58:43.006759727 +0000 UTC m=+0.669836701 container died d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 05:58:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b-merged.mount: Deactivated successfully.
Feb 28 05:58:43 np0005634017 podman[390966]: 2026-02-28 10:58:43.05697253 +0000 UTC m=+0.720049484 container remove d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:58:43 np0005634017 systemd[1]: libpod-conmon-d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6.scope: Deactivated successfully.
Feb 28 05:58:43 np0005634017 nova_compute[243452]: 2026-02-28 10:58:43.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.545161536 +0000 UTC m=+0.065315431 container create 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:58:43 np0005634017 systemd[1]: Started libpod-conmon-616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8.scope.
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.517284156 +0000 UTC m=+0.037438031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:58:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.640981979 +0000 UTC m=+0.161135814 container init 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.648604115 +0000 UTC m=+0.168757890 container start 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.654227744 +0000 UTC m=+0.174381529 container attach 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 05:58:43 np0005634017 upbeat_ptolemy[391095]: 167 167
Feb 28 05:58:43 np0005634017 systemd[1]: libpod-616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8.scope: Deactivated successfully.
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.656397316 +0000 UTC m=+0.176551171 container died 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 05:58:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-1133b5c94222cbc1d2bdfca04342688d3bb4511e96f128079d53b4bb2493356e-merged.mount: Deactivated successfully.
Feb 28 05:58:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:43 np0005634017 podman[391078]: 2026-02-28 10:58:43.709295474 +0000 UTC m=+0.229449249 container remove 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:58:43 np0005634017 systemd[1]: libpod-conmon-616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8.scope: Deactivated successfully.
Feb 28 05:58:43 np0005634017 podman[391119]: 2026-02-28 10:58:43.883690513 +0000 UTC m=+0.060199126 container create 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 05:58:43 np0005634017 systemd[1]: Started libpod-conmon-0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c.scope.
Feb 28 05:58:43 np0005634017 podman[391119]: 2026-02-28 10:58:43.848166217 +0000 UTC m=+0.024674850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:58:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:58:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:43 np0005634017 podman[391119]: 2026-02-28 10:58:43.975236236 +0000 UTC m=+0.151744899 container init 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 05:58:43 np0005634017 podman[391119]: 2026-02-28 10:58:43.982738528 +0000 UTC m=+0.159247151 container start 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 05:58:43 np0005634017 podman[391119]: 2026-02-28 10:58:43.986015021 +0000 UTC m=+0.162523744 container attach 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]: {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:    "0": [
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:        {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "devices": [
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "/dev/loop3"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            ],
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_name": "ceph_lv0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_size": "21470642176",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "name": "ceph_lv0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "tags": {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cluster_name": "ceph",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.crush_device_class": "",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.encrypted": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.objectstore": "bluestore",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osd_id": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.type": "block",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.vdo": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.with_tpm": "0"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            },
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "type": "block",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "vg_name": "ceph_vg0"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:        }
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:    ],
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:    "1": [
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:        {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "devices": [
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "/dev/loop4"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            ],
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_name": "ceph_lv1",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_size": "21470642176",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "name": "ceph_lv1",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "tags": {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cluster_name": "ceph",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.crush_device_class": "",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.encrypted": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.objectstore": "bluestore",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osd_id": "1",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.type": "block",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.vdo": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.with_tpm": "0"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            },
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "type": "block",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "vg_name": "ceph_vg1"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:        }
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:    ],
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:    "2": [
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:        {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "devices": [
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "/dev/loop5"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            ],
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_name": "ceph_lv2",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_size": "21470642176",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "name": "ceph_lv2",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "tags": {
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.cluster_name": "ceph",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.crush_device_class": "",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.encrypted": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.objectstore": "bluestore",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osd_id": "2",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.type": "block",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.vdo": "0",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:                "ceph.with_tpm": "0"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            },
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "type": "block",
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:            "vg_name": "ceph_vg2"
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:        }
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]:    ]
Feb 28 05:58:44 np0005634017 cranky_ritchie[391136]: }
Feb 28 05:58:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:44 np0005634017 systemd[1]: libpod-0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c.scope: Deactivated successfully.
Feb 28 05:58:44 np0005634017 podman[391119]: 2026-02-28 10:58:44.287187701 +0000 UTC m=+0.463696354 container died 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 05:58:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6-merged.mount: Deactivated successfully.
Feb 28 05:58:44 np0005634017 podman[391119]: 2026-02-28 10:58:44.336423675 +0000 UTC m=+0.512932298 container remove 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:58:44 np0005634017 systemd[1]: libpod-conmon-0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c.scope: Deactivated successfully.
Feb 28 05:58:44 np0005634017 podman[391220]: 2026-02-28 10:58:44.845011979 +0000 UTC m=+0.057588982 container create 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:58:44 np0005634017 systemd[1]: Started libpod-conmon-626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288.scope.
Feb 28 05:58:44 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:58:44 np0005634017 podman[391220]: 2026-02-28 10:58:44.820162095 +0000 UTC m=+0.032739118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:58:44 np0005634017 podman[391220]: 2026-02-28 10:58:44.929823051 +0000 UTC m=+0.142400074 container init 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:58:44 np0005634017 podman[391220]: 2026-02-28 10:58:44.938460025 +0000 UTC m=+0.151036998 container start 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 28 05:58:44 np0005634017 podman[391220]: 2026-02-28 10:58:44.942463119 +0000 UTC m=+0.155040182 container attach 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 05:58:44 np0005634017 recursing_turing[391236]: 167 167
Feb 28 05:58:44 np0005634017 systemd[1]: libpod-626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288.scope: Deactivated successfully.
Feb 28 05:58:44 np0005634017 podman[391220]: 2026-02-28 10:58:44.945941667 +0000 UTC m=+0.158518680 container died 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:58:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e3cb1cedb78473ad7fa3d4e61ec7f5c58062821ac22c084b59a6d1e5f7092070-merged.mount: Deactivated successfully.
Feb 28 05:58:45 np0005634017 podman[391220]: 2026-02-28 10:58:45.012021739 +0000 UTC m=+0.224598722 container remove 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:58:45 np0005634017 systemd[1]: libpod-conmon-626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288.scope: Deactivated successfully.
Feb 28 05:58:45 np0005634017 podman[391260]: 2026-02-28 10:58:45.189218687 +0000 UTC m=+0.063996343 container create 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:58:45 np0005634017 systemd[1]: Started libpod-conmon-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope.
Feb 28 05:58:45 np0005634017 podman[391260]: 2026-02-28 10:58:45.163012365 +0000 UTC m=+0.037790061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:58:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:58:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:45 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:58:45 np0005634017 podman[391260]: 2026-02-28 10:58:45.305262224 +0000 UTC m=+0.180039940 container init 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 05:58:45 np0005634017 podman[391260]: 2026-02-28 10:58:45.312699324 +0000 UTC m=+0.187476950 container start 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:58:45 np0005634017 podman[391260]: 2026-02-28 10:58:45.316562563 +0000 UTC m=+0.191340189 container attach 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:58:45 np0005634017 podman[391278]: 2026-02-28 10:58:45.344464753 +0000 UTC m=+0.104660504 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 28 05:58:45 np0005634017 podman[391275]: 2026-02-28 10:58:45.357723888 +0000 UTC m=+0.120715448 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 05:58:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:58:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687783955' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:58:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:58:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687783955' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:58:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:45 np0005634017 lvm[391401]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:58:45 np0005634017 lvm[391402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:58:45 np0005634017 lvm[391402]: VG ceph_vg0 finished
Feb 28 05:58:45 np0005634017 lvm[391401]: VG ceph_vg1 finished
Feb 28 05:58:45 np0005634017 lvm[391404]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:58:45 np0005634017 lvm[391404]: VG ceph_vg2 finished
Feb 28 05:58:46 np0005634017 stoic_galileo[391279]: {}
Feb 28 05:58:46 np0005634017 systemd[1]: libpod-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope: Deactivated successfully.
Feb 28 05:58:46 np0005634017 systemd[1]: libpod-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope: Consumed 1.245s CPU time.
Feb 28 05:58:46 np0005634017 podman[391260]: 2026-02-28 10:58:46.102725828 +0000 UTC m=+0.977503524 container died 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 28 05:58:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb-merged.mount: Deactivated successfully.
Feb 28 05:58:46 np0005634017 podman[391260]: 2026-02-28 10:58:46.158065835 +0000 UTC m=+1.032843461 container remove 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:58:46 np0005634017 systemd[1]: libpod-conmon-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope: Deactivated successfully.
Feb 28 05:58:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:58:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:58:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:58:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:58:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:58:47 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:58:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:48 np0005634017 nova_compute[243452]: 2026-02-28 10:58:48.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:48 np0005634017 nova_compute[243452]: 2026-02-28 10:58:48.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:53 np0005634017 nova_compute[243452]: 2026-02-28 10:58:53.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:53 np0005634017 nova_compute[243452]: 2026-02-28 10:58:53.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:57 np0005634017 podman[390909]: time="2026-02-28T10:58:57Z" level=info msg="Received shutdown.Stop(), terminating!" PID=390909
Feb 28 05:58:57 np0005634017 systemd[1]: podman.service: Deactivated successfully.
Feb 28 05:58:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:58:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:58:57.903 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:58:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:58:57.904 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:58:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:58:57.904 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:58:58 np0005634017 nova_compute[243452]: 2026-02-28 10:58:58.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:58 np0005634017 nova_compute[243452]: 2026-02-28 10:58:58.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:58:58 np0005634017 nova_compute[243452]: 2026-02-28 10:58:58.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:58:59 np0005634017 nova_compute[243452]: 2026-02-28 10:58:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:58:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:00 np0005634017 nova_compute[243452]: 2026-02-28 10:59:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:00 np0005634017 nova_compute[243452]: 2026-02-28 10:59:00.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 05:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:59:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:59:01 np0005634017 nova_compute[243452]: 2026-02-28 10:59:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:02 np0005634017 nova_compute[243452]: 2026-02-28 10:59:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:02 np0005634017 nova_compute[243452]: 2026-02-28 10:59:02.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:03 np0005634017 nova_compute[243452]: 2026-02-28 10:59:03.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:04 np0005634017 systemd[1]: session-52.scope: Deactivated successfully.
Feb 28 05:59:04 np0005634017 systemd-logind[815]: Session 52 logged out. Waiting for processes to exit.
Feb 28 05:59:04 np0005634017 systemd-logind[815]: Removed session 52.
Feb 28 05:59:04 np0005634017 systemd-logind[815]: Session 53 logged out. Waiting for processes to exit.
Feb 28 05:59:04 np0005634017 systemd[1]: session-53.scope: Deactivated successfully.
Feb 28 05:59:04 np0005634017 systemd[1]: session-53.scope: Consumed 1.171s CPU time.
Feb 28 05:59:04 np0005634017 systemd-logind[815]: Removed session 53.
Feb 28 05:59:05 np0005634017 systemd[1]: session-54.scope: Deactivated successfully.
Feb 28 05:59:05 np0005634017 systemd-logind[815]: Session 54 logged out. Waiting for processes to exit.
Feb 28 05:59:05 np0005634017 systemd-logind[815]: Removed session 54.
Feb 28 05:59:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:07 np0005634017 nova_compute[243452]: 2026-02-28 10:59:07.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:07 np0005634017 nova_compute[243452]: 2026-02-28 10:59:07.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 05:59:07 np0005634017 nova_compute[243452]: 2026-02-28 10:59:07.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 05:59:07 np0005634017 nova_compute[243452]: 2026-02-28 10:59:07.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 05:59:07 np0005634017 nova_compute[243452]: 2026-02-28 10:59:07.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:08 np0005634017 nova_compute[243452]: 2026-02-28 10:59:08.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:59:08 np0005634017 nova_compute[243452]: 2026-02-28 10:59:08.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:08 np0005634017 nova_compute[243452]: 2026-02-28 10:59:08.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:59:08 np0005634017 nova_compute[243452]: 2026-02-28 10:59:08.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:59:08 np0005634017 nova_compute[243452]: 2026-02-28 10:59:08.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:59:08 np0005634017 nova_compute[243452]: 2026-02-28 10:59:08.109 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:59:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:59:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307564811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:59:11 np0005634017 nova_compute[243452]: 2026-02-28 10:59:11.979 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.179 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.181 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3613MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.181 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.182 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.335 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 05:59:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 05:59:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433169569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.944 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.952 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.968 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.970 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 05:59:12 np0005634017 nova_compute[243452]: 2026-02-28 10:59:12.971 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:59:13 np0005634017 nova_compute[243452]: 2026-02-28 10:59:13.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:16 np0005634017 podman[391541]: 2026-02-28 10:59:16.15941913 +0000 UTC m=+0.083454224 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:59:16 np0005634017 podman[391540]: 2026-02-28 10:59:16.185971642 +0000 UTC m=+0.112124506 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 05:59:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:18 np0005634017 nova_compute[243452]: 2026-02-28 10:59:18.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:23 np0005634017 nova_compute[243452]: 2026-02-28 10:59:23.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:59:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:28 np0005634017 nova_compute[243452]: 2026-02-28 10:59:28.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:59:28 np0005634017 nova_compute[243452]: 2026-02-28 10:59:28.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:28 np0005634017 nova_compute[243452]: 2026-02-28 10:59:28.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:59:28 np0005634017 nova_compute[243452]: 2026-02-28 10:59:28.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:59:28 np0005634017 nova_compute[243452]: 2026-02-28 10:59:28.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:59:28 np0005634017 nova_compute[243452]: 2026-02-28 10:59:28.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:59:29
Feb 28 05:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 05:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 05:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Feb 28 05:59:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 05:59:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 05:59:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 05:59:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:33 np0005634017 nova_compute[243452]: 2026-02-28 10:59:33.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:38 np0005634017 nova_compute[243452]: 2026-02-28 10:59:38.119 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:59:38 np0005634017 nova_compute[243452]: 2026-02-28 10:59:38.120 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:38 np0005634017 nova_compute[243452]: 2026-02-28 10:59:38.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 05:59:38 np0005634017 nova_compute[243452]: 2026-02-28 10:59:38.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:59:38 np0005634017 nova_compute[243452]: 2026-02-28 10:59:38.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 05:59:38 np0005634017 nova_compute[243452]: 2026-02-28 10:59:38.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 05:59:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:43 np0005634017 nova_compute[243452]: 2026-02-28 10:59:43.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 05:59:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1555191970' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 05:59:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 05:59:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1555191970' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 05:59:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:46 np0005634017 podman[391610]: 2026-02-28 10:59:46.536006378 +0000 UTC m=+0.112583020 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 28 05:59:46 np0005634017 podman[391611]: 2026-02-28 10:59:46.558089823 +0000 UTC m=+0.125569517 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 05:59:46 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.423168373 +0000 UTC m=+0.059598228 container create 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:59:47 np0005634017 systemd[1]: Started libpod-conmon-898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792.scope.
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.391682252 +0000 UTC m=+0.028112197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:59:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.51484079 +0000 UTC m=+0.151270675 container init 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.525198763 +0000 UTC m=+0.161628638 container start 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.529746532 +0000 UTC m=+0.166176387 container attach 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:59:47 np0005634017 gallant_golick[391791]: 167 167
Feb 28 05:59:47 np0005634017 systemd[1]: libpod-898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792.scope: Deactivated successfully.
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.534437815 +0000 UTC m=+0.170867700 container died 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 05:59:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3e88419c30e4885c72014ed72a1b5a1825ac20977dbe19abf0f174eda0bf5289-merged.mount: Deactivated successfully.
Feb 28 05:59:47 np0005634017 podman[391775]: 2026-02-28 10:59:47.582173957 +0000 UTC m=+0.218603812 container remove 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 28 05:59:47 np0005634017 systemd[1]: libpod-conmon-898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792.scope: Deactivated successfully.
Feb 28 05:59:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:47 np0005634017 podman[391816]: 2026-02-28 10:59:47.752648345 +0000 UTC m=+0.052944151 container create 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 05:59:47 np0005634017 systemd[1]: Started libpod-conmon-9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c.scope.
Feb 28 05:59:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:59:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:47 np0005634017 podman[391816]: 2026-02-28 10:59:47.734225443 +0000 UTC m=+0.034521249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:59:47 np0005634017 podman[391816]: 2026-02-28 10:59:47.843208929 +0000 UTC m=+0.143504705 container init 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 05:59:47 np0005634017 podman[391816]: 2026-02-28 10:59:47.85630738 +0000 UTC m=+0.156603156 container start 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:59:47 np0005634017 podman[391816]: 2026-02-28 10:59:47.860450698 +0000 UTC m=+0.160746484 container attach 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 28 05:59:48 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 05:59:48 np0005634017 nova_compute[243452]: 2026-02-28 10:59:48.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 05:59:48 np0005634017 wizardly_cartwright[391832]: --> passed data devices: 0 physical, 3 LVM
Feb 28 05:59:48 np0005634017 wizardly_cartwright[391832]: --> All data devices are unavailable
Feb 28 05:59:48 np0005634017 systemd[1]: libpod-9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c.scope: Deactivated successfully.
Feb 28 05:59:48 np0005634017 podman[391816]: 2026-02-28 10:59:48.304640608 +0000 UTC m=+0.604936384 container died 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 05:59:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a-merged.mount: Deactivated successfully.
Feb 28 05:59:48 np0005634017 podman[391816]: 2026-02-28 10:59:48.355801717 +0000 UTC m=+0.656097523 container remove 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:59:48 np0005634017 systemd[1]: libpod-conmon-9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c.scope: Deactivated successfully.
Feb 28 05:59:48 np0005634017 podman[391924]: 2026-02-28 10:59:48.862239839 +0000 UTC m=+0.058396054 container create 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:59:48 np0005634017 systemd[1]: Started libpod-conmon-91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849.scope.
Feb 28 05:59:48 np0005634017 podman[391924]: 2026-02-28 10:59:48.838398844 +0000 UTC m=+0.034555089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:59:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:59:48 np0005634017 podman[391924]: 2026-02-28 10:59:48.953377541 +0000 UTC m=+0.149533826 container init 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:59:48 np0005634017 podman[391924]: 2026-02-28 10:59:48.963786555 +0000 UTC m=+0.159942780 container start 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:59:48 np0005634017 podman[391924]: 2026-02-28 10:59:48.968513919 +0000 UTC m=+0.164670164 container attach 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 05:59:48 np0005634017 elastic_galileo[391941]: 167 167
Feb 28 05:59:48 np0005634017 systemd[1]: libpod-91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849.scope: Deactivated successfully.
Feb 28 05:59:48 np0005634017 podman[391924]: 2026-02-28 10:59:48.970830645 +0000 UTC m=+0.166986880 container died 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 05:59:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-794d47e1a03ebafd025c1416b0757cbf23889687d4d03fad64fc31a2e35ae9c5-merged.mount: Deactivated successfully.
Feb 28 05:59:49 np0005634017 podman[391924]: 2026-02-28 10:59:49.011184598 +0000 UTC m=+0.207340803 container remove 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 05:59:49 np0005634017 systemd[1]: libpod-conmon-91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849.scope: Deactivated successfully.
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.152734327 +0000 UTC m=+0.044874462 container create 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 05:59:49 np0005634017 systemd[1]: Started libpod-conmon-4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9.scope.
Feb 28 05:59:49 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:59:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:49 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.134639884 +0000 UTC m=+0.026780069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.258541903 +0000 UTC m=+0.150682078 container init 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:59:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.264254215 +0000 UTC m=+0.156394350 container start 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.268815634 +0000 UTC m=+0.160955809 container attach 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]: {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:    "0": [
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:        {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "devices": [
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "/dev/loop3"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            ],
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_name": "ceph_lv0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_size": "21470642176",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "name": "ceph_lv0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "tags": {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cluster_name": "ceph",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.crush_device_class": "",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.encrypted": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.objectstore": "bluestore",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osd_id": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.type": "block",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.vdo": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.with_tpm": "0"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            },
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "type": "block",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "vg_name": "ceph_vg0"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:        }
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:    ],
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:    "1": [
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:        {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "devices": [
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "/dev/loop4"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            ],
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_name": "ceph_lv1",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_size": "21470642176",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "name": "ceph_lv1",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "tags": {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cluster_name": "ceph",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.crush_device_class": "",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.encrypted": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.objectstore": "bluestore",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osd_id": "1",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.type": "block",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.vdo": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.with_tpm": "0"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            },
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "type": "block",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "vg_name": "ceph_vg1"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:        }
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:    ],
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:    "2": [
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:        {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "devices": [
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "/dev/loop5"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            ],
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_name": "ceph_lv2",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_size": "21470642176",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "name": "ceph_lv2",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "tags": {
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cephx_lockbox_secret": "",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.cluster_name": "ceph",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.crush_device_class": "",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.encrypted": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.objectstore": "bluestore",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osd_id": "2",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.type": "block",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.vdo": "0",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:                "ceph.with_tpm": "0"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            },
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "type": "block",
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:            "vg_name": "ceph_vg2"
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:        }
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]:    ]
Feb 28 05:59:49 np0005634017 wizardly_cartwright[391981]: }
Feb 28 05:59:49 np0005634017 systemd[1]: libpod-4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9.scope: Deactivated successfully.
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.510012325 +0000 UTC m=+0.402152460 container died 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 05:59:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753-merged.mount: Deactivated successfully.
Feb 28 05:59:49 np0005634017 podman[391965]: 2026-02-28 10:59:49.555672548 +0000 UTC m=+0.447812693 container remove 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 28 05:59:49 np0005634017 systemd[1]: libpod-conmon-4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9.scope: Deactivated successfully.
Feb 28 05:59:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:50.010592021 +0000 UTC m=+0.048240597 container create 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 05:59:50 np0005634017 systemd[1]: Started libpod-conmon-52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8.scope.
Feb 28 05:59:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:50.08928026 +0000 UTC m=+0.126928856 container init 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:49.995134123 +0000 UTC m=+0.032782719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:50.098357877 +0000 UTC m=+0.136006493 container start 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:50.102255437 +0000 UTC m=+0.139904023 container attach 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:59:50 np0005634017 tender_swanson[392082]: 167 167
Feb 28 05:59:50 np0005634017 systemd[1]: libpod-52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8.scope: Deactivated successfully.
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:50.104911872 +0000 UTC m=+0.142560438 container died 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 05:59:50 np0005634017 systemd[1]: var-lib-containers-storage-overlay-992f9b8af7cece38faf48b989e4f0632cd26dfa2b2ff01646f19d9fc2886de2a-merged.mount: Deactivated successfully.
Feb 28 05:59:50 np0005634017 podman[392065]: 2026-02-28 10:59:50.142458686 +0000 UTC m=+0.180107302 container remove 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:59:50 np0005634017 systemd[1]: libpod-conmon-52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8.scope: Deactivated successfully.
Feb 28 05:59:50 np0005634017 podman[392107]: 2026-02-28 10:59:50.324844411 +0000 UTC m=+0.049944255 container create 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 05:59:50 np0005634017 systemd[1]: Started libpod-conmon-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope.
Feb 28 05:59:50 np0005634017 podman[392107]: 2026-02-28 10:59:50.300666996 +0000 UTC m=+0.025766730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 05:59:50 np0005634017 systemd[1]: Started libcrun container.
Feb 28 05:59:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:50 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 05:59:50 np0005634017 podman[392107]: 2026-02-28 10:59:50.426396207 +0000 UTC m=+0.151496011 container init 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 05:59:50 np0005634017 podman[392107]: 2026-02-28 10:59:50.436496233 +0000 UTC m=+0.161595947 container start 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 05:59:50 np0005634017 podman[392107]: 2026-02-28 10:59:50.440493026 +0000 UTC m=+0.165592750 container attach 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 05:59:51 np0005634017 lvm[392203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 05:59:51 np0005634017 lvm[392202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 05:59:51 np0005634017 lvm[392203]: VG ceph_vg1 finished
Feb 28 05:59:51 np0005634017 lvm[392202]: VG ceph_vg0 finished
Feb 28 05:59:51 np0005634017 lvm[392205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 05:59:51 np0005634017 lvm[392205]: VG ceph_vg2 finished
Feb 28 05:59:51 np0005634017 brave_shannon[392123]: {}
Feb 28 05:59:51 np0005634017 systemd[1]: libpod-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope: Deactivated successfully.
Feb 28 05:59:51 np0005634017 systemd[1]: libpod-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope: Consumed 1.318s CPU time.
Feb 28 05:59:51 np0005634017 podman[392208]: 2026-02-28 10:59:51.690613621 +0000 UTC m=+0.027669714 container died 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 05:59:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354-merged.mount: Deactivated successfully.
Feb 28 05:59:51 np0005634017 podman[392208]: 2026-02-28 10:59:51.73152613 +0000 UTC m=+0.068582243 container remove 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 05:59:51 np0005634017 systemd[1]: libpod-conmon-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope: Deactivated successfully.
Feb 28 05:59:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 05:59:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:59:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 05:59:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:59:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:59:52 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 05:59:53 np0005634017 nova_compute[243452]: 2026-02-28 10:59:53.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 05:59:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:59:57.904 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 05:59:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:59:57.905 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 05:59:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 10:59:57.905 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 05:59:58 np0005634017 nova_compute[243452]: 2026-02-28 10:59:58.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.269834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399269926, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1494, "num_deletes": 257, "total_data_size": 2358689, "memory_usage": 2403312, "flush_reason": "Manual Compaction"}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399283770, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 2312737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59506, "largest_seqno": 60999, "table_properties": {"data_size": 2305819, "index_size": 3988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14228, "raw_average_key_size": 19, "raw_value_size": 2291924, "raw_average_value_size": 3152, "num_data_blocks": 179, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276244, "oldest_key_time": 1772276244, "file_creation_time": 1772276399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 14003 microseconds, and 6892 cpu microseconds.
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.283841) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 2312737 bytes OK
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.283870) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.285417) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.285448) EVENT_LOG_v1 {"time_micros": 1772276399285438, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.285484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2352126, prev total WAL file size 2352126, number of live WAL files 2.
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.286594) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353036' seq:72057594037927935, type:22 .. '6C6F676D0032373539' seq:0, type:0; will stop at (end)
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(2258KB)], [140(8880KB)]
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399286673, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11405951, "oldest_snapshot_seqno": -1}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8083 keys, 11286381 bytes, temperature: kUnknown
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399334162, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 11286381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11233211, "index_size": 31873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210757, "raw_average_key_size": 26, "raw_value_size": 11089883, "raw_average_value_size": 1372, "num_data_blocks": 1249, "num_entries": 8083, "num_filter_entries": 8083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.334404) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11286381 bytes
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.335800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.9 rd, 237.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 8.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.8) write-amplify(4.9) OK, records in: 8609, records dropped: 526 output_compression: NoCompression
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.335825) EVENT_LOG_v1 {"time_micros": 1772276399335813, "job": 86, "event": "compaction_finished", "compaction_time_micros": 47554, "compaction_time_cpu_micros": 29512, "output_level": 6, "num_output_files": 1, "total_output_size": 11286381, "num_input_records": 8609, "num_output_records": 8083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399336229, "job": 86, "event": "table_file_deletion", "file_number": 142}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399337177, "job": 86, "event": "table_file_deletion", "file_number": 140}
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.286509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:59:59 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 05:59:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:00:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:00:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:01 np0005634017 nova_compute[243452]: 2026-02-28 11:00:01.973 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:01 np0005634017 nova_compute[243452]: 2026-02-28 11:00:01.974 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:01 np0005634017 nova_compute[243452]: 2026-02-28 11:00:01.975 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:01 np0005634017 nova_compute[243452]: 2026-02-28 11:00:01.975 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:01 np0005634017 nova_compute[243452]: 2026-02-28 11:00:01.976 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:00:02 np0005634017 nova_compute[243452]: 2026-02-28 11:00:02.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:03 np0005634017 nova_compute[243452]: 2026-02-28 11:00:03.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:00:03 np0005634017 nova_compute[243452]: 2026-02-28 11:00:03.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:03 np0005634017 nova_compute[243452]: 2026-02-28 11:00:03.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:00:03 np0005634017 nova_compute[243452]: 2026-02-28 11:00:03.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:00:03 np0005634017 nova_compute[243452]: 2026-02-28 11:00:03.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:00:03 np0005634017 nova_compute[243452]: 2026-02-28 11:00:03.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:04 np0005634017 nova_compute[243452]: 2026-02-28 11:00:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.830558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405830680, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 305, "num_deletes": 251, "total_data_size": 124248, "memory_usage": 131272, "flush_reason": "Manual Compaction"}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405834424, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 123467, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61000, "largest_seqno": 61304, "table_properties": {"data_size": 121471, "index_size": 223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5063, "raw_average_key_size": 18, "raw_value_size": 117584, "raw_average_value_size": 426, "num_data_blocks": 10, "num_entries": 276, "num_filter_entries": 276, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276400, "oldest_key_time": 1772276400, "file_creation_time": 1772276405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3972 microseconds, and 1504 cpu microseconds.
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.834534) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 123467 bytes OK
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.834594) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836047) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836124) EVENT_LOG_v1 {"time_micros": 1772276405836115, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836150) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 122054, prev total WAL file size 122054, number of live WAL files 2.
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(120KB)], [143(10MB)]
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405836950, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 11409848, "oldest_snapshot_seqno": -1}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7850 keys, 9629039 bytes, temperature: kUnknown
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405882030, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9629039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9578964, "index_size": 29354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 206601, "raw_average_key_size": 26, "raw_value_size": 9441208, "raw_average_value_size": 1202, "num_data_blocks": 1134, "num_entries": 7850, "num_filter_entries": 7850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.882519) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9629039 bytes
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.883883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.8 rd, 212.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(170.4) write-amplify(78.0) OK, records in: 8359, records dropped: 509 output_compression: NoCompression
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.883918) EVENT_LOG_v1 {"time_micros": 1772276405883902, "job": 88, "event": "compaction_finished", "compaction_time_micros": 45312, "compaction_time_cpu_micros": 23971, "output_level": 6, "num_output_files": 1, "total_output_size": 9629039, "num_input_records": 8359, "num_output_records": 7850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405884313, "job": 88, "event": "table_file_deletion", "file_number": 145}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405886868, "job": 88, "event": "table_file_deletion", "file_number": 143}
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:00:05 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:00:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:08 np0005634017 nova_compute[243452]: 2026-02-28 11:00:08.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:08 np0005634017 nova_compute[243452]: 2026-02-28 11:00:08.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:08 np0005634017 nova_compute[243452]: 2026-02-28 11:00:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:08 np0005634017 nova_compute[243452]: 2026-02-28 11:00:08.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:00:08 np0005634017 nova_compute[243452]: 2026-02-28 11:00:08.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:00:08 np0005634017 nova_compute[243452]: 2026-02-28 11:00:08.397 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:00:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:09 np0005634017 nova_compute[243452]: 2026-02-28 11:00:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.376 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.376 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.377 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.377 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.377 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:00:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:00:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738323311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:00:12 np0005634017 nova_compute[243452]: 2026-02-28 11:00:12.956 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.110 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.112 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.112 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.113 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.191 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.191 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.305 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.414 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.415 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.430 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.452 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.470 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:00:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:00:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438395517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:00:13 np0005634017 nova_compute[243452]: 2026-02-28 11:00:13.994 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:00:14 np0005634017 nova_compute[243452]: 2026-02-28 11:00:14.000 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:00:14 np0005634017 nova_compute[243452]: 2026-02-28 11:00:14.019 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:00:14 np0005634017 nova_compute[243452]: 2026-02-28 11:00:14.021 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:00:14 np0005634017 nova_compute[243452]: 2026-02-28 11:00:14.021 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:00:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:15 np0005634017 nova_compute[243452]: 2026-02-28 11:00:15.018 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:00:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:17 np0005634017 podman[392295]: 2026-02-28 11:00:17.143004108 +0000 UTC m=+0.071448125 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 06:00:17 np0005634017 podman[392294]: 2026-02-28 11:00:17.180390507 +0000 UTC m=+0.108881315 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 06:00:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:18 np0005634017 nova_compute[243452]: 2026-02-28 11:00:18.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:23 np0005634017 nova_compute[243452]: 2026-02-28 11:00:23.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:28 np0005634017 nova_compute[243452]: 2026-02-28 11:00:28.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:00:29
Feb 28 06:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr']
Feb 28 06:00:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:00:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:00:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:00:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:33 np0005634017 nova_compute[243452]: 2026-02-28 11:00:33.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:00:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:38 np0005634017 nova_compute[243452]: 2026-02-28 11:00:38.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:00:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:00:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:43 np0005634017 nova_compute[243452]: 2026-02-28 11:00:43.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:00:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:00:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/342907728' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:00:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:00:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/342907728' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:00:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:48 np0005634017 podman[392337]: 2026-02-28 11:00:48.149705952 +0000 UTC m=+0.078913175 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 06:00:48 np0005634017 nova_compute[243452]: 2026-02-28 11:00:48.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:00:48 np0005634017 podman[392336]: 2026-02-28 11:00:48.17291685 +0000 UTC m=+0.105490669 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 06:00:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:00:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:00:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:53 np0005634017 nova_compute[243452]: 2026-02-28 11:00:53.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:00:53 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:00:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.77831925 +0000 UTC m=+0.076724664 container create 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 06:00:53 np0005634017 systemd[1]: Started libpod-conmon-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope.
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.749160014 +0000 UTC m=+0.047565478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:00:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.889332084 +0000 UTC m=+0.187737538 container init 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.899930904 +0000 UTC m=+0.198336288 container start 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 06:00:53 np0005634017 sweet_engelbart[392613]: 167 167
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.907709245 +0000 UTC m=+0.206114649 container attach 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 06:00:53 np0005634017 systemd[1]: libpod-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope: Deactivated successfully.
Feb 28 06:00:53 np0005634017 conmon[392613]: conmon 7572a451158c8f119c52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope/container/memory.events
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.909428913 +0000 UTC m=+0.207834337 container died 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:00:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e5b18a34aeb32f61ff8f9e7a5b3eda6e2a6942b1736fd96627e6c0f20e10722a-merged.mount: Deactivated successfully.
Feb 28 06:00:53 np0005634017 podman[392597]: 2026-02-28 11:00:53.988122462 +0000 UTC m=+0.286527856 container remove 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 06:00:53 np0005634017 systemd[1]: libpod-conmon-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope: Deactivated successfully.
Feb 28 06:00:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:00:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:54 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.183681949 +0000 UTC m=+0.065013781 container create bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:00:54 np0005634017 systemd[1]: Started libpod-conmon-bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424.scope.
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.155964595 +0000 UTC m=+0.037296507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:00:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:00:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.298269405 +0000 UTC m=+0.179601257 container init bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.307167317 +0000 UTC m=+0.188499139 container start bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.318293792 +0000 UTC m=+0.199625744 container attach bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 06:00:54 np0005634017 nostalgic_beaver[392656]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:00:54 np0005634017 nostalgic_beaver[392656]: --> All data devices are unavailable
Feb 28 06:00:54 np0005634017 systemd[1]: libpod-bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424.scope: Deactivated successfully.
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.851340438 +0000 UTC m=+0.732672300 container died bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:00:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce-merged.mount: Deactivated successfully.
Feb 28 06:00:54 np0005634017 podman[392639]: 2026-02-28 11:00:54.904138234 +0000 UTC m=+0.785470066 container remove bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:00:54 np0005634017 systemd[1]: libpod-conmon-bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424.scope: Deactivated successfully.
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.418866821 +0000 UTC m=+0.049060940 container create 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:00:55 np0005634017 systemd[1]: Started libpod-conmon-6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7.scope.
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.393375789 +0000 UTC m=+0.023569928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:00:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.526560281 +0000 UTC m=+0.156754400 container init 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.534776184 +0000 UTC m=+0.164970263 container start 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.538695815 +0000 UTC m=+0.168889924 container attach 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:00:55 np0005634017 great_ardinghelli[392768]: 167 167
Feb 28 06:00:55 np0005634017 systemd[1]: libpod-6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7.scope: Deactivated successfully.
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.542308407 +0000 UTC m=+0.172502516 container died 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:00:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c92563f4244079ff65a75fe760de7c0d0a8884ccba9416d66bfc3e3b32299277-merged.mount: Deactivated successfully.
Feb 28 06:00:55 np0005634017 podman[392752]: 2026-02-28 11:00:55.588826825 +0000 UTC m=+0.219020904 container remove 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 06:00:55 np0005634017 systemd[1]: libpod-conmon-6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7.scope: Deactivated successfully.
Feb 28 06:00:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:55 np0005634017 podman[392792]: 2026-02-28 11:00:55.798201724 +0000 UTC m=+0.064594070 container create e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:00:55 np0005634017 systemd[1]: Started libpod-conmon-e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1.scope.
Feb 28 06:00:55 np0005634017 podman[392792]: 2026-02-28 11:00:55.769981005 +0000 UTC m=+0.036373411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:00:55 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:00:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:55 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:55 np0005634017 podman[392792]: 2026-02-28 11:00:55.918622755 +0000 UTC m=+0.185015171 container init e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:00:55 np0005634017 podman[392792]: 2026-02-28 11:00:55.929354489 +0000 UTC m=+0.195746805 container start e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 06:00:55 np0005634017 podman[392792]: 2026-02-28 11:00:55.933874607 +0000 UTC m=+0.200267003 container attach e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:00:56 np0005634017 nice_germain[392808]: {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:    "0": [
Feb 28 06:00:56 np0005634017 nice_germain[392808]:        {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "devices": [
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "/dev/loop3"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            ],
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_name": "ceph_lv0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_size": "21470642176",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "name": "ceph_lv0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "tags": {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cluster_name": "ceph",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.crush_device_class": "",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.encrypted": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.objectstore": "bluestore",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osd_id": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.type": "block",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.vdo": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.with_tpm": "0"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            },
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "type": "block",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "vg_name": "ceph_vg0"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:        }
Feb 28 06:00:56 np0005634017 nice_germain[392808]:    ],
Feb 28 06:00:56 np0005634017 nice_germain[392808]:    "1": [
Feb 28 06:00:56 np0005634017 nice_germain[392808]:        {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "devices": [
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "/dev/loop4"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            ],
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_name": "ceph_lv1",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_size": "21470642176",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "name": "ceph_lv1",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "tags": {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cluster_name": "ceph",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.crush_device_class": "",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.encrypted": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.objectstore": "bluestore",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osd_id": "1",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.type": "block",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.vdo": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.with_tpm": "0"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            },
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "type": "block",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "vg_name": "ceph_vg1"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:        }
Feb 28 06:00:56 np0005634017 nice_germain[392808]:    ],
Feb 28 06:00:56 np0005634017 nice_germain[392808]:    "2": [
Feb 28 06:00:56 np0005634017 nice_germain[392808]:        {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "devices": [
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "/dev/loop5"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            ],
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_name": "ceph_lv2",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_size": "21470642176",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "name": "ceph_lv2",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "tags": {
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.cluster_name": "ceph",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.crush_device_class": "",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.encrypted": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.objectstore": "bluestore",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osd_id": "2",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.type": "block",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.vdo": "0",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:                "ceph.with_tpm": "0"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            },
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "type": "block",
Feb 28 06:00:56 np0005634017 nice_germain[392808]:            "vg_name": "ceph_vg2"
Feb 28 06:00:56 np0005634017 nice_germain[392808]:        }
Feb 28 06:00:56 np0005634017 nice_germain[392808]:    ]
Feb 28 06:00:56 np0005634017 nice_germain[392808]: }
Feb 28 06:00:56 np0005634017 systemd[1]: libpod-e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1.scope: Deactivated successfully.
Feb 28 06:00:56 np0005634017 podman[392792]: 2026-02-28 11:00:56.280606487 +0000 UTC m=+0.546998833 container died e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:00:56 np0005634017 systemd[1]: var-lib-containers-storage-overlay-947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403-merged.mount: Deactivated successfully.
Feb 28 06:00:56 np0005634017 podman[392792]: 2026-02-28 11:00:56.337299432 +0000 UTC m=+0.603691748 container remove e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:00:56 np0005634017 systemd[1]: libpod-conmon-e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1.scope: Deactivated successfully.
Feb 28 06:00:56 np0005634017 podman[392893]: 2026-02-28 11:00:56.880137286 +0000 UTC m=+0.046439796 container create 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:00:56 np0005634017 systemd[1]: Started libpod-conmon-908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45.scope.
Feb 28 06:00:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:00:56 np0005634017 podman[392893]: 2026-02-28 11:00:56.858310878 +0000 UTC m=+0.024613398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:00:56 np0005634017 podman[392893]: 2026-02-28 11:00:56.963668562 +0000 UTC m=+0.129971112 container init 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:00:56 np0005634017 podman[392893]: 2026-02-28 11:00:56.971518454 +0000 UTC m=+0.137820954 container start 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 06:00:56 np0005634017 podman[392893]: 2026-02-28 11:00:56.975983471 +0000 UTC m=+0.142285951 container attach 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:00:56 np0005634017 angry_beaver[392909]: 167 167
Feb 28 06:00:56 np0005634017 systemd[1]: libpod-908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45.scope: Deactivated successfully.
Feb 28 06:00:56 np0005634017 podman[392893]: 2026-02-28 11:00:56.978216624 +0000 UTC m=+0.144519124 container died 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 06:00:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ba069e5449ed57e138341724b1091698b7a657ed86b963fa4728b56b8807112b-merged.mount: Deactivated successfully.
Feb 28 06:00:57 np0005634017 podman[392893]: 2026-02-28 11:00:57.026407549 +0000 UTC m=+0.192710059 container remove 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:00:57 np0005634017 systemd[1]: libpod-conmon-908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45.scope: Deactivated successfully.
Feb 28 06:00:57 np0005634017 podman[392933]: 2026-02-28 11:00:57.243437715 +0000 UTC m=+0.067956395 container create 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 06:00:57 np0005634017 systemd[1]: Started libpod-conmon-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope.
Feb 28 06:00:57 np0005634017 podman[392933]: 2026-02-28 11:00:57.214467765 +0000 UTC m=+0.038986455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:00:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:00:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:00:57 np0005634017 podman[392933]: 2026-02-28 11:00:57.342003557 +0000 UTC m=+0.166522267 container init 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 06:00:57 np0005634017 podman[392933]: 2026-02-28 11:00:57.348722207 +0000 UTC m=+0.173240897 container start 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:00:57 np0005634017 podman[392933]: 2026-02-28 11:00:57.352155064 +0000 UTC m=+0.176673764 container attach 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:00:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:00:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:00:57.905 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:00:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:00:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:00:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:00:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:00:58 np0005634017 lvm[393028]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:00:58 np0005634017 lvm[393028]: VG ceph_vg0 finished
Feb 28 06:00:58 np0005634017 lvm[393029]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:00:58 np0005634017 lvm[393029]: VG ceph_vg1 finished
Feb 28 06:00:58 np0005634017 lvm[393031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:00:58 np0005634017 lvm[393031]: VG ceph_vg2 finished
Feb 28 06:00:58 np0005634017 tender_shannon[392950]: {}
Feb 28 06:00:58 np0005634017 nova_compute[243452]: 2026-02-28 11:00:58.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:00:58 np0005634017 systemd[1]: libpod-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope: Deactivated successfully.
Feb 28 06:00:58 np0005634017 systemd[1]: libpod-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope: Consumed 1.251s CPU time.
Feb 28 06:00:58 np0005634017 podman[392933]: 2026-02-28 11:00:58.198548284 +0000 UTC m=+1.023066964 container died 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:00:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6-merged.mount: Deactivated successfully.
Feb 28 06:00:58 np0005634017 podman[392933]: 2026-02-28 11:00:58.256179126 +0000 UTC m=+1.080697816 container remove 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 06:00:58 np0005634017 systemd[1]: libpod-conmon-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope: Deactivated successfully.
Feb 28 06:00:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:00:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:00:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:00:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:00:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:00 np0005634017 nova_compute[243452]: 2026-02-28 11:01:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:01:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:01:01 np0005634017 nova_compute[243452]: 2026-02-28 11:01:01.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:01 np0005634017 nova_compute[243452]: 2026-02-28 11:01:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:03 np0005634017 nova_compute[243452]: 2026-02-28 11:01:03.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:03 np0005634017 nova_compute[243452]: 2026-02-28 11:01:03.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:03 np0005634017 nova_compute[243452]: 2026-02-28 11:01:03.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:01:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:04 np0005634017 nova_compute[243452]: 2026-02-28 11:01:04.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:04 np0005634017 nova_compute[243452]: 2026-02-28 11:01:04.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:08 np0005634017 nova_compute[243452]: 2026-02-28 11:01:08.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:10 np0005634017 nova_compute[243452]: 2026-02-28 11:01:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:10 np0005634017 nova_compute[243452]: 2026-02-28 11:01:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:01:10 np0005634017 nova_compute[243452]: 2026-02-28 11:01:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:01:10 np0005634017 nova_compute[243452]: 2026-02-28 11:01:10.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:01:10 np0005634017 nova_compute[243452]: 2026-02-28 11:01:10.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.354 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:01:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:01:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209747722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:01:12 np0005634017 nova_compute[243452]: 2026-02-28 11:01:12.929 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.123 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.124 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.165 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.186 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.187 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.208 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:01:13 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:01:13 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127938045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.740 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.745 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.764 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.766 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:01:13 np0005634017 nova_compute[243452]: 2026-02-28 11:01:13.766 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:01:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:18 np0005634017 nova_compute[243452]: 2026-02-28 11:01:18.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:19 np0005634017 podman[393130]: 2026-02-28 11:01:19.145934236 +0000 UTC m=+0.076265291 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 28 06:01:19 np0005634017 podman[393129]: 2026-02-28 11:01:19.181279257 +0000 UTC m=+0.111662694 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 06:01:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:23 np0005634017 nova_compute[243452]: 2026-02-28 11:01:23.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:28 np0005634017 nova_compute[243452]: 2026-02-28 11:01:28.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:01:29
Feb 28 06:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 28 06:01:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:01:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:01:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:01:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:33 np0005634017 nova_compute[243452]: 2026-02-28 11:01:33.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:38 np0005634017 nova_compute[243452]: 2026-02-28 11:01:38.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:01:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 06:01:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Feb 28 06:01:42 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Feb 28 06:01:42 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Feb 28 06:01:43 np0005634017 nova_compute[243452]: 2026-02-28 11:01:43.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 33 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 818 B/s wr, 5 op/s
Feb 28 06:01:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/863113695' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/863113695' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:01:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 21 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Feb 28 06:01:45 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Feb 28 06:01:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 21 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 28 06:01:48 np0005634017 nova_compute[243452]: 2026-02-28 11:01:48.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Feb 28 06:01:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Feb 28 06:01:49 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Feb 28 06:01:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 4.9 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.1 KiB/s wr, 71 op/s
Feb 28 06:01:50 np0005634017 podman[393176]: 2026-02-28 11:01:50.150025595 +0000 UTC m=+0.080034208 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 28 06:01:50 np0005634017 podman[393175]: 2026-02-28 11:01:50.188933017 +0000 UTC m=+0.124313322 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:01:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Feb 28 06:01:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Feb 28 06:01:51 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Feb 28 06:01:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.0 KiB/s wr, 44 op/s
Feb 28 06:01:53 np0005634017 nova_compute[243452]: 2026-02-28 11:01:53.180 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 33 op/s
Feb 28 06:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Feb 28 06:01:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Feb 28 06:01:54 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Feb 28 06:01:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 MiB/s wr, 54 op/s
Feb 28 06:01:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Feb 28 06:01:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:01:57.906 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:01:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:01:57.907 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:01:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:01:57.907 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:01:58 np0005634017 nova_compute[243452]: 2026-02-28 11:01:58.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:01:58 np0005634017 nova_compute[243452]: 2026-02-28 11:01:58.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:58 np0005634017 nova_compute[243452]: 2026-02-28 11:01:58.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:01:58 np0005634017 nova_compute[243452]: 2026-02-28 11:01:58.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:01:58 np0005634017 nova_compute[243452]: 2026-02-28 11:01:58.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:01:58 np0005634017 nova_compute[243452]: 2026-02-28 11:01:58.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.517220111 +0000 UTC m=+0.051742556 container create a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 06:01:59 np0005634017 systemd[1]: Started libpod-conmon-a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc.scope.
Feb 28 06:01:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.490535326 +0000 UTC m=+0.025057791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.589114128 +0000 UTC m=+0.123636603 container init a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.596773725 +0000 UTC m=+0.131296170 container start a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.600468649 +0000 UTC m=+0.134991114 container attach a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 06:01:59 np0005634017 quizzical_nobel[393377]: 167 167
Feb 28 06:01:59 np0005634017 systemd[1]: libpod-a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc.scope: Deactivated successfully.
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.6050836 +0000 UTC m=+0.139606055 container died a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 06:01:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-526eece268e7da4fc04b89c242173b1c1cc0b507fc75d940df09901959e77105-merged.mount: Deactivated successfully.
Feb 28 06:01:59 np0005634017 podman[393360]: 2026-02-28 11:01:59.646292637 +0000 UTC m=+0.180815082 container remove a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:01:59 np0005634017 systemd[1]: libpod-conmon-a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc.scope: Deactivated successfully.
Feb 28 06:01:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 8.1 KiB/s rd, 2.4 MiB/s wr, 12 op/s
Feb 28 06:01:59 np0005634017 podman[393402]: 2026-02-28 11:01:59.774487188 +0000 UTC m=+0.030621079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:01:59 np0005634017 podman[393402]: 2026-02-28 11:01:59.905592441 +0000 UTC m=+0.161726272 container create 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:01:59 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:01:59 np0005634017 systemd[1]: Started libpod-conmon-8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992.scope.
Feb 28 06:01:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:01:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:01:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:01:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:01:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:01:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:00 np0005634017 podman[393402]: 2026-02-28 11:02:00.004908163 +0000 UTC m=+0.261042044 container init 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:02:00 np0005634017 podman[393402]: 2026-02-28 11:02:00.010523092 +0000 UTC m=+0.266656923 container start 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:02:00 np0005634017 podman[393402]: 2026-02-28 11:02:00.014197046 +0000 UTC m=+0.270330967 container attach 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 06:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:02:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:02:00 np0005634017 distracted_feynman[393418]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:02:00 np0005634017 distracted_feynman[393418]: --> All data devices are unavailable
Feb 28 06:02:00 np0005634017 systemd[1]: libpod-8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992.scope: Deactivated successfully.
Feb 28 06:02:00 np0005634017 podman[393402]: 2026-02-28 11:02:00.491135334 +0000 UTC m=+0.747269205 container died 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 06:02:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a-merged.mount: Deactivated successfully.
Feb 28 06:02:00 np0005634017 podman[393402]: 2026-02-28 11:02:00.538996639 +0000 UTC m=+0.795130510 container remove 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 06:02:00 np0005634017 systemd[1]: libpod-conmon-8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992.scope: Deactivated successfully.
Feb 28 06:02:00 np0005634017 podman[393513]: 2026-02-28 11:02:00.975916733 +0000 UTC m=+0.046481877 container create 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 06:02:01 np0005634017 systemd[1]: Started libpod-conmon-05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59.scope.
Feb 28 06:02:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:02:01 np0005634017 podman[393513]: 2026-02-28 11:02:00.953527069 +0000 UTC m=+0.024092243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:02:01 np0005634017 podman[393513]: 2026-02-28 11:02:01.050319761 +0000 UTC m=+0.120884925 container init 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 06:02:01 np0005634017 podman[393513]: 2026-02-28 11:02:01.056120225 +0000 UTC m=+0.126685369 container start 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:02:01 np0005634017 elastic_dewdney[393530]: 167 167
Feb 28 06:02:01 np0005634017 systemd[1]: libpod-05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59.scope: Deactivated successfully.
Feb 28 06:02:01 np0005634017 podman[393513]: 2026-02-28 11:02:01.059409388 +0000 UTC m=+0.129974552 container attach 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 06:02:01 np0005634017 podman[393513]: 2026-02-28 11:02:01.05983765 +0000 UTC m=+0.130402794 container died 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:02:01 np0005634017 systemd[1]: var-lib-containers-storage-overlay-38cd46eceb73a02471b5dcc4c5dfca3d51546c801ba3f6ccbc13ba9e62fad3f5-merged.mount: Deactivated successfully.
Feb 28 06:02:01 np0005634017 podman[393513]: 2026-02-28 11:02:01.090246321 +0000 UTC m=+0.160811465 container remove 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 06:02:01 np0005634017 systemd[1]: libpod-conmon-05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59.scope: Deactivated successfully.
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.195958255 +0000 UTC m=+0.033361946 container create ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 06:02:01 np0005634017 systemd[1]: Started libpod-conmon-ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6.scope.
Feb 28 06:02:01 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:02:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:01 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.181413823 +0000 UTC m=+0.018817534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.280497109 +0000 UTC m=+0.117900840 container init ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.286485639 +0000 UTC m=+0.123889340 container start ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.292481029 +0000 UTC m=+0.129884750 container attach ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]: {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:    "0": [
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:        {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "devices": [
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "/dev/loop3"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            ],
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_name": "ceph_lv0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_size": "21470642176",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "name": "ceph_lv0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "tags": {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cluster_name": "ceph",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.crush_device_class": "",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.encrypted": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.objectstore": "bluestore",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osd_id": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.type": "block",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.vdo": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.with_tpm": "0"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            },
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "type": "block",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "vg_name": "ceph_vg0"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:        }
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:    ],
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:    "1": [
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:        {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "devices": [
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "/dev/loop4"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            ],
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_name": "ceph_lv1",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_size": "21470642176",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "name": "ceph_lv1",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "tags": {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cluster_name": "ceph",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.crush_device_class": "",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.encrypted": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.objectstore": "bluestore",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osd_id": "1",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.type": "block",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.vdo": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.with_tpm": "0"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            },
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "type": "block",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "vg_name": "ceph_vg1"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:        }
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:    ],
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:    "2": [
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:        {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "devices": [
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "/dev/loop5"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            ],
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_name": "ceph_lv2",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_size": "21470642176",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "name": "ceph_lv2",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "tags": {
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.cluster_name": "ceph",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.crush_device_class": "",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.encrypted": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.objectstore": "bluestore",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osd_id": "2",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.type": "block",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.vdo": "0",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:                "ceph.with_tpm": "0"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            },
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "type": "block",
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:            "vg_name": "ceph_vg2"
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:        }
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]:    ]
Feb 28 06:02:01 np0005634017 dreamy_bhaskara[393571]: }
Feb 28 06:02:01 np0005634017 systemd[1]: libpod-ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6.scope: Deactivated successfully.
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.582906234 +0000 UTC m=+0.420309935 container died ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:02:01 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457-merged.mount: Deactivated successfully.
Feb 28 06:02:01 np0005634017 podman[393554]: 2026-02-28 11:02:01.627671952 +0000 UTC m=+0.465075683 container remove ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 06:02:01 np0005634017 systemd[1]: libpod-conmon-ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6.scope: Deactivated successfully.
Feb 28 06:02:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.058769681 +0000 UTC m=+0.042328830 container create 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:02:02 np0005634017 systemd[1]: Started libpod-conmon-38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b.scope.
Feb 28 06:02:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.123898846 +0000 UTC m=+0.107458055 container init 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.130034779 +0000 UTC m=+0.113593948 container start 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:02:02 np0005634017 bold_noyce[393670]: 167 167
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.134687051 +0000 UTC m=+0.118246220 container attach 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:02:02 np0005634017 systemd[1]: libpod-38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b.scope: Deactivated successfully.
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.135421982 +0000 UTC m=+0.118981151 container died 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.041298016 +0000 UTC m=+0.024857165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:02:02 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8a2a53729a962318a4293b3d3086fbd873d94c40d56e3fbae8aed017e97196d7-merged.mount: Deactivated successfully.
Feb 28 06:02:02 np0005634017 podman[393654]: 2026-02-28 11:02:02.184621234 +0000 UTC m=+0.168180403 container remove 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:02:02 np0005634017 systemd[1]: libpod-conmon-38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b.scope: Deactivated successfully.
Feb 28 06:02:02 np0005634017 podman[393694]: 2026-02-28 11:02:02.362566974 +0000 UTC m=+0.054018911 container create f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 06:02:02 np0005634017 systemd[1]: Started libpod-conmon-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope.
Feb 28 06:02:02 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:02:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:02 np0005634017 podman[393694]: 2026-02-28 11:02:02.341617631 +0000 UTC m=+0.033069618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:02:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:02 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:02:02 np0005634017 podman[393694]: 2026-02-28 11:02:02.470895362 +0000 UTC m=+0.162347299 container init f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 06:02:02 np0005634017 podman[393694]: 2026-02-28 11:02:02.481197874 +0000 UTC m=+0.172649801 container start f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:02:02 np0005634017 podman[393694]: 2026-02-28 11:02:02.485621329 +0000 UTC m=+0.177073266 container attach f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:02:02 np0005634017 nova_compute[243452]: 2026-02-28 11:02:02.767 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:02 np0005634017 nova_compute[243452]: 2026-02-28 11:02:02.768 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:03 np0005634017 lvm[393790]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:02:03 np0005634017 lvm[393787]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:02:03 np0005634017 lvm[393790]: VG ceph_vg2 finished
Feb 28 06:02:03 np0005634017 lvm[393787]: VG ceph_vg0 finished
Feb 28 06:02:03 np0005634017 lvm[393791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:02:03 np0005634017 lvm[393791]: VG ceph_vg1 finished
Feb 28 06:02:03 np0005634017 nova_compute[243452]: 2026-02-28 11:02:03.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:02:03 np0005634017 epic_grothendieck[393710]: {}
Feb 28 06:02:03 np0005634017 systemd[1]: libpod-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope: Deactivated successfully.
Feb 28 06:02:03 np0005634017 systemd[1]: libpod-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope: Consumed 1.108s CPU time.
Feb 28 06:02:03 np0005634017 podman[393694]: 2026-02-28 11:02:03.28257996 +0000 UTC m=+0.974031927 container died f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 06:02:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4-merged.mount: Deactivated successfully.
Feb 28 06:02:03 np0005634017 nova_compute[243452]: 2026-02-28 11:02:03.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:03 np0005634017 podman[393694]: 2026-02-28 11:02:03.334602903 +0000 UTC m=+1.026054870 container remove f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 06:02:03 np0005634017 systemd[1]: libpod-conmon-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope: Deactivated successfully.
Feb 28 06:02:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:02:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:02:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:02:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:02:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Feb 28 06:02:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:04 np0005634017 nova_compute[243452]: 2026-02-28 11:02:04.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:02:04 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:02:05 np0005634017 nova_compute[243452]: 2026-02-28 11:02:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:05 np0005634017 nova_compute[243452]: 2026-02-28 11:02:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:05 np0005634017 nova_compute[243452]: 2026-02-28 11:02:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:02:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.8 MiB/s wr, 8 op/s
Feb 28 06:02:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:08 np0005634017 nova_compute[243452]: 2026-02-28 11:02:08.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:02:08 np0005634017 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:08 np0005634017 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:02:08 np0005634017 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:02:08 np0005634017 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:02:08 np0005634017 nova_compute[243452]: 2026-02-28 11:02:08.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:10 np0005634017 nova_compute[243452]: 2026-02-28 11:02:10.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:10 np0005634017 nova_compute[243452]: 2026-02-28 11:02:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:02:10 np0005634017 nova_compute[243452]: 2026-02-28 11:02:10.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:02:10 np0005634017 nova_compute[243452]: 2026-02-28 11:02:10.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:02:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:12 np0005634017 nova_compute[243452]: 2026-02-28 11:02:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:13 np0005634017 nova_compute[243452]: 2026-02-28 11:02:13.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:13 np0005634017 nova_compute[243452]: 2026-02-28 11:02:13.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.342 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:02:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:02:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583085115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:02:14 np0005634017 nova_compute[243452]: 2026-02-28 11:02:14.875 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.034 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.036 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3599MB free_disk=59.987355089746416GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.036 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.036 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.112 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.113 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.135 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:02:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:02:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777926260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.630 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.637 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.654 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.656 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:02:15 np0005634017 nova_compute[243452]: 2026-02-28 11:02:15.656 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:02:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:18 np0005634017 nova_compute[243452]: 2026-02-28 11:02:18.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:21 np0005634017 podman[393878]: 2026-02-28 11:02:21.127971747 +0000 UTC m=+0.057546851 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 06:02:21 np0005634017 podman[393877]: 2026-02-28 11:02:21.186946967 +0000 UTC m=+0.116664035 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:02:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:23 np0005634017 nova_compute[243452]: 2026-02-28 11:02:23.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:23 np0005634017 nova_compute[243452]: 2026-02-28 11:02:23.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:28 np0005634017 nova_compute[243452]: 2026-02-28 11:02:28.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:02:29
Feb 28 06:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms']
Feb 28 06:02:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:02:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:02:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:02:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:33 np0005634017 nova_compute[243452]: 2026-02-28 11:02:33.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:35 np0005634017 nova_compute[243452]: 2026-02-28 11:02:35.282 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:37 np0005634017 nova_compute[243452]: 2026-02-28 11:02:37.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:37 np0005634017 nova_compute[243452]: 2026-02-28 11:02:37.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 06:02:37 np0005634017 nova_compute[243452]: 2026-02-28 11:02:37.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 06:02:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:38 np0005634017 nova_compute[243452]: 2026-02-28 11:02:38.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:02:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5439019659933745e-05 of space, bias 1.0, pg target 0.004631705897980123 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033777955687355776 of space, bias 1.0, pg target 0.10133386706206733 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:02:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Feb 28 06:02:43 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Feb 28 06:02:43 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Feb 28 06:02:43 np0005634017 nova_compute[243452]: 2026-02-28 11:02:43.203 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Feb 28 06:02:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:44 np0005634017 nova_compute[243452]: 2026-02-28 11:02:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:02:44 np0005634017 nova_compute[243452]: 2026-02-28 11:02:44.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 06:02:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:02:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227995318' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:02:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:02:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227995318' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:02:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 06:02:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 06:02:48 np0005634017 nova_compute[243452]: 2026-02-28 11:02:48.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:02:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Feb 28 06:02:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Feb 28 06:02:49 np0005634017 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Feb 28 06:02:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 28 06:02:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Feb 28 06:02:52 np0005634017 podman[393920]: 2026-02-28 11:02:52.114782815 +0000 UTC m=+0.052210360 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 06:02:52 np0005634017 podman[393919]: 2026-02-28 11:02:52.134965297 +0000 UTC m=+0.074122401 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 06:02:53 np0005634017 nova_compute[243452]: 2026-02-28 11:02:53.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:02:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Feb 28 06:02:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:02:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:02:57.907 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:02:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:02:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:02:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:02:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:02:58 np0005634017 nova_compute[243452]: 2026-02-28 11:02:58.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:02:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:02:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:03:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:03:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:03 np0005634017 nova_compute[243452]: 2026-02-28 11:03:03.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 06:03:03 np0005634017 nova_compute[243452]: 2026-02-28 11:03:03.329 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 06:03:03 np0005634017 nova_compute[243452]: 2026-02-28 11:03:03.330 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 06:03:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:04 np0005634017 podman[394057]: 2026-02-28 11:03:04.114114503 +0000 UTC m=+0.086577253 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 06:03:04 np0005634017 podman[394057]: 2026-02-28 11:03:04.260354935 +0000 UTC m=+0.232817615 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:03:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:04 np0005634017 nova_compute[243452]: 2026-02-28 11:03:04.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:03:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:03:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:03:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:06 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.128275196 +0000 UTC m=+0.058711783 container create 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:03:06 np0005634017 systemd[1]: Started libpod-conmon-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope.
Feb 28 06:03:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.103565017 +0000 UTC m=+0.034001684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.214880929 +0000 UTC m=+0.145317526 container init 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.223508434 +0000 UTC m=+0.153945011 container start 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.227811945 +0000 UTC m=+0.158248542 container attach 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 06:03:06 np0005634017 focused_gould[394402]: 167 167
Feb 28 06:03:06 np0005634017 systemd[1]: libpod-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope: Deactivated successfully.
Feb 28 06:03:06 np0005634017 conmon[394402]: conmon 186f9f7c02b12b3b05bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope/container/memory.events
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.232873119 +0000 UTC m=+0.163309736 container died 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 06:03:06 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e99df1c1d5805059cd6e7b8e3c3a6d5e420706557012b7851a34593f43958f30-merged.mount: Deactivated successfully.
Feb 28 06:03:06 np0005634017 podman[394386]: 2026-02-28 11:03:06.280894189 +0000 UTC m=+0.211330806 container remove 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 06:03:06 np0005634017 systemd[1]: libpod-conmon-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope: Deactivated successfully.
Feb 28 06:03:06 np0005634017 nova_compute[243452]: 2026-02-28 11:03:06.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 06:03:06 np0005634017 nova_compute[243452]: 2026-02-28 11:03:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 06:03:06 np0005634017 nova_compute[243452]: 2026-02-28 11:03:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 06:03:06 np0005634017 nova_compute[243452]: 2026-02-28 11:03:06.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 06:03:06 np0005634017 podman[394427]: 2026-02-28 11:03:06.454082954 +0000 UTC m=+0.069458058 container create 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 06:03:06 np0005634017 systemd[1]: Started libpod-conmon-3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8.scope.
Feb 28 06:03:06 np0005634017 podman[394427]: 2026-02-28 11:03:06.424611379 +0000 UTC m=+0.039986543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:03:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:03:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:06 np0005634017 podman[394427]: 2026-02-28 11:03:06.553684395 +0000 UTC m=+0.169059479 container init 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:03:06 np0005634017 podman[394427]: 2026-02-28 11:03:06.569388259 +0000 UTC m=+0.184763333 container start 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 06:03:06 np0005634017 podman[394427]: 2026-02-28 11:03:06.573552287 +0000 UTC m=+0.188927361 container attach 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 06:03:07 np0005634017 tender_mccarthy[394443]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:03:07 np0005634017 tender_mccarthy[394443]: --> All data devices are unavailable
Feb 28 06:03:07 np0005634017 systemd[1]: libpod-3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8.scope: Deactivated successfully.
Feb 28 06:03:07 np0005634017 podman[394427]: 2026-02-28 11:03:07.060471726 +0000 UTC m=+0.675846840 container died 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 06:03:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08-merged.mount: Deactivated successfully.
Feb 28 06:03:07 np0005634017 podman[394427]: 2026-02-28 11:03:07.104813012 +0000 UTC m=+0.720188076 container remove 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:03:07 np0005634017 systemd[1]: libpod-conmon-3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8.scope: Deactivated successfully.
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.558157001 +0000 UTC m=+0.052970031 container create 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 06:03:07 np0005634017 systemd[1]: Started libpod-conmon-55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62.scope.
Feb 28 06:03:07 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.531687142 +0000 UTC m=+0.026500272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.64036572 +0000 UTC m=+0.135178790 container init 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.646683878 +0000 UTC m=+0.141496908 container start 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.649852138 +0000 UTC m=+0.144665198 container attach 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:03:07 np0005634017 beautiful_clarke[394556]: 167 167
Feb 28 06:03:07 np0005634017 systemd[1]: libpod-55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62.scope: Deactivated successfully.
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.653549613 +0000 UTC m=+0.148362653 container died 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 06:03:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-65996b36587bf5938fc687a45c38e162d4983eb030c0f0e5f99a15bb7544e2f7-merged.mount: Deactivated successfully.
Feb 28 06:03:07 np0005634017 podman[394540]: 2026-02-28 11:03:07.690990323 +0000 UTC m=+0.185803373 container remove 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 06:03:07 np0005634017 systemd[1]: libpod-conmon-55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62.scope: Deactivated successfully.
Feb 28 06:03:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:07 np0005634017 podman[394580]: 2026-02-28 11:03:07.864416615 +0000 UTC m=+0.047221118 container create 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:03:07 np0005634017 systemd[1]: Started libpod-conmon-70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f.scope.
Feb 28 06:03:07 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:03:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:07 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:07 np0005634017 podman[394580]: 2026-02-28 11:03:07.839009485 +0000 UTC m=+0.021814028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:03:07 np0005634017 podman[394580]: 2026-02-28 11:03:07.949659149 +0000 UTC m=+0.132463672 container init 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:03:07 np0005634017 podman[394580]: 2026-02-28 11:03:07.960112505 +0000 UTC m=+0.142916998 container start 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 06:03:07 np0005634017 podman[394580]: 2026-02-28 11:03:07.963533922 +0000 UTC m=+0.146338415 container attach 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]: {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:    "0": [
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:        {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "devices": [
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "/dev/loop3"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            ],
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_name": "ceph_lv0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_size": "21470642176",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "name": "ceph_lv0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "tags": {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cluster_name": "ceph",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.crush_device_class": "",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.encrypted": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.objectstore": "bluestore",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osd_id": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.type": "block",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.vdo": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.with_tpm": "0"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            },
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "type": "block",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "vg_name": "ceph_vg0"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:        }
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:    ],
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:    "1": [
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:        {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "devices": [
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "/dev/loop4"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            ],
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_name": "ceph_lv1",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_size": "21470642176",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "name": "ceph_lv1",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "tags": {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cluster_name": "ceph",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.crush_device_class": "",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.encrypted": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.objectstore": "bluestore",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osd_id": "1",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.type": "block",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.vdo": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.with_tpm": "0"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            },
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "type": "block",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "vg_name": "ceph_vg1"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:        }
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:    ],
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:    "2": [
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:        {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "devices": [
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "/dev/loop5"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            ],
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_name": "ceph_lv2",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_size": "21470642176",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "name": "ceph_lv2",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "tags": {
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.cluster_name": "ceph",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.crush_device_class": "",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.encrypted": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.objectstore": "bluestore",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osd_id": "2",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.type": "block",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.vdo": "0",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:                "ceph.with_tpm": "0"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            },
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "type": "block",
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:            "vg_name": "ceph_vg2"
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:        }
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]:    ]
Feb 28 06:03:08 np0005634017 hardcore_allen[394596]: }
Feb 28 06:03:08 np0005634017 nova_compute[243452]: 2026-02-28 11:03:08.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:03:08 np0005634017 systemd[1]: libpod-70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f.scope: Deactivated successfully.
Feb 28 06:03:08 np0005634017 podman[394580]: 2026-02-28 11:03:08.252696012 +0000 UTC m=+0.435500545 container died 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:03:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db-merged.mount: Deactivated successfully.
Feb 28 06:03:08 np0005634017 podman[394580]: 2026-02-28 11:03:08.308007338 +0000 UTC m=+0.490811871 container remove 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 06:03:08 np0005634017 systemd[1]: libpod-conmon-70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f.scope: Deactivated successfully.
Feb 28 06:03:08 np0005634017 podman[394680]: 2026-02-28 11:03:08.883557668 +0000 UTC m=+0.066962547 container create 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:03:08 np0005634017 systemd[1]: Started libpod-conmon-799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23.scope.
Feb 28 06:03:08 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:03:08 np0005634017 podman[394680]: 2026-02-28 11:03:08.854584218 +0000 UTC m=+0.037989147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:03:08 np0005634017 podman[394680]: 2026-02-28 11:03:08.957595225 +0000 UTC m=+0.141000104 container init 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:03:08 np0005634017 podman[394680]: 2026-02-28 11:03:08.963027189 +0000 UTC m=+0.146432038 container start 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 06:03:08 np0005634017 podman[394680]: 2026-02-28 11:03:08.965981173 +0000 UTC m=+0.149386032 container attach 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 06:03:08 np0005634017 nifty_engelbart[394696]: 167 167
Feb 28 06:03:08 np0005634017 systemd[1]: libpod-799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23.scope: Deactivated successfully.
Feb 28 06:03:08 np0005634017 podman[394680]: 2026-02-28 11:03:08.970425388 +0000 UTC m=+0.153830267 container died 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 06:03:08 np0005634017 systemd[1]: var-lib-containers-storage-overlay-af09bf33e3ca87c5f579dea82fd7dc303e46de2b6b2e79db0b29bcd7fdf7fccd-merged.mount: Deactivated successfully.
Feb 28 06:03:09 np0005634017 podman[394680]: 2026-02-28 11:03:09.010946826 +0000 UTC m=+0.194351675 container remove 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:03:09 np0005634017 systemd[1]: libpod-conmon-799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23.scope: Deactivated successfully.
Feb 28 06:03:09 np0005634017 podman[394718]: 2026-02-28 11:03:09.161190471 +0000 UTC m=+0.046321183 container create e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 06:03:09 np0005634017 systemd[1]: Started libpod-conmon-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope.
Feb 28 06:03:09 np0005634017 podman[394718]: 2026-02-28 11:03:09.14244325 +0000 UTC m=+0.027573722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:03:09 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:03:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:09 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:03:09 np0005634017 podman[394718]: 2026-02-28 11:03:09.266120633 +0000 UTC m=+0.151251165 container init e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:03:09 np0005634017 podman[394718]: 2026-02-28 11:03:09.273176343 +0000 UTC m=+0.158306795 container start e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:03:09 np0005634017 podman[394718]: 2026-02-28 11:03:09.276709383 +0000 UTC m=+0.161839925 container attach e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 06:03:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:09 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:09 np0005634017 lvm[394812]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:03:09 np0005634017 lvm[394813]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:03:09 np0005634017 lvm[394813]: VG ceph_vg0 finished
Feb 28 06:03:09 np0005634017 lvm[394812]: VG ceph_vg1 finished
Feb 28 06:03:09 np0005634017 lvm[394815]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:03:09 np0005634017 lvm[394815]: VG ceph_vg2 finished
Feb 28 06:03:10 np0005634017 sharp_morse[394734]: {}
Feb 28 06:03:10 np0005634017 systemd[1]: libpod-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope: Deactivated successfully.
Feb 28 06:03:10 np0005634017 systemd[1]: libpod-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope: Consumed 1.200s CPU time.
Feb 28 06:03:10 np0005634017 podman[394718]: 2026-02-28 11:03:10.057265349 +0000 UTC m=+0.942395821 container died e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 06:03:10 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97-merged.mount: Deactivated successfully.
Feb 28 06:03:10 np0005634017 podman[394718]: 2026-02-28 11:03:10.110436385 +0000 UTC m=+0.995566867 container remove e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:03:10 np0005634017 systemd[1]: libpod-conmon-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope: Deactivated successfully.
Feb 28 06:03:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:03:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:03:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:10 np0005634017 nova_compute[243452]: 2026-02-28 11:03:10.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:03:10 np0005634017 nova_compute[243452]: 2026-02-28 11:03:10.320 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:03:10 np0005634017 nova_compute[243452]: 2026-02-28 11:03:10.320 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:03:10 np0005634017 nova_compute[243452]: 2026-02-28 11:03:10.404 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:03:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:03:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1368 writes, 5913 keys, 1368 commit groups, 1.0 writes per commit group, ingest: 8.93 MB, 0.01 MB/s#012Interval WAL: 1368 writes, 1368 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     84.1      0.89              0.20        44    0.020       0      0       0.0       0.0#012  L6      1/0    9.18 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.1    167.5    142.8      2.66              1.06        43    0.062    278K    23K       0.0       0.0#012 Sum      1/0    9.18 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.1    125.5    128.1      3.55              1.26        87    0.041    278K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.4    183.7    182.9      0.25              0.15         8    0.031     33K   2043       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    167.5    142.8      2.66              1.06        43    0.062    278K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     84.5      0.88              0.20        43    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.073, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.44 GB write, 0.08 MB/s write, 0.43 GB read, 0.08 MB/s read, 3.5 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 49.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000762 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3122,47.78 MB,15.717%) FilterBlock(88,773.92 KB,0.248613%) IndexBlock(88,1.26 MB,0.415114%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 06:03:11 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:11 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:03:11 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.218 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:03:13 np0005634017 nova_compute[243452]: 2026-02-28 11:03:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:03:13 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:03:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:03:14 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445608344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:03:14 np0005634017 nova_compute[243452]: 2026-02-28 11:03:14.864 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.054 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.056 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.057 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.057 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.152 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.152 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.170 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:03:15 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:03:15 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313444922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.754 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.761 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.781 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.784 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:03:15 np0005634017 nova_compute[243452]: 2026-02-28 11:03:15.785 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:03:15 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:17 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:18 np0005634017 nova_compute[243452]: 2026-02-28 11:03:18.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:18 np0005634017 nova_compute[243452]: 2026-02-28 11:03:18.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:03:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:19 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:21 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:23 np0005634017 podman[394899]: 2026-02-28 11:03:23.126555186 +0000 UTC m=+0.058600251 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 06:03:23 np0005634017 podman[394898]: 2026-02-28 11:03:23.161245028 +0000 UTC m=+0.090971987 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 06:03:23 np0005634017 nova_compute[243452]: 2026-02-28 11:03:23.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:23 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:25 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:27 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:28 np0005634017 nova_compute[243452]: 2026-02-28 11:03:28.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:03:29
Feb 28 06:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta']
Feb 28 06:03:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:03:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:29 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:03:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:03:31 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:33 np0005634017 nova_compute[243452]: 2026-02-28 11:03:33.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:33 np0005634017 nova_compute[243452]: 2026-02-28 11:03:33.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:33 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:35 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:37 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:38 np0005634017 nova_compute[243452]: 2026-02-28 11:03:38.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:03:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:39 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:03:41 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:43 np0005634017 nova_compute[243452]: 2026-02-28 11:03:43.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:43 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.966164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276624966221, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2083, "num_deletes": 254, "total_data_size": 3479397, "memory_usage": 3532864, "flush_reason": "Manual Compaction"}
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276624983882, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3398420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61305, "largest_seqno": 63387, "table_properties": {"data_size": 3388903, "index_size": 6074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19180, "raw_average_key_size": 20, "raw_value_size": 3369872, "raw_average_value_size": 3562, "num_data_blocks": 269, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276406, "oldest_key_time": 1772276406, "file_creation_time": 1772276624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 17796 microseconds, and 8506 cpu microseconds.
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.983958) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3398420 bytes OK
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.983988) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.985550) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.985575) EVENT_LOG_v1 {"time_micros": 1772276624985566, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.985610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3470670, prev total WAL file size 3470670, number of live WAL files 2.
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.986467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3318KB)], [146(9403KB)]
Feb 28 06:03:44 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276624986527, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13027459, "oldest_snapshot_seqno": -1}
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8274 keys, 11280917 bytes, temperature: kUnknown
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276625043303, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11280917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11226377, "index_size": 32731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20741, "raw_key_size": 216058, "raw_average_key_size": 26, "raw_value_size": 11079469, "raw_average_value_size": 1339, "num_data_blocks": 1273, "num_entries": 8274, "num_filter_entries": 8274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.043717) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11280917 bytes
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.045096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.0 rd, 198.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 8796, records dropped: 522 output_compression: NoCompression
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.045127) EVENT_LOG_v1 {"time_micros": 1772276625045112, "job": 90, "event": "compaction_finished", "compaction_time_micros": 56896, "compaction_time_cpu_micros": 36752, "output_level": 6, "num_output_files": 1, "total_output_size": 11280917, "num_input_records": 8796, "num_output_records": 8274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276625045818, "job": 90, "event": "table_file_deletion", "file_number": 148}
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276625047666, "job": 90, "event": "table_file_deletion", "file_number": 146}
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.986347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037235996' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:03:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037235996' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:03:45 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:47 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:48 np0005634017 nova_compute[243452]: 2026-02-28 11:03:48.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:03:48 np0005634017 systemd-logind[815]: New session 55 of user zuul.
Feb 28 06:03:48 np0005634017 systemd[1]: Started Session 55 of User zuul.
Feb 28 06:03:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:49 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:51 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22944 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:03:51 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:51 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22946 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:03:52 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 06:03:52 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807713450' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 06:03:53 np0005634017 nova_compute[243452]: 2026-02-28 11:03:53.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:53 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:54 np0005634017 podman[395200]: 2026-02-28 11:03:54.128787624 +0000 UTC m=+0.062468930 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 06:03:54 np0005634017 podman[395199]: 2026-02-28 11:03:54.168510319 +0000 UTC m=+0.101333781 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 06:03:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:55 np0005634017 ovs-vsctl[395273]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 28 06:03:55 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:56 np0005634017 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 28 06:03:56 np0005634017 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 28 06:03:56 np0005634017 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 28 06:03:57 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: cache status {prefix=cache status} (starting...)
Feb 28 06:03:57 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: client ls {prefix=client ls} (starting...)
Feb 28 06:03:57 np0005634017 lvm[395615]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:03:57 np0005634017 lvm[395615]: VG ceph_vg1 finished
Feb 28 06:03:57 np0005634017 lvm[395620]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:03:57 np0005634017 lvm[395620]: VG ceph_vg0 finished
Feb 28 06:03:57 np0005634017 lvm[395631]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:03:57 np0005634017 lvm[395631]: VG ceph_vg2 finished
Feb 28 06:03:57 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22950 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:03:57 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: damage ls {prefix=damage ls} (starting...)
Feb 28 06:03:57 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:57 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22952 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:03:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:03:57.909 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:03:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:03:57.910 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:03:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:03:57.910 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:03:57 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump loads {prefix=dump loads} (starting...)
Feb 28 06:03:58 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 28 06:03:58 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 28 06:03:58 np0005634017 nova_compute[243452]: 2026-02-28 11:03:58.274 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:03:58 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22954 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:03:58 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 28 06:03:58 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 28 06:03:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 28 06:03:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209889848' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 28 06:03:58 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 28 06:03:58 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22958 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:03:58 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:03:58.814+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:03:58 np0005634017 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:03:58 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 28 06:03:58 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:03:58 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009662277' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:03:59 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: ops {prefix=ops} (starting...)
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1896005864' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224101874' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 28 06:03:59 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: session ls {prefix=session ls} (starting...)
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950520075' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 06:03:59 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 28 06:03:59 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824246745' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 28 06:03:59 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: status {prefix=status} (starting...)
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22972 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 06:04:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515845332' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:04:00 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22975 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 06:04:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856515785' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391597969' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544148651' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680149477' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 28 06:04:01 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 28 06:04:01 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355709642' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 06:04:02 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22986 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:02 np0005634017 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 06:04:02 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:04:02.206+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 06:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 06:04:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790314155' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 06:04:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 28 06:04:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449009649' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 28 06:04:02 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22992 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:03 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22996 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:03 np0005634017 nova_compute[243452]: 2026-02-28 11:04:03.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672d400 session 0x557686110e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x55768bece8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x557686d01340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576940e1a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 56770560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156c00 session 0x55768cc3ea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672d400 session 0x557688a10540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886d5500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x5576886f88c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576886f8fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291678 data_alloc: 218103808 data_used: 20651256
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291678 data_alloc: 218103808 data_used: 20651256
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x5576886f8540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.131345749s of 18.433906555s, submitted: 75
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 55328768 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3335842 data_alloc: 218103808 data_used: 28060920
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 55328768 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3334802 data_alloc: 218103808 data_used: 28060920
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.838995934s of 10.858511925s, submitted: 5
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289505280 unmapped: 52641792 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eacce000/0x0/0x4ffc00000, data 0x5feaa23/0x617e000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [0,0,0,0,0,0,3])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293117952 unmapped: 49029120 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x557687aa0700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x557686197880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576886f8000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3465106 data_alloc: 234881024 data_used: 29232376
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768cc4b6c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x5576886d4540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x557686d4fdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x5576886e1500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576861be8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768bc58380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47882240 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467264 data_alloc: 234881024 data_used: 29314296
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea587000/0x0/0x4ffc00000, data 0x6730a85/0x68c5000, compress 0x0/0x0/0x0, omap 0x4ccee, meta 0xed63312), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47874048 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea562000/0x0/0x4ffc00000, data 0x6754a95/0x68ea000, compress 0x0/0x0/0x0, omap 0x4ccee, meta 0xed63312), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3503796 data_alloc: 234881024 data_used: 36240632
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.089795113s of 12.524835587s, submitted: 174
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672d400 session 0x557688656a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886b3500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x5576940e0c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb377000/0x0/0x4ffc00000, data 0x52c2a23/0x5456000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336798 data_alloc: 218103808 data_used: 27573496
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb377000/0x0/0x4ffc00000, data 0x52c2a23/0x5456000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46522368 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768bb5f400 session 0x557686cadc00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768800ddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x5576869f28c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46514176 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 45801472 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3371178 data_alloc: 218103808 data_used: 26739315
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb46b000/0x0/0x4ffc00000, data 0x584ea00/0x59e1000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.071561813s of 12.451289177s, submitted: 145
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370754 data_alloc: 218103808 data_used: 26751603
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370754 data_alloc: 218103808 data_used: 26751603
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3371778 data_alloc: 218103808 data_used: 26829427
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.321892738s of 13.334897041s, submitted: 2
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x5576885bfdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686196c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768cc4afc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb466000/0x0/0x4ffc00000, data 0x5853a00/0x59e6000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219503 data_alloc: 218103808 data_used: 18690675
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 nova_compute[243452]: 2026-02-28 11:04:03.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:04:03 np0005634017 nova_compute[243452]: 2026-02-28 11:04:03.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5054 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:04:03 np0005634017 nova_compute[243452]: 2026-02-28 11:04:03.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:04:03 np0005634017 nova_compute[243452]: 2026-02-28 11:04:03.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:04:03 np0005634017 nova_compute[243452]: 2026-02-28 11:04:03.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:04:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 28 06:04:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573064748' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219503 data_alloc: 218103808 data_used: 18690675
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 33K writes, 127K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s#012Cumulative WAL: 33K writes, 12K syncs, 2.73 writes per sync, written: 0.12 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4166 writes, 15K keys, 4166 commit groups, 1.0 writes per commit group, ingest: 17.10 MB, 0.03 MB/s#012Interval WAL: 4166 writes, 1712 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219503 data_alloc: 218103808 data_used: 18690675
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.095006943s of 16.153604507s, submitted: 36
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886e16c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x5576886d41c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768d75c540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x5576886b2380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886b2540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3240437 data_alloc: 218103808 data_used: 18694673
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec508000/0x0/0x4ffc00000, data 0x47b398e/0x4944000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3240437 data_alloc: 218103808 data_used: 18694673
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557686d008c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec508000/0x0/0x4ffc00000, data 0x47b398e/0x4944000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290668544 unmapped: 51478528 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.785350800s of 10.859200478s, submitted: 12
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec508000/0x0/0x4ffc00000, data 0x47b398e/0x4944000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [0,0,0,0,1])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861a9500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576940e0e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x5576861be700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557688a10fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576861bee00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306319 data_alloc: 218103808 data_used: 19779089
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebb5c000/0x0/0x4ffc00000, data 0x515f98e/0x52f0000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557686d01880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306319 data_alloc: 218103808 data_used: 19779089
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557685cc4c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x5576886576c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686c21180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebb5b000/0x0/0x4ffc00000, data 0x515f99e/0x52f1000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291192832 unmapped: 54632448 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb333000/0x0/0x4ffc00000, data 0x597199e/0x5b03000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3424449 data_alloc: 234881024 data_used: 29714535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb333000/0x0/0x4ffc00000, data 0x597199e/0x5b03000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb333000/0x0/0x4ffc00000, data 0x597199e/0x5b03000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.092023849s of 15.370637894s, submitted: 110
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416521 data_alloc: 234881024 data_used: 29714535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb328000/0x0/0x4ffc00000, data 0x599299e/0x5b24000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293003264 unmapped: 52822016 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293052416 unmapped: 52772864 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3470731 data_alloc: 234881024 data_used: 30496871
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293052416 unmapped: 52772864 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886e01c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e90800 session 0x55768bece8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557688722540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x5576886d4540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768d090e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x55768d75ddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b78800 session 0x55768d75d340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557685cc5c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557688723a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293085184 unmapped: 54353920 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293085184 unmapped: 54353920 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea039000/0x0/0x4ffc00000, data 0x6c80a00/0x6e13000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293085184 unmapped: 54353920 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.805257797s of 10.072917938s, submitted: 107
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294174720 unmapped: 53264384 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3532258 data_alloc: 234881024 data_used: 30563397
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 53248000 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768d091340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 53248000 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557686d4e380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886b3a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 53248000 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768a538e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 53215232 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea038000/0x0/0x4ffc00000, data 0x6c80a10/0x6e14000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 53182464 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3533934 data_alloc: 234881024 data_used: 30563397
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 53182464 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 53182464 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 51240960 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea038000/0x0/0x4ffc00000, data 0x6c80a10/0x6e14000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48160768 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48160768 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598318 data_alloc: 234881024 data_used: 37816901
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48160768 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.916085243s of 12.152958870s, submitted: 133
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886f9880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x55768c314540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x55768800dc00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688546800 session 0x55768bc596c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768bc58380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 47710208 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 47710208 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 47710208 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e996b000/0x0/0x4ffc00000, data 0x734da10/0x74e1000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e996b000/0x0/0x4ffc00000, data 0x734da10/0x74e1000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649372 data_alloc: 234881024 data_used: 37816901
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9969000/0x0/0x4ffc00000, data 0x734ea10/0x74e2000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x5576880f7340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305954816 unmapped: 41484288 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3757208 data_alloc: 234881024 data_used: 41424453
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307699712 unmapped: 39739392 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 39657472 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8c78000/0x0/0x4ffc00000, data 0x8040a10/0x81d4000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3760712 data_alloc: 234881024 data_used: 41430085
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.586961746s of 13.872611046s, submitted: 108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 39542784 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 39542784 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8c78000/0x0/0x4ffc00000, data 0x8040a10/0x81d4000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 39510016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 39510016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3759256 data_alloc: 234881024 data_used: 41410117
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 39452672 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 39403520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310345728 unmapped: 37093376 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313622528 unmapped: 33816576 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8016000/0x0/0x4ffc00000, data 0x8ca1a10/0x8e35000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557686d01a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557689a01a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886e16c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 38281216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3643899 data_alloc: 234881024 data_used: 31891525
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 38273024 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.773560524s of 10.151765823s, submitted: 167
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e96c1000/0x0/0x4ffc00000, data 0x75f999e/0x778b000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768bb5f400 session 0x55768a539a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x557686c20700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886b2540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea057000/0x0/0x4ffc00000, data 0x6c6399e/0x6df5000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea057000/0x0/0x4ffc00000, data 0x6c6399e/0x6df5000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3570054 data_alloc: 234881024 data_used: 30572515
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea057000/0x0/0x4ffc00000, data 0x6c6399e/0x6df5000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557687aa1880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x5576869f28c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576861bec40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb448000/0x0/0x4ffc00000, data 0x587299e/0x5a04000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb448000/0x0/0x4ffc00000, data 0x587299e/0x5a04000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x557687b436c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576940e08c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416140 data_alloc: 218103808 data_used: 23893475
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108342171s of 10.153443336s, submitted: 23
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576861a8540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 52084736 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 52084736 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 52084736 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.209495544s of 24.226617813s, submitted: 9
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768bb5f400 session 0x5576861be700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576885be1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576861be8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686c21880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861bf6c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316624 data_alloc: 218103808 data_used: 18691539
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x4f119f0/0x50a3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306511872 unmapped: 53002240 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576861be540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x557686c20fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x55768a539880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886b2380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557685cc4c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388831 data_alloc: 218103808 data_used: 18691539
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557688a10380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395359 data_alloc: 218103808 data_used: 19748307
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x557686d01500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3443615 data_alloc: 218103808 data_used: 27915731
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.720023155s of 15.895346642s, submitted: 50
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 55607296 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526807 data_alloc: 234881024 data_used: 39512019
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305733632 unmapped: 53780480 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eaf37000/0x0/0x4ffc00000, data 0x5d839f0/0x5f15000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3527703 data_alloc: 234881024 data_used: 39708627
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eaf37000/0x0/0x4ffc00000, data 0x5d839f0/0x5f15000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eaf37000/0x0/0x4ffc00000, data 0x5d839f0/0x5f15000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.435902596s of 13.532156944s, submitted: 36
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 48521216 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea70c000/0x0/0x4ffc00000, data 0x656e9f0/0x6700000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3616161 data_alloc: 234881024 data_used: 40302547
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619447 data_alloc: 234881024 data_used: 40421331
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619447 data_alloc: 234881024 data_used: 40421331
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311083008 unmapped: 48431104 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311091200 unmapped: 48422912 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311099392 unmapped: 48414720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311099392 unmapped: 48414720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311099392 unmapped: 48414720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557686c216c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557688723a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x55768d091340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0000 session 0x55768bc596c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.026269913s of 16.309738159s, submitted: 91
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619042 data_alloc: 234881024 data_used: 40421331
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x5576861a9180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311492608 unmapped: 48021504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x5576886576c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x5576886e0fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x557688a116c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e70400 session 0x55768d75c8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557687b43180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311500800 unmapped: 48013312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x55768d090e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x5576886f9a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886b3a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3edc00 session 0x557686197880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557686cada40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557689a01dc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x55768a539c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x55768becfc00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 49455104 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e97b7000/0x0/0x4ffc00000, data 0x7501a62/0x7695000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3677473 data_alloc: 234881024 data_used: 40421331
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x5576886d41c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557687aa01c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e97b7000/0x0/0x4ffc00000, data 0x7501a62/0x7695000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686c21dc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 49414144 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x557686196a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 49414144 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71400 session 0x557686c20700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732250 data_alloc: 251658240 data_used: 47692259
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 45441024 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768ae2a400 session 0x5576886e16c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768becefc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.025026321s of 11.173868179s, submitted: 49
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686c208c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314376192 unmapped: 45137920 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314384384 unmapped: 45129728 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314384384 unmapped: 45129728 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3735366 data_alloc: 251658240 data_used: 47722979
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3769052 data_alloc: 251658240 data_used: 47731171
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 317571072 unmapped: 41943040 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 317718528 unmapped: 41795584 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8f75000/0x0/0x4ffc00000, data 0x7d3ba94/0x7ed1000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.946731567s of 10.294985771s, submitted: 54
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 41336832 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 41336832 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 41336832 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3801222 data_alloc: 251658240 data_used: 48196067
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318676992 unmapped: 40837120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8cd8000/0x0/0x4ffc00000, data 0x7fd8a94/0x816e000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [1])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318693376 unmapped: 40820736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823520 data_alloc: 251658240 data_used: 48304611
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8cbb000/0x0/0x4ffc00000, data 0x7ff3a94/0x8189000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319938560 unmapped: 39575552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319938560 unmapped: 39575552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319938560 unmapped: 39575552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299750328s of 12.471056938s, submitted: 63
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319946752 unmapped: 39567360 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886d4540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687a6e800 session 0x5576886b2c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8cbb000/0x0/0x4ffc00000, data 0x7ff3a94/0x8189000, compress 0x0/0x0/0x0, omap 0x4d4b2, meta 0xed62b4e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576861d1800 session 0x55768800d880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3665460 data_alloc: 234881024 data_used: 39023059
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 39501824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 39501824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9aa6000/0x0/0x4ffc00000, data 0x6f569f0/0x70e8000, compress 0x0/0x0/0x0, omap 0x4d4b2, meta 0xed62b4e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 39501824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x557686cad180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71400 session 0x55768800da40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9aa6000/0x0/0x4ffc00000, data 0x6f569f0/0x70e8000, compress 0x0/0x0/0x0, omap 0x4d4b2, meta 0xed62b4e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576861d1800 session 0x5576861a9c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3638348 data_alloc: 234881024 data_used: 38877518
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557688a10fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576869f3180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02c000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d603, meta 0xed629fd), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318980096 unmapped: 40534016 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768c314c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.080725670s of 10.223545074s, submitted: 71
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768cc4afc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686111500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312582144 unmapped: 46931968 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x55768cc3f880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.470396042s of 26.505001068s, submitted: 26
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686153800 session 0x5576886b2380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557687aa01c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886b2540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x5576869f2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886d5500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5d000 session 0x5576886b2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557686196fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337065 data_alloc: 218103808 data_used: 18703596
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557689a00380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x557689a01180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 56606720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360493 data_alloc: 218103808 data_used: 22635756
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.855288506s of 11.934335709s, submitted: 26
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861bf500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108800 session 0x5576886f9880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886d5c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305995 data_alloc: 218103808 data_used: 18703596
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576869f28c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x557686bfaa80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557688a116c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694e400 session 0x557688a11340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768a538700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686d4f500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x55768becea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768d0908c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43000 session 0x55768a539500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352975 data_alloc: 218103808 data_used: 18703596
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352975 data_alloc: 218103808 data_used: 18703596
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352975 data_alloc: 218103808 data_used: 18703596
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.932571411s of 18.043914795s, submitted: 31
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886b3180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686cac1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x557686cacc40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861116c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5a400 session 0x557686197180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886f81c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb8e6000/0x0/0x4ffc00000, data 0x53d499e/0x5566000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768becea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3432290 data_alloc: 218103808 data_used: 25511199
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb8e6000/0x0/0x4ffc00000, data 0x53d499e/0x5566000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb8e6000/0x0/0x4ffc00000, data 0x53d499e/0x5566000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x55768a5396c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557687b42a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768a5388c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3402384 data_alloc: 218103808 data_used: 25511199
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe76000/0x0/0x4ffc00000, data 0x4e4499e/0x4fd6000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe76000/0x0/0x4ffc00000, data 0x4e4499e/0x4fd6000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.672096252s of 14.787719727s, submitted: 13
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3407832 data_alloc: 218103808 data_used: 25570591
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe4a000/0x0/0x4ffc00000, data 0x4e7099e/0x5002000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [0,0,0,0,0,0,2])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 58515456 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 58441728 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3445142 data_alloc: 218103808 data_used: 25646367
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea87a000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3445142 data_alloc: 218103808 data_used: 25646367
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea87a000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea87a000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886b2540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768800d880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x557686111500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686d00000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.676843643s of 14.048244476s, submitted: 29
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557688a10c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576861d2c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3fdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557687b43340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x55768bece540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3464812 data_alloc: 218103808 data_used: 25646367
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea533000/0x0/0x4ffc00000, data 0x55e799e/0x5779000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686caddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3464812 data_alloc: 218103808 data_used: 25646367
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576885bfdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688657500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886f96c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea533000/0x0/0x4ffc00000, data 0x55e799e/0x5779000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576886d5dc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686157000 session 0x55768c314c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886f9500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886b2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686c208c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2d000/0x0/0x4ffc00000, data 0x5deca00/0x5f7f000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521697 data_alloc: 218103808 data_used: 25646367
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2d000/0x0/0x4ffc00000, data 0x5deca00/0x5f7f000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861bee00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576886f8700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3e700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.300589561s of 15.582851410s, submitted: 46
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576880f7340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3544353 data_alloc: 218103808 data_used: 28884767
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2b000/0x0/0x4ffc00000, data 0x5deca33/0x5f81000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0xff02561), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580761 data_alloc: 234881024 data_used: 31896351
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 54140928 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2b000/0x0/0x4ffc00000, data 0x5deca33/0x5f81000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0xff02561), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617591 data_alloc: 234881024 data_used: 31929119
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.965996742s of 12.156393051s, submitted: 50
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e973d000/0x0/0x4ffc00000, data 0x63daa33/0x656f000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0xff02561), peers [0,1] op hist [0,3,1])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 50372608 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3663751 data_alloc: 234881024 data_used: 32969503
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e7e0b000/0x0/0x4ffc00000, data 0x6b6ba33/0x6d00000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0x110a2561), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x55768bece8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557686cad500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3591439 data_alloc: 234881024 data_used: 29702431
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768cc4b880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 49971200 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 49971200 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.119799614s of 10.389805794s, submitted: 107
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768becea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576861a8a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309551104 unmapped: 49963008 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3fa40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e96d9000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4dc8e, meta 0x110a2372), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305561600 unmapped: 53952512 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768c315500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x55768a538700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861a9c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e96d9000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4dc8e, meta 0x110a2372), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768c315880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576861be8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576861bf500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768a538c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.794982910s of 23.933214188s, submitted: 69
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686c20700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557688a10c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768d75c1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576885bea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x5576886b2000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e4e000/0x0/0x4ffc00000, data 0x4b2ba00/0x4cbe000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768800d500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374899 data_alloc: 218103808 data_used: 18707594
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768800ddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e4e000/0x0/0x4ffc00000, data 0x4b2ba00/0x4cbe000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576869f2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686d4fdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4b4fa33/0x4ce4000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381967 data_alloc: 218103808 data_used: 18708122
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4b4fa33/0x4ce4000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405647 data_alloc: 218103808 data_used: 22645914
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4b4fa33/0x4ce4000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.273567200s of 16.399780273s, submitted: 44
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557688a11340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x5576885be1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557687b436c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576886d5a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686d01a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411501 data_alloc: 218103808 data_used: 22650010
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x4bd2a33/0x4d67000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308609024 unmapped: 50905088 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9ac8000/0x0/0x4ffc00000, data 0x4eafa33/0x5044000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686cac540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557688a10000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686cad6c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768a538380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557688a11340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464223 data_alloc: 218103808 data_used: 23882906
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557687b436c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e989b000/0x0/0x4ffc00000, data 0x50dba56/0x5271000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463048 data_alloc: 218103808 data_used: 24411290
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.976218224s of 14.206614494s, submitted: 74
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557687aa0a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e989b000/0x0/0x4ffc00000, data 0x50dba56/0x5271000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3471599 data_alloc: 218103808 data_used: 25476250
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e989b000/0x0/0x4ffc00000, data 0x50dba56/0x5271000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3487557 data_alloc: 218103808 data_used: 25488538
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308576256 unmapped: 50937856 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308584448 unmapped: 50929664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.962013245s of 11.024630547s, submitted: 17
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 50233344 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 47792128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311746560 unmapped: 47767552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x5849a56/0x59df000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526647 data_alloc: 218103808 data_used: 25730202
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311746560 unmapped: 47767552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311812096 unmapped: 47702016 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311812096 unmapped: 47702016 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 47693824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 47693824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x5849a56/0x59df000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3525199 data_alloc: 218103808 data_used: 25730202
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912a000/0x0/0x4ffc00000, data 0x584ca56/0x59e2000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x55768bece8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576940e01c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x557686d00000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f0000 session 0x55768cc3fdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.120762825s of 11.302170753s, submitted: 55
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576886b2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576880f7340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x55768cc4b880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x557687aa0c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912a000/0x0/0x4ffc00000, data 0x584ca56/0x59e2000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x55768cc4a540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590138 data_alloc: 218103808 data_used: 25730202
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e877e000/0x0/0x4ffc00000, data 0x61f7a66/0x638e000, compress 0x0/0x0/0x0, omap 0x4e752, meta 0x110a18ae), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576861bea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686196380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861a8a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 54861824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x557688a10000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533217 data_alloc: 218103808 data_used: 24427674
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 54861824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 54853632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8e73000/0x0/0x4ffc00000, data 0x5b02a66/0x5c99000, compress 0x0/0x0/0x0, omap 0x4e941, meta 0x110a16bf), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 55115776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8e73000/0x0/0x4ffc00000, data 0x5b02a66/0x5c99000, compress 0x0/0x0/0x0, omap 0x4e941, meta 0x110a16bf), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 55115776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.426188469s of 10.563802719s, submitted: 50
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768d091180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576885bf880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 55115776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861a9c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509476 data_alloc: 234881024 data_used: 29171354
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9671000/0x0/0x4ffc00000, data 0x53059c1/0x5498000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9671000/0x0/0x4ffc00000, data 0x53059c1/0x5498000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509476 data_alloc: 234881024 data_used: 29171354
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9671000/0x0/0x4ffc00000, data 0x53059c1/0x5498000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8aad000/0x0/0x4ffc00000, data 0x5ec69c1/0x6059000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590632 data_alloc: 234881024 data_used: 29376021
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8a04000/0x0/0x4ffc00000, data 0x5f6d9c1/0x6100000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.913797379s of 15.265137672s, submitted: 144
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e89eb000/0x0/0x4ffc00000, data 0x5f8e9c1/0x6121000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3586752 data_alloc: 234881024 data_used: 29376021
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 49618944 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x5576886d5a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x557686197c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e89eb000/0x0/0x4ffc00000, data 0x5f8e9c1/0x6121000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3586352 data_alloc: 234881024 data_used: 29384213
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 49618944 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557685cc4540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307257344 unmapped: 55934976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea020000/0x0/0x4ffc00000, data 0x495a9b1/0x4aec000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307257344 unmapped: 55934976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768becfa40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557685cc41c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x55768bece1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576885be540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768800ddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419612 data_alloc: 218103808 data_used: 19256325
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x4b8a9b1/0x4d1c000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686d4ec40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.198194504s of 12.342607498s, submitted: 38
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768c315880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861d3c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557689a016c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686c20c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395284 data_alloc: 218103808 data_used: 18715653
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.364843369s of 14.394786835s, submitted: 12
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310943744 unmapped: 52248576 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9930000/0x0/0x4ffc00000, data 0x503b98e/0x51cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476420 data_alloc: 218103808 data_used: 21910498
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476692 data_alloc: 218103808 data_used: 21918690
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576940e1180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768800d500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.993345261s of 14.355683327s, submitted: 90
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385972 data_alloc: 218103808 data_used: 18719714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x5576886b2c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9974000/0x0/0x4ffc00000, data 0x464e98e/0x47df000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383976 data_alloc: 218103808 data_used: 18719714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311476224 unmapped: 51716096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557687b436c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686197180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576885bfa40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768c315a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557687aa1340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.629673004s of 19.488275528s, submitted: 12
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d5180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428832 data_alloc: 218103808 data_used: 18719714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.102824211s of 13.107093811s, submitted: 1
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.839479446s of 14.852085114s, submitted: 2
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768afca380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576886b3880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886e16c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x5576869f3340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x557687aa0a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3501258 data_alloc: 218103808 data_used: 26991586
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e99aa000/0x0/0x4ffc00000, data 0x4fd198e/0x5162000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576887228c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504923 data_alloc: 218103808 data_used: 26992098
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311304192 unmapped: 51888128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 51200000 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.325185776s of 19.358276367s, submitted: 15
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313982976 unmapped: 49209344 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e934f000/0x0/0x4ffc00000, data 0x56249b1/0x57b6000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3563869 data_alloc: 218103808 data_used: 27722722
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9314000/0x0/0x4ffc00000, data 0x56579b1/0x57e9000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3558653 data_alloc: 218103808 data_used: 27722722
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686111500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x55768457b340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165890694s of 11.501284599s, submitted: 68
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686d01c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b46000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3495179 data_alloc: 218103808 data_used: 26991586
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768d090000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768becfdc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886576c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768bc58380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768a539dc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3f880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x55768a5381c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.512207031s of 37.553077698s, submitted: 22
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768d75ddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d5a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b000 session 0x5576861bea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576880f7340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576880f7a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441190 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557686d00000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x55768a538380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557688a11340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886b2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4200.2 total, 600.0 interval
                                              Cumulative writes: 36K writes, 142K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                              Cumulative WAL: 36K writes, 13K syncs, 2.71 writes per sync, written: 0.14 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 3550 writes, 14K keys, 3550 commit groups, 1.0 writes per commit group, ingest: 18.05 MB, 0.03 MB/s
                                              Interval WAL: 3550 writes, 1394 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.287157059s of 17.387201309s, submitted: 32
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312320000 unmapped: 50872320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519878 data_alloc: 218103808 data_used: 23937983
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f4000/0x0/0x4ffc00000, data 0x53859f0/0x5517000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.962099075s of 13.174759865s, submitted: 92
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x55768becf340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d41c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686cadc00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71c00 session 0x55768bc58000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a10fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538984 data_alloc: 218103808 data_used: 23942079
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x5576861a88c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x557688a11500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x55768becf180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312688640 unmapped: 50503680 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.755399704s of 16.796745300s, submitted: 7
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314982400 unmapped: 48209920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e909c000/0x0/0x4ffc00000, data 0x58dda00/0x5a70000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314310656 unmapped: 48881664 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580546 data_alloc: 218103808 data_used: 26796991
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580562 data_alloc: 218103808 data_used: 26796991
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314335232 unmapped: 48857088 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557686c20fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768a539c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580082 data_alloc: 218103808 data_used: 26858431
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200057983s of 12.832662582s, submitted: 39
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886f8000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x5388a00/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314359808 unmapped: 48832512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314408960 unmapped: 48783360 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526151 data_alloc: 218103808 data_used: 24003519
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686197c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557688656c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313155584 unmapped: 50036736 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557689a016c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576940e01c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768c315a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688723180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557687b43340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.065399170s of 39.311141968s, submitted: 108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc4afc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886b2540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557687aa1c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886e1c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576885be700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea31b000/0x0/0x4ffc00000, data 0x466098e/0x47f1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a108c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686cac8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 49864704 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688656a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313647104 unmapped: 49545216 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.853551865s of 17.863443375s, submitted: 3
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 49528832 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1fc000/0x0/0x4ffc00000, data 0x477e99e/0x4910000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442903 data_alloc: 218103808 data_used: 18856797
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4800 session 0x55768bece1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686111500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576869f3180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576886e16c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.000139236s of 17.048984528s, submitted: 20
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314818560 unmapped: 48373760 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886b2e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576886b2c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576885bfa40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768c315180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576861d3c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686d4ea80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313761792 unmapped: 51044352 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.758279800s of 17.839538574s, submitted: 13
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316399616 unmapped: 48406528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 46219264 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3559463 data_alloc: 218103808 data_used: 25188701
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9747000/0x0/0x4ffc00000, data 0x522d99e/0x53bf000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3556031 data_alloc: 218103808 data_used: 25188701
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x5576880f7340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.849447250s of 12.123903275s, submitted: 71
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576861be8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452972 data_alloc: 218103808 data_used: 18856797
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x55768d090380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x5576869f3340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688656c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a108c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686c21880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688a116c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688722c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.027751923s of 42.075328827s, submitted: 18
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a10000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686cacfc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688657a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688a10540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686d00fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768d090700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x55768800d6c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x5576886e0c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.067780495s of 10.190921783s, submitted: 22
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.957121849s of 10.000660896s, submitted: 20
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [0,1,0,0,0,8])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318406656 unmapped: 46399488 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599328 data_alloc: 218103808 data_used: 25293661
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3595064 data_alloc: 218103808 data_used: 25293661
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.003738 took=0.000053s
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949523926s of 10.245022774s, submitted: 88
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x55768cc3fc00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576861a81c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3596956 data_alloc: 218103808 data_used: 25334621
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557685cc4c40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557686155400 session 0x557687b42700
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768c073800 session 0x55768cc3f880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x5576869e1400 session 0x55768c3148c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 323772416 unmapped: 41033728 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768da43400 session 0x55768800ddc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557685ffa800 session 0x55768457b340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 43384832 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 285 handle_osd_map epochs [286,287], i have 285, src has [1,287]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557686155400 session 0x557686cac8c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 43343872 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x5576869e1400 session 0x5576886d4000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768c073800 session 0x5576886f8e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768d0e9400 session 0x5576861116c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3761780 data_alloc: 234881024 data_used: 35955647
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 43261952 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.589632988s of 10.994457245s, submitted: 34
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557685ffa800 session 0x557687aa1340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574579 data_alloc: 218103808 data_used: 18723775
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [287,288], i have 288, src has [1,288]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fde000/0x0/0x4ffc00000, data 0x59957b3/0x5b2c000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576885bfa40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x557686196a80
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c035800 session 0x5576940e1340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578137 data_alloc: 218103808 data_used: 18727934
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x5576886d5a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x5576886d41c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155000 session 0x55768a5388c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x55768d090e00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 47890432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x557689a00fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768a538380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x55768cc4b180
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557687e8bc00 session 0x557686d01880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.653596878s of 11.167081833s, submitted: 54
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557685cc5dc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e73eb000/0x0/0x4ffc00000, data 0x75887e6/0x7721000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789035 data_alloc: 234881024 data_used: 29690303
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 289 ms_handle_reset con 0x55768c073800 session 0x55768d090380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687671 data_alloc: 234881024 data_used: 21415871
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8730000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.615905762s of 11.692889214s, submitted: 45
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699421 data_alloc: 234881024 data_used: 22761407
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318464000 unmapped: 78028800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3716445 data_alloc: 234881024 data_used: 24364991
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686110000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x557688a108c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768672c000 session 0x5576886f8000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.560314178s of 47.608993530s, submitted: 31
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x5576886e1c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310362112 unmapped: 86130688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686cac1c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768bece380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x5576880f7a40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: mgrc ms_handle_reset ms_handle_reset con 0x55768bb5f000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: mgrc handle_mgr_configure stats_period=5
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576886f8000
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x557687aa1340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523018 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310378496 unmapped: 86114304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e9a78000/0x0/0x4ffc00000, data 0x4efadd0/0x5094000, compress 0x0/0x0/0x0, omap 0x4fe69, meta 0x110a0197), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557686d00380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576886b2fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478714 data_alloc: 218103808 data_used: 9620380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea302000/0x0/0x4ffc00000, data 0x4670dd0/0x480a000, compress 0x0/0x0/0x0, omap 0x4fed4, meta 0x110a012c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576861116c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.864528656s of 39.085803986s, submitted: 59
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.823881149s of 30.865018845s, submitted: 24
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 291 ms_handle_reset con 0x5576869e1400 session 0x55768cc4a540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 ms_handle_reset con 0x55768d0abc00 session 0x557686d4f880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315342848 unmapped: 81149952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327555 data_alloc: 234881024 data_used: 17875868
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 128.169219971s of 128.219329834s, submitted: 33
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315392000 unmapped: 81100800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 293 ms_handle_reset con 0x557685ffa800 session 0x557688a10540
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217201 data_alloc: 218103808 data_used: 4309916
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed31f000/0x0/0x4ffc00000, data 0x1651ff9/0x17ed000, compress 0x0/0x0/0x0, omap 0x50d73, meta 0x1109f28d), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 294 ms_handle_reset con 0x557686155400 session 0x5576861a81c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.580028534s of 12.689705849s, submitted: 49
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102328 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 93306880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 93290496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 ms_handle_reset con 0x557686258000 session 0x55768d75c380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1533 writes, 6281 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s#012Interval WAL: 1533 writes, 657 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 114.041397095s of 114.333473206s, submitted: 46
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302129152 unmapped: 94363648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 297 ms_handle_reset con 0x5576869e1400 session 0x55768bc58380
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302153728 unmapped: 94339072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 297 heartbeat osd_stat(store_statfs(0x4eaf80000/0x0/0x4ffc00000, data 0x39e8dc6/0x3b8a000, compress 0x0/0x0/0x0, omap 0x51b9b, meta 0x1109e465), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 94314496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 ms_handle_reset con 0x55768b3c4400 session 0x5576886b28c0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.552754402s of 13.607363701s, submitted: 19
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432744 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302268416 unmapped: 94224384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.988111496s of 22.149505615s, submitted: 90
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 299 ms_handle_reset con 0x557685ffa800 session 0x5576886f8fc0
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eab09000/0x0/0x4ffc00000, data 0x3e5c562/0x4001000, compress 0x0/0x0/0x0, omap 0x51d59, meta 0x1109e2a7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ee309000/0x0/0x4ffc00000, data 0x65c53c/0x800000, compress 0x0/0x0/0x0, omap 0x51dc7, meta 0x1109e239), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299532288 unmapped: 96960512 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299679744 unmapped: 96813056 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 96763904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 96616448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 96526336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 96411648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300097536 unmapped: 96395264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 366.777954102s of 366.843353271s, submitted: 49
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 301 ms_handle_reset con 0x557686155400 session 0x55768c90da40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee306000/0x0/0x4ffc00000, data 0x65fb78/0x804000, compress 0x0/0x0/0x0, omap 0x52b13, meta 0x1109d4ed), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3160433 data_alloc: 218103808 data_used: 189267
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 302 ms_handle_reset con 0x557686258000 session 0x557686cacc40
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182875 data_alloc: 218103808 data_used: 189523
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 303 heartbeat osd_stat(store_statfs(0x4edf72000/0x0/0x4ffc00000, data 0x9f3216/0xb9a000, compress 0x0/0x0/0x0, omap 0x52da5, meta 0x1109d25b), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 304 ms_handle_reset con 0x5576869e1400 session 0x557692b07340
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.334288597s of 11.534142494s, submitted: 81
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.789546967s of 47.797679901s, submitted: 15
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 ms_handle_reset con 0x557687f7fc00 session 0x55768b427880
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151061 data_alloc: 218103808 data_used: 190108
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 96141312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 96133120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 96083968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 95911936 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}'
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'config show' '{prefix=config show}'
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:03 np0005634017 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}'
Feb 28 06:04:03 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 06:04:03 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22998 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 06:04:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 06:04:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 06:04:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988020636' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 06:04:03 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:04:04 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653001906' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 06:04:04 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23006 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:04 np0005634017 nova_compute[243452]: 2026-02-28 11:04:04.787 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:04:04 np0005634017 nova_compute[243452]: 2026-02-28 11:04:04.787 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:04:04 np0005634017 nova_compute[243452]: 2026-02-28 11:04:04.787 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 06:04:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115761211' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 06:04:05 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23010 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 06:04:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454852689' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 06:04:05 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23014 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:04:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 06:04:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028395648' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 06:04:05 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23018 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:04:05 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:04:06 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23022 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:04:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 28 06:04:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209041268' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 28 06:04:06 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23024 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:04:07 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:04:07 np0005634017 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:04:07 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:04:07.167+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:04:07 np0005634017 nova_compute[243452]: 2026-02-28 11:04:07.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:04:07 np0005634017 nova_compute[243452]: 2026-02-28 11:04:07.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:04:07 np0005634017 nova_compute[243452]: 2026-02-28 11:04:07.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:04:07 np0005634017 nova_compute[243452]: 2026-02-28 11:04:07.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:04:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 28 06:04:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164528576' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 28 06:04:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 28 06:04:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150463311' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62087168 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865563 data_alloc: 251658240 data_used: 36185480
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332832768 unmapped: 62046208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8ead000/0x0/0x4ffc00000, data 0x5ac7c4a/0x5c5f000, compress 0x0/0x0/0x0, omap 0x6719b, meta 0x11088e65), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a38400 session 0x562fffc001c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3ac00 session 0x56300273f500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332832768 unmapped: 62046208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47000 session 0x5630007ffdc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 58793984 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3815252 data_alloc: 251658240 data_used: 32440696
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9639000/0x0/0x4ffc00000, data 0x5327c07/0x54bc000, compress 0x0/0x0/0x0, omap 0x67613, meta 0x110889ed), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9639000/0x0/0x4ffc00000, data 0x5327c07/0x54bc000, compress 0x0/0x0/0x0, omap 0x67613, meta 0x110889ed), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.976147652s of 15.230655670s, submitted: 124
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336502784 unmapped: 58376192 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3815268 data_alloc: 251658240 data_used: 32440696
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630052e4000 session 0x563000340e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffadf340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 59047936 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a38400 session 0x56300273ec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb0c0000/0x0/0x4ffc00000, data 0x38b8bd4/0x3a4b000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560608 data_alloc: 234881024 data_used: 15013701
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c58c00 session 0x56300273ea80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578346 data_alloc: 234881024 data_used: 15017762
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578346 data_alloc: 234881024 data_used: 15017762
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x562fffba1a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfd400 session 0x562fffadefc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffba1c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.832296371s of 18.060344696s, submitted: 51
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a38400 session 0x56300238b180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae70000/0x0/0x4ffc00000, data 0x3b07c07/0x3c9c000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599110 data_alloc: 234881024 data_used: 17189666
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae70000/0x0/0x4ffc00000, data 0x3b07c07/0x3c9c000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599110 data_alloc: 234881024 data_used: 17189666
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae70000/0x0/0x4ffc00000, data 0x3b07c07/0x3c9c000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.238119125s of 11.266804695s, submitted: 12
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323428352 unmapped: 71450624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006905c00 session 0x563002011180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d8c00 session 0x563002c34e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x563002574fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x5630025756c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563002224700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323960832 unmapped: 70918144 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730344 data_alloc: 234881024 data_used: 17993506
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9c85000/0x0/0x4ffc00000, data 0x4ceac07/0x4e7f000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006906c00 session 0x56300273f340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 71507968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727581 data_alloc: 234881024 data_used: 18001698
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323289088 unmapped: 71589888 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323076096 unmapped: 71802880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9c8d000/0x0/0x4ffc00000, data 0x4ceac07/0x4e7f000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323076096 unmapped: 71802880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323076096 unmapped: 71802880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.760303497s of 11.015823364s, submitted: 72
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9c8d000/0x0/0x4ffc00000, data 0x4ceac07/0x4e7f000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323092480 unmapped: 71786496 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3781937 data_alloc: 234881024 data_used: 26610466
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323092480 unmapped: 71786496 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c58c00 session 0x562fffadf180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x56300222da40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563002472380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea769000/0x0/0x4ffc00000, data 0x420fbd4/0x43a2000, compress 0x0/0x0/0x0, omap 0x67c44, meta 0x110883bc), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686202 data_alloc: 234881024 data_used: 23556898
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abc800 session 0x562fffa6b340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9000 session 0x5630015648c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea76a000/0x0/0x4ffc00000, data 0x420fbd4/0x43a2000, compress 0x0/0x0/0x0, omap 0x67c44, meta 0x110883bc), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46400 session 0x563002c35c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323477504 unmapped: 71401472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325033984 unmapped: 69844992 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325279744 unmapped: 69599232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafc9000/0x0/0x4ffc00000, data 0x39a0bc4/0x3b32000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325279744 unmapped: 69599232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575805 data_alloc: 234881024 data_used: 14628591
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.512154579s of 13.937977791s, submitted: 211
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575821 data_alloc: 234881024 data_used: 14628591
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575821 data_alloc: 234881024 data_used: 14628591
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576077 data_alloc: 234881024 data_used: 14636783
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3f400 session 0x562fffc01880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.207843781s of 15.209612846s, submitted: 1
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x563000340e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3601.6 total, 600.0 interval#012Cumulative writes: 41K writes, 159K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 41K writes, 15K syncs, 2.72 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4949 writes, 18K keys, 4949 commit groups, 1.0 writes per commit group, ingest: 19.96 MB, 0.03 MB/s#012Interval WAL: 4949 writes, 2024 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001d9bc00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413195 data_alloc: 218103808 data_used: 4908783
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413195 data_alloc: 218103808 data_used: 4908783
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc ms_handle_reset ms_handle_reset con 0x563004ae8c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc handle_mgr_configure stats_period=5
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6bc00 session 0x563000340700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006904c00 session 0x562fffb87340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x56300205c000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413195 data_alloc: 218103808 data_used: 4908783
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfc000 session 0x5630022a8a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9400 session 0x5630022f3c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630043d0c00 session 0x563002147500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffca1c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.193162918s of 14.241909027s, submitted: 35
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x563002574000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630043d0c00 session 0x563002a6a540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9400 session 0x56300238a380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfc000 session 0x5630029aca80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffc01dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517492 data_alloc: 218103808 data_used: 4908783
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x377fc35/0x3913000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x377fc35/0x3913000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300212b000 session 0x563001da01c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c400 session 0x562fffa6b500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517492 data_alloc: 218103808 data_used: 4908783
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630015e6380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x563002147a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314499072 unmapped: 80379904 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314499072 unmapped: 80379904 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb1d3000/0x0/0x4ffc00000, data 0x37a3c67/0x3939000, compress 0x0/0x0/0x0, omap 0x686b9, meta 0x11087947), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 318324736 unmapped: 76554240 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x563001564fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c400 session 0x562fffadefc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x5630026de700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x5630004fe1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.702307701s of 10.852206230s, submitted: 51
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8800 session 0x5630029ac000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630022f3c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x563002472380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563001564fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c400 session 0x56300205ce00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658083 data_alloc: 234881024 data_used: 21319422
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eabc2000/0x0/0x4ffc00000, data 0x3db3c77/0x3f4a000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 75464704 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 75464704 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658083 data_alloc: 234881024 data_used: 21319422
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eabc2000/0x0/0x4ffc00000, data 0x3db3c77/0x3f4a000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 75464704 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea2c00 session 0x563001da1500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325664768 unmapped: 69214208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 67796992 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804968 data_alloc: 234881024 data_used: 29722382
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9dc2000/0x0/0x4ffc00000, data 0x4bb1c9a/0x4d49000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9dc2000/0x0/0x4ffc00000, data 0x4bb1c9a/0x4d49000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804968 data_alloc: 234881024 data_used: 29722382
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x56300238bc00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9dc2000/0x0/0x4ffc00000, data 0x4bb1c9a/0x4d49000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.093086243s of 19.357521057s, submitted: 142
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328990720 unmapped: 65888256 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328728576 unmapped: 66150400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9a09000/0x0/0x4ffc00000, data 0x4f6bc9a/0x5103000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328728576 unmapped: 66150400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840382 data_alloc: 234881024 data_used: 29837070
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006851400 session 0x5630029acfc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6dc00 session 0x563002c42700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x5630029ad340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328728576 unmapped: 66150400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6b400 session 0x5630008b2c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c59c00 session 0x5630008c4e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c59c00 session 0x56300238a000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x56300273e380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6b400 session 0x5630022f28c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6dc00 session 0x563001d9aa80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328941568 unmapped: 65937408 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328941568 unmapped: 65937408 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328949760 unmapped: 65929216 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3867883 data_alloc: 234881024 data_used: 29837730
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.789917946s of 10.007776260s, submitted: 107
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aab000 session 0x562fffadfa40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329056256 unmapped: 65822720 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 65814528 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870104 data_alloc: 234881024 data_used: 29842354
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 65814528 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 65814528 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9769000/0x0/0x4ffc00000, data 0x5209ccd/0x53a3000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3878936 data_alloc: 234881024 data_used: 31212978
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c800 session 0x563002147880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaa000 session 0x5630022a8380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x563002225dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x5630008b3500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075000 session 0x563002a6b180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8981000/0x0/0x4ffc00000, data 0x5ff1ccd/0x618b000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.100229263s of 11.311615944s, submitted: 103
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958365 data_alloc: 234881024 data_used: 31212978
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de3c00 session 0x563000521880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8981000/0x0/0x4ffc00000, data 0x5ff1ccd/0x618b000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630043d0c00 session 0x563002a6a8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004074400 session 0x56300238a1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300645f400 session 0x5630007ff880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330776576 unmapped: 64102400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332668928 unmapped: 62210048 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4077214 data_alloc: 251658240 data_used: 41646549
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 54468608 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x63b6d00/0x6552000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.664144516s of 10.839223862s, submitted: 88
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4077558 data_alloc: 251658240 data_used: 41650645
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 54394880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 54370304 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b8000/0x0/0x4ffc00000, data 0x63b7d00/0x6553000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b8000/0x0/0x4ffc00000, data 0x63b7d00/0x6553000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b8000/0x0/0x4ffc00000, data 0x63b7d00/0x6553000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4072086 data_alloc: 251658240 data_used: 41650645
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 54157312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 52305920 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8090000/0x0/0x4ffc00000, data 0x68d8d00/0x6a74000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 52174848 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x5630026dea80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x563002574380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 52068352 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4053236 data_alloc: 251658240 data_used: 39771589
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 52060160 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8689000/0x0/0x4ffc00000, data 0x628dccd/0x6427000, compress 0x0/0x0/0x0, omap 0x68928, meta 0x110876d8), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 52027392 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 52027392 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205799103s of 12.442360878s, submitted: 96
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffba1500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300212b000 session 0x562fffba1c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630045adc00 session 0x563002aa0e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea5b2000/0x0/0x4ffc00000, data 0x43c2c2a/0x4558000, compress 0x0/0x0/0x0, omap 0x68d90, meta 0x11087270), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760600 data_alloc: 234881024 data_used: 21230995
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2000 session 0x562fffa6b180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9800 session 0x562fffb86000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001f03500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea728000/0x0/0x4ffc00000, data 0x30afbf7/0x3243000, compress 0x0/0x0/0x0, omap 0x69139, meta 0x12226ec7), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630025756c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x563002c35dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567825 data_alloc: 218103808 data_used: 6247659
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2000 session 0x5630022a9180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.625631332s of 32.128654480s, submitted: 153
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f17400 session 0x562fffadefc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520287 data_alloc: 218103808 data_used: 4925624
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 69181440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea974000/0x0/0x4ffc00000, data 0x2e66bc4/0x2ff8000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 69181440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 69181440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325705728 unmapped: 69173248 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8ec00 session 0x563002146540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8000 session 0x56300273f6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6800 session 0x5630026de380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8400 session 0x562fffca1880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8ec00 session 0x5630022f28c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8000 session 0x562fffba01c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f17400 session 0x563002147880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325746688 unmapped: 69132288 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6800 session 0x563001d9aa80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329ec00 session 0x56300261e8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568403 data_alloc: 218103808 data_used: 4925624
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325746688 unmapped: 69132288 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325746688 unmapped: 69132288 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea205000/0x0/0x4ffc00000, data 0x35d4bd4/0x3767000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325754880 unmapped: 69124096 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x563001da1340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea205000/0x0/0x4ffc00000, data 0x35d4bd4/0x3767000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325754880 unmapped: 69124096 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325754880 unmapped: 69124096 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590331 data_alloc: 218103808 data_used: 8167608
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfc000 session 0x562fffba0a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1e1000/0x0/0x4ffc00000, data 0x35f8bd4/0x378b000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98000 session 0x562fffba0e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1e1000/0x0/0x4ffc00000, data 0x35f8bd4/0x378b000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689400 session 0x5630003e7c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.050533295s of 15.176102638s, submitted: 24
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf8c00 session 0x5630007fec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615734 data_alloc: 218103808 data_used: 11878584
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1df000/0x0/0x4ffc00000, data 0x35f8c07/0x378d000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324739072 unmapped: 70139904 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721560 data_alloc: 234881024 data_used: 20186312
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328998912 unmapped: 65880064 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1df000/0x0/0x4ffc00000, data 0x35f8c07/0x378d000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329244672 unmapped: 65634304 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e985f000/0x0/0x4ffc00000, data 0x3f70c07/0x4105000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745308 data_alloc: 234881024 data_used: 20650696
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.288350105s of 12.520104408s, submitted: 103
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9864000/0x0/0x4ffc00000, data 0x3f73c07/0x4108000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 64331776 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62087168 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796088 data_alloc: 234881024 data_used: 21364424
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8067000/0x0/0x4ffc00000, data 0x45a9c07/0x473e000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8067000/0x0/0x4ffc00000, data 0x45a9c07/0x473e000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796088 data_alloc: 234881024 data_used: 21364424
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334446592 unmapped: 60432384 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808b000/0x0/0x4ffc00000, data 0x45acc07/0x4741000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3789200 data_alloc: 234881024 data_used: 21380808
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808b000/0x0/0x4ffc00000, data 0x45acc07/0x4741000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808b000/0x0/0x4ffc00000, data 0x45acc07/0x4741000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.640681267s of 14.842003822s, submitted: 88
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 60407808 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x56300205d180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808a000/0x0/0x4ffc00000, data 0x45adc07/0x4742000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826053 data_alloc: 234881024 data_used: 21380808
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x562fffade8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 60383232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x563002a801c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x563001f028c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006850800 session 0x562fffa6b880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006850800 session 0x56300222d500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 60383232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x563000521500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x5630021468c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x5630015e6540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x562fffba1500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x563000521500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7710000/0x0/0x4ffc00000, data 0x4f26c69/0x50bc000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7710000/0x0/0x4ffc00000, data 0x4f26c69/0x50bc000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856880 data_alloc: 234881024 data_used: 21380808
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334716928 unmapped: 60162048 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.183117867s of 11.356911659s, submitted: 52
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298000 session 0x5630007ffc00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 60153856 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006904400 session 0x5630008c4700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 60153856 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770d000/0x0/0x4ffc00000, data 0x4f27c69/0x50bd000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1400 session 0x56300238a380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3885715 data_alloc: 234881024 data_used: 25845448
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630004de400 session 0x563002472380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630004de400 session 0x56300238bc00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770c000/0x0/0x4ffc00000, data 0x4f27c79/0x50be000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770c000/0x0/0x4ffc00000, data 0x4f27c79/0x50be000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919221 data_alloc: 234881024 data_used: 31165394
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770d000/0x0/0x4ffc00000, data 0x4f28c79/0x50bf000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.527291298s of 11.558494568s, submitted: 10
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3950583 data_alloc: 234881024 data_used: 31714258
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e71d8000/0x0/0x4ffc00000, data 0x545dc79/0x55f4000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e71d8000/0x0/0x4ffc00000, data 0x545dc79/0x55f4000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3976585 data_alloc: 234881024 data_used: 31841234
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 54435840 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 53387264 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e5c000/0x0/0x4ffc00000, data 0x57d9c79/0x5970000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3981693 data_alloc: 234881024 data_used: 31906770
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 54034432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.701119423s of 12.182498932s, submitted: 108
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 54034432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 54034432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e5b000/0x0/0x4ffc00000, data 0x57dac79/0x5971000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3a000 session 0x562fffa6b180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e800 session 0x563001564fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340852736 unmapped: 54026240 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2c00 session 0x563002147a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3894666 data_alloc: 234881024 data_used: 26787794
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 54648832 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e725b000/0x0/0x4ffc00000, data 0x4e4fc79/0x4fe6000, compress 0x0/0x0/0x0, omap 0x695bf, meta 0x133c6a41), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e725b000/0x0/0x4ffc00000, data 0x4e4fc79/0x4fe6000, compress 0x0/0x0/0x0, omap 0x695bf, meta 0x133c6a41), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 54648832 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298000 session 0x56300222d6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1400 session 0x563000520540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 54648832 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298000 session 0x563002538fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810023 data_alloc: 234881024 data_used: 21409746
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fc8000/0x0/0x4ffc00000, data 0x45b4c07/0x4749000, compress 0x0/0x0/0x0, omap 0x69a17, meta 0x133c65e9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.566295624s of 10.727230072s, submitted: 92
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de0400 session 0x563002a6b500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8e400 session 0x563000341880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 55099392 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x5630022a8700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563001f02c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8000 session 0x5630026de8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328056832 unmapped: 66822144 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x56300261e8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 66789376 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 66789376 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630052e4800 session 0x563002a6a8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000925000 session 0x563002538e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x5630029adc00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8000 session 0x5630029ad340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.916614532s of 28.000328064s, submitted: 56
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563002473340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630052e4800 session 0x562fffadf340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300157e400 session 0x562fffba0c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002574380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8000 session 0x562fffca1880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9084000/0x0/0x4ffc00000, data 0x35b4c36/0x3748000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608823 data_alloc: 218103808 data_used: 4917565
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898400 session 0x563002146000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328040448 unmapped: 70508544 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 331685888 unmapped: 66863104 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9084000/0x0/0x4ffc00000, data 0x35b4c36/0x3748000, compress 0x0/0x0/0x0, omap 0x6a7aa, meta 0x133c5856), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9084000/0x0/0x4ffc00000, data 0x35b4c36/0x3748000, compress 0x0/0x0/0x0, omap 0x6a7aa, meta 0x133c5856), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702288 data_alloc: 234881024 data_used: 20563277
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6d000 session 0x563002538c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.834800720s of 11.945485115s, submitted: 36
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x563000521880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x5630029ad6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528332 data_alloc: 218103808 data_used: 4917565
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689000 session 0x5630022f2a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630029ad340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583805 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9808000/0x0/0x4ffc00000, data 0x2e31c26/0x2fc4000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583805 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9808000/0x0/0x4ffc00000, data 0x2e31c26/0x2fc4000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004074800 session 0x5630020116c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de3c00 session 0x5630022a8700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x563002a6b500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x563000520540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.817676544s of 17.979703903s, submitted: 75
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3661698 data_alloc: 218103808 data_used: 4921563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004074800 session 0x563002147a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 69746688 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9808000/0x0/0x4ffc00000, data 0x2e31c26/0x2fc4000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689000 session 0x562fffba0a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x5630015e6380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 69730304 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 69730304 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328826880 unmapped: 69722112 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328343552 unmapped: 70205440 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8e64000/0x0/0x4ffc00000, data 0x37d3cab/0x3968000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [1])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x562fffba0e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691301 data_alloc: 218103808 data_used: 12480219
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328310784 unmapped: 70238208 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328310784 unmapped: 70238208 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329940992 unmapped: 68608000 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330096640 unmapped: 68452352 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x563002539340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x56300273f340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330096640 unmapped: 68452352 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e97ff000/0x0/0x4ffc00000, data 0x2e31c6c/0x2fc6000, compress 0x0/0x0/0x0, omap 0x6aa4a, meta 0x133c55b6), peers [0,2] op hist [1])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3a400 session 0x562fffb87340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637665 data_alloc: 218103808 data_used: 10774587
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e97ff000/0x0/0x4ffc00000, data 0x2e31c49/0x2fc5000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x133c5130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.409809113s of 14.661175728s, submitted: 93
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685957 data_alloc: 218103808 data_used: 10787766
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334241792 unmapped: 64307200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 63422464 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ac8000/0x0/0x4ffc00000, data 0x39c8c49/0x3b5c000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ac8000/0x0/0x4ffc00000, data 0x39c8c49/0x3b5c000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710401 data_alloc: 218103808 data_used: 11049910
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ac8000/0x0/0x4ffc00000, data 0x39c8c49/0x3b5c000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706393 data_alloc: 218103808 data_used: 11054006
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ab1000/0x0/0x4ffc00000, data 0x39e7c49/0x3b7b000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ab1000/0x0/0x4ffc00000, data 0x39e7c49/0x3b7b000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.914414406s of 13.816678047s, submitted: 110
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8800 session 0x563001d9a380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759649 data_alloc: 218103808 data_used: 11054006
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e72e2000/0x0/0x4ffc00000, data 0x41b6c49/0x434a000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759649 data_alloc: 218103808 data_used: 11054006
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 62169088 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 62169088 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x563001f02000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abcc00 session 0x5630015e6540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x563001f02fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a7800 session 0x562fffadf180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x5630008b2e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x5630003e7dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 62160896 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69ca000/0x0/0x4ffc00000, data 0x4acec49/0x4c62000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 62152704 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 62152704 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818314 data_alloc: 218103808 data_used: 11054518
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 62480384 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffb12c00 session 0x563002c34c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69ca000/0x0/0x4ffc00000, data 0x4acec49/0x4c62000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a39400 session 0x5630029addc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6d000 session 0x56300222cfc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.657527924s of 16.036277771s, submitted: 37
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f800 session 0x563002c34e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861858 data_alloc: 234881024 data_used: 17980342
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 57384960 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 57384960 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69a4000/0x0/0x4ffc00000, data 0x4af3c49/0x4c87000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 57384960 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968416 data_alloc: 234881024 data_used: 27521974
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 52256768 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5fe9000/0x0/0x4ffc00000, data 0x54a6c49/0x563a000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3996190 data_alloc: 234881024 data_used: 29500342
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 50929664 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 50929664 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915058136s of 12.146306038s, submitted: 106
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e575b000/0x0/0x4ffc00000, data 0x5d3dc49/0x5ed1000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 48087040 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e56f9000/0x0/0x4ffc00000, data 0x5d9fc49/0x5f33000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 48087040 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 48070656 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4058962 data_alloc: 234881024 data_used: 30184374
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 48005120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 48005120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 48005120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 47964160 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e56ed000/0x0/0x4ffc00000, data 0x5dabc49/0x5f3f000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 47964160 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x563002a6b180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x56300238a8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x563000520380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902990 data_alloc: 234881024 data_used: 21246902
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348307456 unmapped: 50241536 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 50380800 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6857000/0x0/0x4ffc00000, data 0x4c41c49/0x4dd5000, compress 0x0/0x0/0x0, omap 0x6b320, meta 0x14564ce0), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006851400 session 0x5630004fe000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da9400 session 0x563001d9a000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.460074425s of 10.716476440s, submitted: 116
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x562fffca1c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 51150848 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 51150848 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630005216c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x3a03c49/0x3b97000, compress 0x0/0x0/0x0, omap 0x6b770, meta 0x14564890), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x5630022f3c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343384064 unmapped: 55164928 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343384064 unmapped: 55164928 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343384064 unmapped: 55164928 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 55148544 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.489187241s of 23.583105087s, submitted: 61
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9800 session 0x562fffca1c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6d000 session 0x5630022f2a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630008c41c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x563002c42e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x5630008c5340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85fc000/0x0/0x4ffc00000, data 0x2e9ebc4/0x3030000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626467 data_alloc: 218103808 data_used: 4913457
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 59219968 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85fc000/0x0/0x4ffc00000, data 0x2e9ebc4/0x3030000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626599 data_alloc: 218103808 data_used: 4913457
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 59785216 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85fc000/0x0/0x4ffc00000, data 0x2e9ebc4/0x3030000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675495 data_alloc: 218103808 data_used: 13154609
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47000 session 0x562fffc00e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3a400 session 0x562fffce6380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630026df340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x56300273fa40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.204570770s of 16.290288925s, submitted: 20
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x5630008b2e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47000 session 0x56300238a8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x56300222cfc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x563001da16c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630026dfa40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7e6e000/0x0/0x4ffc00000, data 0x362ac36/0x37be000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3732498 data_alloc: 218103808 data_used: 13154609
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 55492608 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74c3000/0x0/0x4ffc00000, data 0x3fcfc36/0x4163000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x563001d9a8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaa000 session 0x5630020116c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abd000 session 0x5630022a8c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abd000 session 0x5630026de1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x56300205c540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 57786368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x56300222c700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69a1000/0x0/0x4ffc00000, data 0x4ae8c98/0x4c7d000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 57786368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a39400 session 0x5630022a9dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3874916 data_alloc: 218103808 data_used: 14231857
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 57786368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334f400 session 0x562fffc001c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 57778176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334f400 session 0x563002c43c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.853331566s of 10.227196693s, submitted: 183
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 57958400 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 57958400 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e698b000/0x0/0x4ffc00000, data 0x4b0cc98/0x4ca1000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [1])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 57958400 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907000 session 0x5630026dec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911325 data_alloc: 234881024 data_used: 18573899
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x563002c34c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x5630029addc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630026dee00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 57720832 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348397568 unmapped: 57507840 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6962000/0x0/0x4ffc00000, data 0x4b33cca/0x4cca000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3977248 data_alloc: 234881024 data_used: 27425867
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6962000/0x0/0x4ffc00000, data 0x4b33cca/0x4cca000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.239691734s of 12.257406235s, submitted: 10
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 53223424 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4047860 data_alloc: 234881024 data_used: 27775563
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 53133312 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 52043776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356794368 unmapped: 49111040 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e9a000/0x0/0x4ffc00000, data 0x65f5cca/0x678c000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358236160 unmapped: 47669248 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e76000/0x0/0x4ffc00000, data 0x6611cca/0x67a8000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 47554560 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4167062 data_alloc: 234881024 data_used: 28856907
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 47554560 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e63000/0x0/0x4ffc00000, data 0x6632cca/0x67c9000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4161198 data_alloc: 234881024 data_used: 28861003
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 47538176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e63000/0x0/0x4ffc00000, data 0x6632cca/0x67c9000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.773331642s of 12.197587967s, submitted: 242
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 47529984 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 47529984 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e5d000/0x0/0x4ffc00000, data 0x6638cca/0x67cf000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 47472640 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x56300222c8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x563001d9a1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 47472640 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4194793 data_alloc: 234881024 data_used: 28938827
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 47472640 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 47464448 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 47464448 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4952000/0x0/0x4ffc00000, data 0x6b42d2c/0x6cda000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 47464448 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x56300222c1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x562fffce7a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8000 session 0x562fffb86c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x563002a6a540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3965140 data_alloc: 234881024 data_used: 16362059
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e638a000/0x0/0x4ffc00000, data 0x50e7c98/0x527c000, compress 0x0/0x0/0x0, omap 0x6c3ec, meta 0x14563c14), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e638a000/0x0/0x4ffc00000, data 0x50e7c98/0x527c000, compress 0x0/0x0/0x0, omap 0x6c3ec, meta 0x14563c14), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.327329636s of 12.523733139s, submitted: 92
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x562fffc01180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008b2380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3848324 data_alloc: 234881024 data_used: 17609670
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3848324 data_alloc: 234881024 data_used: 17609670
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 54329344 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 54468608 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6dac000/0x0/0x4ffc00000, data 0x46eac98/0x487f000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 54468608 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3910436 data_alloc: 234881024 data_used: 18441158
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6da5000/0x0/0x4ffc00000, data 0x46f2c98/0x4887000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6da5000/0x0/0x4ffc00000, data 0x46f2c98/0x4887000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3910436 data_alloc: 234881024 data_used: 18441158
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6da5000/0x0/0x4ffc00000, data 0x46f2c98/0x4887000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.755073547s of 20.982717514s, submitted: 105
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000521500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x56300273ec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911900 data_alloc: 234881024 data_used: 18535366
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffca0000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000340700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563002574700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8f000 session 0x563002c34c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffba1340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e705e000/0x0/0x4ffc00000, data 0x4439c98/0x45ce000, compress 0x0/0x0/0x0, omap 0x6c44f, meta 0x14563bb1), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857330 data_alloc: 218103808 data_used: 12335972
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630008b28c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x563001d9ae00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 60399616 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630005208c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x562fffb868c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630022f2a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 59351040 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x563001d9a380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x562fffce6000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83ab000/0x0/0x4ffc00000, data 0x30e8c26/0x327b000, compress 0x0/0x0/0x0, omap 0x6c897, meta 0x14563769), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698486 data_alloc: 218103808 data_used: 4918002
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 59367424 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.583734512s of 21.009777069s, submitted: 135
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 57458688 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dfd000/0x0/0x4ffc00000, data 0x3699c59/0x382e000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799496 data_alloc: 234881024 data_used: 14492402
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796744 data_alloc: 234881024 data_used: 14496498
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36fbc59/0x3890000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.710806847s of 13.025801659s, submitted: 82
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x5630022a9c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x56300222da40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 56131584 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646460 data_alloc: 218103808 data_used: 4917490
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x563002574fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644559 data_alloc: 218103808 data_used: 4913394
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630008c5500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x562fffc01c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x562fffc016c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x562fffba0000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630004ff6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x56300273efc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de0000 session 0x562fffce7340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.425588608s of 20.792432785s, submitted: 85
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x563000520380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691067 data_alloc: 218103808 data_used: 4921469
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.685478210s of 11.705242157s, submitted: 10
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 58195968 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804967 data_alloc: 218103808 data_used: 11819133
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804983 data_alloc: 218103808 data_used: 11819133
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805239 data_alloc: 218103808 data_used: 11827325
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.995227814s of 16.248806000s, submitted: 101
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6c00 session 0x56300238a380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f000 session 0x56300222c700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x562fffce6380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x56300222da40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3859568 data_alloc: 218103808 data_used: 11827325
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf8400 session 0x562fffce7340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861245 data_alloc: 218103808 data_used: 11827325
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [1])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916669 data_alloc: 234881024 data_used: 17889846
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.772537231s of 15.883464813s, submitted: 28
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916885 data_alloc: 234881024 data_used: 17889846
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 65011712 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,0,0,0,1])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60211200 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970057 data_alloc: 234881024 data_used: 19615286
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204720497s of 10.448961258s, submitted: 70
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970329 data_alloc: 234881024 data_used: 19623478
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300645f000 session 0x563002539340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004299800 session 0x563002c421c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,2])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x563002c43c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800321 data_alloc: 218103808 data_used: 8706614
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f74000/0x0/0x4ffc00000, data 0x3524bf7/0x36b8000, compress 0x0/0x0/0x0, omap 0x6dfed, meta 0x14562013), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c42e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630008c4c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x562fffce6540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630015e6380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98000 session 0x562fffce6fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x56300222c540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563000520700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.949302673s of 42.108650208s, submitted: 99
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 63913984 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x5630026df6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630004fe000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015e9400 session 0x562fffba0c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630007fec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c348c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721561 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001f02000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 67919872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.6 total, 600.0 interval#012Cumulative writes: 46K writes, 180K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5448 writes, 20K keys, 5448 commit groups, 1.0 writes per commit group, ingest: 23.91 MB, 0.04 MB/s#012Interval WAL: 5448 writes, 2224 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.338871002s of 17.436998367s, submitted: 22
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 64675840 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832685 data_alloc: 218103808 data_used: 12786707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d82000/0x0/0x4ffc00000, data 0x3716bf7/0x38aa000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563000520380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689800 session 0x56300205c1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563002c42a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x5630008b3340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.907081604s of 13.095699310s, submitted: 66
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x563001d9ba40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x562fffb87340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x56300238b180
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001d9afc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563002574000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906016 data_alloc: 218103808 data_used: 12786707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300b960c00 session 0x563001f02fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.627857208s of 16.728017807s, submitted: 31
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 56852480 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708907127s of 12.569359779s, submitted: 90
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x562fffb861c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da9c00 session 0x5630029ad340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4039350 data_alloc: 234881024 data_used: 25961491
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x56300273f340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 58261504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d76000/0x0/0x4ffc00000, data 0x3722bf7/0x38b6000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 58253312 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3840646 data_alloc: 218103808 data_used: 12786707
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d6f000/0x0/0x4ffc00000, data 0x3729bf7/0x38bd000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba16c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x5630022a9dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 58277888 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630008c4c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 63307776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006905400 session 0x562fffb86700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300205c1c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x5630007fec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba0000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.295139313s of 39.636463165s, submitted: 162
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 62242816 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630022f2c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x563002c34c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffb87c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x5630029ad6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300273f500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x563002473340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8c00 session 0x563002c42380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffba0c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x563002c42a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353583104 unmapped: 63348736 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 63676416 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.666275024s of 17.794780731s, submitted: 43
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 57753600 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e628e000/0x0/0x4ffc00000, data 0x4063c58/0x41f8000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 57647104 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932965 data_alloc: 234881024 data_used: 16882159
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933117 data_alloc: 234881024 data_used: 16882159
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933373 data_alloc: 234881024 data_used: 16890351
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.938798904s of 17.121707916s, submitted: 144
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002538c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b000 session 0x563000520c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x562fffce6fc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001da1c00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a99800 session 0x563000520380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630045ad000 session 0x5630008b3340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x563000340700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002147a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967924 data_alloc: 234881024 data_used: 18733551
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.814036369s of 14.911996841s, submitted: 29
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3968068 data_alloc: 234881024 data_used: 18733551
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [0,0,0,0,0,3,6])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 54755328 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.440196991s of 14.719416618s, submitted: 60
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3941854 data_alloc: 234881024 data_used: 16951791
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x563002574700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x562fffba1340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4098c58/0x422d000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a41000 session 0x563002575500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.221851349s of 42.453300476s, submitted: 113
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x562fffadfa40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.046377182s of 20.163766861s, submitted: 13
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 61046784 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6aee000/0x0/0x4ffc00000, data 0x3804bc4/0x3996000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a74000/0x0/0x4ffc00000, data 0x387ebc4/0x3a10000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3904073 data_alloc: 234881024 data_used: 16845112
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900481 data_alloc: 234881024 data_used: 16845112
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.004393 took=0.000057s
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a5b000/0x0/0x4ffc00000, data 0x389fbc4/0x3a31000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.036879539s of 10.267202377s, submitted: 60
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a3c000/0x0/0x4ffc00000, data 0x38bebc4/0x3a50000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98400 session 0x562fffb86700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3901945 data_alloc: 234881024 data_used: 16853304
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563004b6a400 session 0x563001f02e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563002a41000 session 0x5630008c41c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007a98400 session 0x5630020116c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 382451712 unmapped: 47079424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 285 heartbeat osd_stat(store_statfs(0x4e5986000/0x0/0x4ffc00000, data 0x496f824/0x4b04000, compress 0x0/0x0/0x0, omap 0x6feaa, meta 0x15700156), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007aaac00 session 0x5630015e68c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 286 ms_handle_reset con 0x56300334ec00 session 0x562fffb87a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371097600 unmapped: 58433536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x562fffc0dc00 session 0x562fffba1a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563002a41000 session 0x562fffce76c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x56300334ec00 session 0x563002c42380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007a98400 session 0x563000341880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4041853 data_alloc: 234881024 data_used: 23828792
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e5974000/0x0/0x4ffc00000, data 0x497df6a/0x4b14000, compress 0x0/0x0/0x0, omap 0x70a87, meta 0x156ff579), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529747963s of 10.955096245s, submitted: 94
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007aaac00 session 0x56300273ec40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844847 data_alloc: 218103808 data_used: 4905272
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6b84000/0x0/0x4ffc00000, data 0x3771f6a/0x3908000, compress 0x0/0x0/0x0, omap 0x70ebf, meta 0x156ff141), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6b7f000/0x0/0x4ffc00000, data 0x37739e9/0x390b000, compress 0x0/0x0/0x0, omap 0x70f46, meta 0x156ff0ba), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a46000 session 0x563002c42a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x5630022a88c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x5630022a9340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849155 data_alloc: 218103808 data_used: 4905272
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007a98400 session 0x562fffba0000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007aaac00 session 0x5630007ffc00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x562fffd93800 session 0x563002c43dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x56300222c700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369156096 unmapped: 60375040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x562fffba0e00
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e673b000/0x0/0x4ffc00000, data 0x3bb99e9/0x3d51000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734997749s of 12.027949333s, submitted: 52
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369434624 unmapped: 60096512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6431000/0x0/0x4ffc00000, data 0x3ec39e9/0x405b000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968002 data_alloc: 234881024 data_used: 25452989
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 289 ms_handle_reset con 0x56300645f400 session 0x563002a81340
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0b577/0x2fa3000, compress 0x0/0x0/0x0, omap 0x712fd, meta 0x156fed03), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845228 data_alloc: 218103808 data_used: 9718717
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.752949715s of 10.863764763s, submitted: 72
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3853182 data_alloc: 218103808 data_used: 9947483
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854974 data_alloc: 218103808 data_used: 10385755
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630040a7000 session 0x562fffc00a80
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300157f000 session 0x562fffc01500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x29c6ff6/0x2b60000, compress 0x0/0x0/0x0, omap 0x71519, meta 0x156feae7), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc ms_handle_reset ms_handle_reset con 0x563002a3f400
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: mgrc handle_mgr_configure stats_period=5
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300dcfd000 session 0x5630008c5a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a3ec00 session 0x563002c43500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563006914800 session 0x562fffc00000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x563001f03500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x5630029ac8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630007ff500
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x563002146380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.592243195s of 47.662048340s, submitted: 52
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [0,0,0,2])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630045adc00 session 0x563002a6b6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630029ac8c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71ff000/0x0/0x4ffc00000, data 0x30f3ff6/0x328d000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x562fffba1dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x562fffc00000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x5630008c5a40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836513 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362455040 unmapped: 67076096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71fe000/0x0/0x4ffc00000, data 0x30f4019/0x328e000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300329f400 session 0x563002c43dc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x562fffba0540
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300b961400 session 0x56300273fdc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.875541687s of 39.054084778s, submitted: 43
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.903411865s of 30.936098099s, submitted: 22
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 291 ms_handle_reset con 0x5630040a7c00 session 0x562fffa6b6c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069453453' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 118.484336853s of 118.528816223s, submitted: 35
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x563006915000 session 0x5630022a9880
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621740 data_alloc: 218103808 data_used: 4901211
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x562fffb12c00 session 0x5630008c4c40
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621608 data_alloc: 218103808 data_used: 4901211
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 293 ms_handle_reset con 0x563003da8400 session 0x56300238afc0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592192 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 293 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.014619827s of 13.094200134s, submitted: 43
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 294 ms_handle_reset con 0x563002a3c400 session 0x5630003e6700
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09a000/0x0/0x4ffc00000, data 0x253e12/0x3f0000, compress 0x0/0x0/0x0, omap 0x75c00, meta 0x156fa400), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104069710s of 12.140682220s, submitted: 28
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598801 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8099000/0x0/0x4ffc00000, data 0x2255891/0x23f3000, compress 0x0/0x0/0x0, omap 0x7595f, meta 0x156fa6a1), peers [0,2] op hist [0,0,0,1])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 ms_handle_reset con 0x5630040a6c00 session 0x562fffba01c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4801.6 total, 600.0 interval
                                              Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                              Cumulative WAL: 48K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.04 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2185 writes, 8173 keys, 2185 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s
                                              Interval WAL: 2185 writes, 915 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.617111206s of 111.771911621s, submitted: 17
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359440384 unmapped: 70090752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 297 ms_handle_reset con 0x5630007a8800 session 0x563002c356c0
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359464960 unmapped: 70066176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 70057984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 ms_handle_reset con 0x563006907c00 session 0x563002472380
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e7420000/0x0/0x4ffc00000, data 0x2ec8fec/0x306a000, compress 0x0/0x0/0x0, omap 0x76256, meta 0x156f9daa), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.149941444s of 13.237030983s, submitted: 29
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.955919266s of 22.110708237s, submitted: 90
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 69885952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 299 ms_handle_reset con 0x56300212a000 session 0x56300238a000
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683971 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 69763072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:04:07 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:09:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:02 np0005634017 rsyslogd[1017]: imjournal: 15122 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 06:09:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:04 np0005634017 nova_compute[243452]: 2026-02-28 11:09:04.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:08 np0005634017 nova_compute[243452]: 2026-02-28 11:09:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:09 np0005634017 nova_compute[243452]: 2026-02-28 11:09:09.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:10 np0005634017 nova_compute[243452]: 2026-02-28 11:09:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:10 np0005634017 nova_compute[243452]: 2026-02-28 11:09:10.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:10 np0005634017 nova_compute[243452]: 2026-02-28 11:09:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:09:11 np0005634017 nova_compute[243452]: 2026-02-28 11:09:11.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:12 np0005634017 nova_compute[243452]: 2026-02-28 11:09:12.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:13 np0005634017 nova_compute[243452]: 2026-02-28 11:09:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:13 np0005634017 nova_compute[243452]: 2026-02-28 11:09:13.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:09:13 np0005634017 nova_compute[243452]: 2026-02-28 11:09:13.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:09:13 np0005634017 nova_compute[243452]: 2026-02-28 11:09:13.347 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:09:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:09:14 np0005634017 nova_compute[243452]: 2026-02-28 11:09:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:09:19 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583978062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:09:19 np0005634017 nova_compute[243452]: 2026-02-28 11:09:19.908 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.027 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.029 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.029 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.029 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:09:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.117 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.117 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.133 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:09:20 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:09:20 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573631554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.656 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.662 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.679 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.682 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:09:20 np0005634017 nova_compute[243452]: 2026-02-28 11:09:20.682 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:09:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:24 np0005634017 nova_compute[243452]: 2026-02-28 11:09:24.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:24 np0005634017 nova_compute[243452]: 2026-02-28 11:09:24.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:25 np0005634017 nova_compute[243452]: 2026-02-28 11:09:25.683 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:09:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:29 np0005634017 nova_compute[243452]: 2026-02-28 11:09:29.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:29 np0005634017 nova_compute[243452]: 2026-02-28 11:09:29.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:09:29
Feb 28 06:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.meta']
Feb 28 06:09:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:09:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:09:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:09:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:09:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:32 np0005634017 podman[406642]: 2026-02-28 11:09:32.149176406 +0000 UTC m=+0.073138473 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 06:09:32 np0005634017 podman[406641]: 2026-02-28 11:09:32.232443204 +0000 UTC m=+0.155629329 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 28 06:09:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:34 np0005634017 nova_compute[243452]: 2026-02-28 11:09:34.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:34 np0005634017 nova_compute[243452]: 2026-02-28 11:09:34.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:39 np0005634017 nova_compute[243452]: 2026-02-28 11:09:39.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:39 np0005634017 nova_compute[243452]: 2026-02-28 11:09:39.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:39 np0005634017 nova_compute[243452]: 2026-02-28 11:09:39.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:09:39 np0005634017 nova_compute[243452]: 2026-02-28 11:09:39.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:09:39 np0005634017 nova_compute[243452]: 2026-02-28 11:09:39.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:09:39 np0005634017 nova_compute[243452]: 2026-02-28 11:09:39.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:09:39 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:09:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.136695607 +0000 UTC m=+0.062131811 container create a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:09:40 np0005634017 systemd[1]: Started libpod-conmon-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope.
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.109562088 +0000 UTC m=+0.034998332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:09:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.240660101 +0000 UTC m=+0.166096285 container init a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.251324313 +0000 UTC m=+0.176760507 container start a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.255250344 +0000 UTC m=+0.180686518 container attach a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 06:09:40 np0005634017 adoring_carson[406847]: 167 167
Feb 28 06:09:40 np0005634017 systemd[1]: libpod-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope: Deactivated successfully.
Feb 28 06:09:40 np0005634017 conmon[406847]: conmon a0790a1af8bf9098ebb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope/container/memory.events
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.260895344 +0000 UTC m=+0.186331538 container died a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:09:40 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2ca8df98f9ae806cadfccb57773c30f811c7e392ba124bce7eb6abcd472d4696-merged.mount: Deactivated successfully.
Feb 28 06:09:40 np0005634017 podman[406831]: 2026-02-28 11:09:40.306925588 +0000 UTC m=+0.232361782 container remove a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:09:40 np0005634017 systemd[1]: libpod-conmon-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope: Deactivated successfully.
Feb 28 06:09:40 np0005634017 podman[406871]: 2026-02-28 11:09:40.491670759 +0000 UTC m=+0.060628778 container create e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 06:09:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:09:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:09:40 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:09:40 np0005634017 systemd[1]: Started libpod-conmon-e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6.scope.
Feb 28 06:09:40 np0005634017 podman[406871]: 2026-02-28 11:09:40.468468882 +0000 UTC m=+0.037426941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:09:40 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:09:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:40 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:40 np0005634017 podman[406871]: 2026-02-28 11:09:40.603446725 +0000 UTC m=+0.172404794 container init e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 06:09:40 np0005634017 podman[406871]: 2026-02-28 11:09:40.614979911 +0000 UTC m=+0.183937930 container start e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 06:09:40 np0005634017 podman[406871]: 2026-02-28 11:09:40.618903802 +0000 UTC m=+0.187861871 container attach e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 06:09:41 np0005634017 pensive_merkle[406888]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:09:41 np0005634017 pensive_merkle[406888]: --> All data devices are unavailable
Feb 28 06:09:41 np0005634017 systemd[1]: libpod-e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6.scope: Deactivated successfully.
Feb 28 06:09:41 np0005634017 podman[406871]: 2026-02-28 11:09:41.110541176 +0000 UTC m=+0.679499165 container died e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 06:09:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a-merged.mount: Deactivated successfully.
Feb 28 06:09:41 np0005634017 podman[406871]: 2026-02-28 11:09:41.159148493 +0000 UTC m=+0.728106492 container remove e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 06:09:41 np0005634017 systemd[1]: libpod-conmon-e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6.scope: Deactivated successfully.
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.624831391 +0000 UTC m=+0.053179247 container create 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 06:09:41 np0005634017 systemd[1]: Started libpod-conmon-55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27.scope.
Feb 28 06:09:41 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.606133802 +0000 UTC m=+0.034481668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.710130227 +0000 UTC m=+0.138478103 container init 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.720798269 +0000 UTC m=+0.149146125 container start 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.724942537 +0000 UTC m=+0.153290403 container attach 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 06:09:41 np0005634017 xenodochial_jang[407001]: 167 167
Feb 28 06:09:41 np0005634017 systemd[1]: libpod-55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27.scope: Deactivated successfully.
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.72824964 +0000 UTC m=+0.156597556 container died 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:09:41 np0005634017 systemd[1]: var-lib-containers-storage-overlay-fdb7ee5ba6114939149860c3e4ab0e5e191bc3dcea6d02bed8d9f83bcc6f6215-merged.mount: Deactivated successfully.
Feb 28 06:09:41 np0005634017 podman[406985]: 2026-02-28 11:09:41.771170796 +0000 UTC m=+0.199518672 container remove 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:09:41 np0005634017 systemd[1]: libpod-conmon-55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27.scope: Deactivated successfully.
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:09:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:09:41 np0005634017 podman[407024]: 2026-02-28 11:09:41.940561213 +0000 UTC m=+0.047290410 container create d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:09:41 np0005634017 systemd[1]: Started libpod-conmon-d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee.scope.
Feb 28 06:09:42 np0005634017 podman[407024]: 2026-02-28 11:09:41.918850508 +0000 UTC m=+0.025579785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:09:42 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:09:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:42 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:42 np0005634017 podman[407024]: 2026-02-28 11:09:42.046345319 +0000 UTC m=+0.153074546 container init d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 06:09:42 np0005634017 podman[407024]: 2026-02-28 11:09:42.056435985 +0000 UTC m=+0.163165172 container start d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:09:42 np0005634017 podman[407024]: 2026-02-28 11:09:42.060959853 +0000 UTC m=+0.167689090 container attach d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 06:09:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]: {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:    "0": [
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:        {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "devices": [
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "/dev/loop3"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            ],
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_name": "ceph_lv0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_size": "21470642176",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "name": "ceph_lv0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "tags": {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cluster_name": "ceph",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.crush_device_class": "",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.encrypted": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.objectstore": "bluestore",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osd_id": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.type": "block",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.vdo": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.with_tpm": "0"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            },
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "type": "block",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "vg_name": "ceph_vg0"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:        }
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:    ],
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:    "1": [
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:        {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "devices": [
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "/dev/loop4"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            ],
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_name": "ceph_lv1",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_size": "21470642176",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "name": "ceph_lv1",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "tags": {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cluster_name": "ceph",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.crush_device_class": "",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.encrypted": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.objectstore": "bluestore",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osd_id": "1",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.type": "block",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.vdo": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.with_tpm": "0"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            },
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "type": "block",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "vg_name": "ceph_vg1"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:        }
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:    ],
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:    "2": [
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:        {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "devices": [
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "/dev/loop5"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            ],
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_name": "ceph_lv2",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_size": "21470642176",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "name": "ceph_lv2",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "tags": {
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.cluster_name": "ceph",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.crush_device_class": "",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.encrypted": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.objectstore": "bluestore",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osd_id": "2",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.type": "block",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.vdo": "0",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:                "ceph.with_tpm": "0"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            },
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "type": "block",
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:            "vg_name": "ceph_vg2"
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:        }
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]:    ]
Feb 28 06:09:42 np0005634017 serene_ganguly[407041]: }
Feb 28 06:09:42 np0005634017 systemd[1]: libpod-d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee.scope: Deactivated successfully.
Feb 28 06:09:42 np0005634017 podman[407024]: 2026-02-28 11:09:42.34684334 +0000 UTC m=+0.453572557 container died d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 06:09:42 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd-merged.mount: Deactivated successfully.
Feb 28 06:09:42 np0005634017 podman[407024]: 2026-02-28 11:09:42.451394031 +0000 UTC m=+0.558123228 container remove d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:09:42 np0005634017 systemd[1]: libpod-conmon-d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee.scope: Deactivated successfully.
Feb 28 06:09:42 np0005634017 podman[407126]: 2026-02-28 11:09:42.952308417 +0000 UTC m=+0.054605567 container create 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:09:42 np0005634017 systemd[1]: Started libpod-conmon-379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da.scope.
Feb 28 06:09:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:09:43 np0005634017 podman[407126]: 2026-02-28 11:09:42.93193767 +0000 UTC m=+0.034234870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:09:43 np0005634017 podman[407126]: 2026-02-28 11:09:43.029155423 +0000 UTC m=+0.131452583 container init 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 06:09:43 np0005634017 podman[407126]: 2026-02-28 11:09:43.036334467 +0000 UTC m=+0.138631627 container start 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 06:09:43 np0005634017 podman[407126]: 2026-02-28 11:09:43.039809065 +0000 UTC m=+0.142106195 container attach 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:09:43 np0005634017 frosty_wright[407142]: 167 167
Feb 28 06:09:43 np0005634017 systemd[1]: libpod-379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da.scope: Deactivated successfully.
Feb 28 06:09:43 np0005634017 podman[407126]: 2026-02-28 11:09:43.04278801 +0000 UTC m=+0.145085130 container died 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:09:43 np0005634017 systemd[1]: var-lib-containers-storage-overlay-045fa46a408e6ca26fe22636378e17eed41cd2d3b5290acbe75ba159aedbc4fa-merged.mount: Deactivated successfully.
Feb 28 06:09:43 np0005634017 podman[407126]: 2026-02-28 11:09:43.085843909 +0000 UTC m=+0.188141029 container remove 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:09:43 np0005634017 systemd[1]: libpod-conmon-379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da.scope: Deactivated successfully.
Feb 28 06:09:43 np0005634017 podman[407166]: 2026-02-28 11:09:43.264104288 +0000 UTC m=+0.063428028 container create e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:09:43 np0005634017 systemd[1]: Started libpod-conmon-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope.
Feb 28 06:09:43 np0005634017 podman[407166]: 2026-02-28 11:09:43.239199352 +0000 UTC m=+0.038523182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:09:43 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:09:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:43 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:09:43 np0005634017 podman[407166]: 2026-02-28 11:09:43.376923613 +0000 UTC m=+0.176247413 container init e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 06:09:43 np0005634017 podman[407166]: 2026-02-28 11:09:43.38601399 +0000 UTC m=+0.185337740 container start e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 06:09:43 np0005634017 podman[407166]: 2026-02-28 11:09:43.389236001 +0000 UTC m=+0.188559831 container attach e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 06:09:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:44 np0005634017 nova_compute[243452]: 2026-02-28 11:09:44.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:44 np0005634017 lvm[407262]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:09:44 np0005634017 lvm[407262]: VG ceph_vg1 finished
Feb 28 06:09:44 np0005634017 lvm[407261]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:09:44 np0005634017 lvm[407261]: VG ceph_vg0 finished
Feb 28 06:09:44 np0005634017 lvm[407264]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:09:44 np0005634017 lvm[407264]: VG ceph_vg2 finished
Feb 28 06:09:44 np0005634017 confident_lalande[407183]: {}
Feb 28 06:09:44 np0005634017 systemd[1]: libpod-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope: Deactivated successfully.
Feb 28 06:09:44 np0005634017 systemd[1]: libpod-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope: Consumed 1.244s CPU time.
Feb 28 06:09:44 np0005634017 podman[407166]: 2026-02-28 11:09:44.261803712 +0000 UTC m=+1.061127502 container died e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:09:44 np0005634017 systemd[1]: var-lib-containers-storage-overlay-0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658-merged.mount: Deactivated successfully.
Feb 28 06:09:44 np0005634017 podman[407166]: 2026-02-28 11:09:44.310663466 +0000 UTC m=+1.109987206 container remove e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 06:09:44 np0005634017 systemd[1]: libpod-conmon-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope: Deactivated successfully.
Feb 28 06:09:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:09:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:09:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:09:44 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:09:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:09:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:09:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:09:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241819429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:09:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:09:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241819429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:09:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:49 np0005634017 nova_compute[243452]: 2026-02-28 11:09:49.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:49 np0005634017 nova_compute[243452]: 2026-02-28 11:09:49.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:54 np0005634017 nova_compute[243452]: 2026-02-28 11:09:54.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:54 np0005634017 nova_compute[243452]: 2026-02-28 11:09:54.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:09:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:09:57.917 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:09:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:09:57.917 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:09:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:09:57.918 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:09:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:09:59 np0005634017 nova_compute[243452]: 2026-02-28 11:09:59.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:09:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:10:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:10:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:03 np0005634017 podman[407306]: 2026-02-28 11:10:03.153661769 +0000 UTC m=+0.078287318 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 06:10:03 np0005634017 podman[407305]: 2026-02-28 11:10:03.188756393 +0000 UTC m=+0.112638161 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 06:10:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:04 np0005634017 nova_compute[243452]: 2026-02-28 11:10:04.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:09 np0005634017 nova_compute[243452]: 2026-02-28 11:10:09.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:09 np0005634017 nova_compute[243452]: 2026-02-28 11:10:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:12 np0005634017 nova_compute[243452]: 2026-02-28 11:10:12.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:12 np0005634017 nova_compute[243452]: 2026-02-28 11:10:12.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:12 np0005634017 nova_compute[243452]: 2026-02-28 11:10:12.319 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:10:13 np0005634017 nova_compute[243452]: 2026-02-28 11:10:13.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:14 np0005634017 nova_compute[243452]: 2026-02-28 11:10:14.119 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:14 np0005634017 nova_compute[243452]: 2026-02-28 11:10:14.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:15 np0005634017 nova_compute[243452]: 2026-02-28 11:10:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:15 np0005634017 nova_compute[243452]: 2026-02-28 11:10:15.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:10:15 np0005634017 nova_compute[243452]: 2026-02-28 11:10:15.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:10:15 np0005634017 nova_compute[243452]: 2026-02-28 11:10:15.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:10:15 np0005634017 nova_compute[243452]: 2026-02-28 11:10:15.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:19 np0005634017 nova_compute[243452]: 2026-02-28 11:10:19.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.348 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.349 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.350 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:10:21 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:10:21 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031500812' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:10:21 np0005634017 nova_compute[243452]: 2026-02-28 11:10:21.905 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:10:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.119 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.121 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.122 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.122 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.307 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.308 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.394 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.496 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.497 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.512 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.531 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 28 06:10:22 np0005634017 nova_compute[243452]: 2026-02-28 11:10:22.553 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:10:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:10:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328841490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:10:23 np0005634017 nova_compute[243452]: 2026-02-28 11:10:23.087 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:10:23 np0005634017 nova_compute[243452]: 2026-02-28 11:10:23.095 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:10:23 np0005634017 nova_compute[243452]: 2026-02-28 11:10:23.141 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:10:23 np0005634017 nova_compute[243452]: 2026-02-28 11:10:23.144 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:10:23 np0005634017 nova_compute[243452]: 2026-02-28 11:10:23.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:10:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:24 np0005634017 nova_compute[243452]: 2026-02-28 11:10:24.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:10:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:25 np0005634017 nova_compute[243452]: 2026-02-28 11:10:25.141 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:25 np0005634017 nova_compute[243452]: 2026-02-28 11:10:25.161 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:10:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:29 np0005634017 nova_compute[243452]: 2026-02-28 11:10:29.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:10:29
Feb 28 06:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', 'default.rgw.meta']
Feb 28 06:10:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:10:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:10:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:10:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:10:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:34 np0005634017 nova_compute[243452]: 2026-02-28 11:10:34.127 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:34 np0005634017 nova_compute[243452]: 2026-02-28 11:10:34.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:34 np0005634017 podman[407396]: 2026-02-28 11:10:34.139585377 +0000 UTC m=+0.069452468 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 28 06:10:34 np0005634017 podman[407395]: 2026-02-28 11:10:34.172554761 +0000 UTC m=+0.108848124 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 06:10:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:39 np0005634017 nova_compute[243452]: 2026-02-28 11:10:39.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:39 np0005634017 nova_compute[243452]: 2026-02-28 11:10:39.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:10:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:10:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:44 np0005634017 nova_compute[243452]: 2026-02-28 11:10:44.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.637163507 +0000 UTC m=+0.060189806 container create e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175088020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175088020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:10:45 np0005634017 systemd[1]: Started libpod-conmon-e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f.scope.
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.608191226 +0000 UTC m=+0.031217565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:10:45 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.73685518 +0000 UTC m=+0.159881429 container init e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.748915001 +0000 UTC m=+0.171941260 container start e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.752313888 +0000 UTC m=+0.175340167 container attach e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 06:10:45 np0005634017 romantic_taussig[407602]: 167 167
Feb 28 06:10:45 np0005634017 systemd[1]: libpod-e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f.scope: Deactivated successfully.
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.759552433 +0000 UTC m=+0.182578732 container died e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:10:45 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:10:45 np0005634017 systemd[1]: var-lib-containers-storage-overlay-49ddceb7231719c860f06dd73a3a37f779ceb575188d1f6918c28744318e0a82-merged.mount: Deactivated successfully.
Feb 28 06:10:45 np0005634017 podman[407586]: 2026-02-28 11:10:45.813376677 +0000 UTC m=+0.236402976 container remove e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:10:45 np0005634017 systemd[1]: libpod-conmon-e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f.scope: Deactivated successfully.
Feb 28 06:10:45 np0005634017 podman[407630]: 2026-02-28 11:10:45.995857255 +0000 UTC m=+0.062601584 container create fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:10:46 np0005634017 systemd[1]: Started libpod-conmon-fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3.scope.
Feb 28 06:10:46 np0005634017 podman[407630]: 2026-02-28 11:10:45.96742505 +0000 UTC m=+0.034169429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:10:46 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:10:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:46 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:46 np0005634017 podman[407630]: 2026-02-28 11:10:46.09667188 +0000 UTC m=+0.163416259 container init fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 06:10:46 np0005634017 podman[407630]: 2026-02-28 11:10:46.109158194 +0000 UTC m=+0.175902523 container start fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 06:10:46 np0005634017 podman[407630]: 2026-02-28 11:10:46.113586529 +0000 UTC m=+0.180330848 container attach fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 06:10:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:46 np0005634017 elastic_ritchie[407646]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:10:46 np0005634017 elastic_ritchie[407646]: --> All data devices are unavailable
Feb 28 06:10:46 np0005634017 systemd[1]: libpod-fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3.scope: Deactivated successfully.
Feb 28 06:10:46 np0005634017 podman[407630]: 2026-02-28 11:10:46.653884541 +0000 UTC m=+0.720628860 container died fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 06:10:46 np0005634017 systemd[1]: var-lib-containers-storage-overlay-e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d-merged.mount: Deactivated successfully.
Feb 28 06:10:46 np0005634017 podman[407630]: 2026-02-28 11:10:46.707045857 +0000 UTC m=+0.773790176 container remove fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:10:46 np0005634017 systemd[1]: libpod-conmon-fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3.scope: Deactivated successfully.
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.214987022 +0000 UTC m=+0.046091116 container create 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 06:10:47 np0005634017 systemd[1]: Started libpod-conmon-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope.
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.197030984 +0000 UTC m=+0.028134848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:10:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.320799179 +0000 UTC m=+0.151903113 container init 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.32930952 +0000 UTC m=+0.160413404 container start 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:10:47 np0005634017 systemd[1]: libpod-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope: Deactivated successfully.
Feb 28 06:10:47 np0005634017 admiring_germain[407756]: 167 167
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.334088225 +0000 UTC m=+0.165192109 container attach 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 06:10:47 np0005634017 conmon[407756]: conmon 82e8be20fad9a494b2d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope/container/memory.events
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.335051633 +0000 UTC m=+0.166155477 container died 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 06:10:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-20de343a137311074b119a53729ef63d8a875c841fc2efea84e41b45ad6f8ed9-merged.mount: Deactivated successfully.
Feb 28 06:10:47 np0005634017 podman[407740]: 2026-02-28 11:10:47.3642582 +0000 UTC m=+0.195362044 container remove 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:10:47 np0005634017 systemd[1]: libpod-conmon-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope: Deactivated successfully.
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.479994568 +0000 UTC m=+0.037126113 container create 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 06:10:47 np0005634017 systemd[1]: Started libpod-conmon-07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af.scope.
Feb 28 06:10:47 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:10:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.461999398 +0000 UTC m=+0.019130903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:10:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:47 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.576428789 +0000 UTC m=+0.133560374 container init 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.585261529 +0000 UTC m=+0.142393024 container start 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.58918515 +0000 UTC m=+0.146316735 container attach 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 06:10:47 np0005634017 fervent_gates[407797]: {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:    "0": [
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:        {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "devices": [
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "/dev/loop3"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            ],
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_name": "ceph_lv0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_size": "21470642176",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "name": "ceph_lv0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "tags": {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cluster_name": "ceph",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.crush_device_class": "",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.encrypted": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.objectstore": "bluestore",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osd_id": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.type": "block",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.vdo": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.with_tpm": "0"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            },
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "type": "block",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "vg_name": "ceph_vg0"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:        }
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:    ],
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:    "1": [
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:        {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "devices": [
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "/dev/loop4"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            ],
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_name": "ceph_lv1",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_size": "21470642176",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "name": "ceph_lv1",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "tags": {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cluster_name": "ceph",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.crush_device_class": "",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.encrypted": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.objectstore": "bluestore",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osd_id": "1",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.type": "block",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.vdo": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.with_tpm": "0"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            },
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "type": "block",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "vg_name": "ceph_vg1"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:        }
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:    ],
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:    "2": [
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:        {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "devices": [
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "/dev/loop5"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            ],
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_name": "ceph_lv2",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_size": "21470642176",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "name": "ceph_lv2",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "tags": {
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.cluster_name": "ceph",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.crush_device_class": "",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.encrypted": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.objectstore": "bluestore",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osd_id": "2",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.type": "block",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.vdo": "0",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:                "ceph.with_tpm": "0"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            },
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "type": "block",
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:            "vg_name": "ceph_vg2"
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:        }
Feb 28 06:10:47 np0005634017 fervent_gates[407797]:    ]
Feb 28 06:10:47 np0005634017 fervent_gates[407797]: }
Feb 28 06:10:47 np0005634017 systemd[1]: libpod-07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af.scope: Deactivated successfully.
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.891942414 +0000 UTC m=+0.449073949 container died 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:10:47 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203-merged.mount: Deactivated successfully.
Feb 28 06:10:47 np0005634017 podman[407781]: 2026-02-28 11:10:47.946770447 +0000 UTC m=+0.503901992 container remove 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 06:10:47 np0005634017 systemd[1]: libpod-conmon-07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af.scope: Deactivated successfully.
Feb 28 06:10:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.382350122 +0000 UTC m=+0.055324757 container create ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:10:48 np0005634017 systemd[1]: Started libpod-conmon-ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595.scope.
Feb 28 06:10:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.359854275 +0000 UTC m=+0.032828960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.455593597 +0000 UTC m=+0.128568242 container init ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.462671247 +0000 UTC m=+0.135645882 container start ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:10:48 np0005634017 vibrant_nash[407901]: 167 167
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.466491055 +0000 UTC m=+0.139465720 container attach ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 06:10:48 np0005634017 systemd[1]: libpod-ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595.scope: Deactivated successfully.
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.467520585 +0000 UTC m=+0.140495230 container died ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:10:48 np0005634017 systemd[1]: var-lib-containers-storage-overlay-137e4a1ccc62147bf23770756fd3c971a28b4e571716a24837c972912c4ac2bf-merged.mount: Deactivated successfully.
Feb 28 06:10:48 np0005634017 podman[407885]: 2026-02-28 11:10:48.511276444 +0000 UTC m=+0.184251079 container remove ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 06:10:48 np0005634017 systemd[1]: libpod-conmon-ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595.scope: Deactivated successfully.
Feb 28 06:10:48 np0005634017 podman[407923]: 2026-02-28 11:10:48.670760631 +0000 UTC m=+0.053173077 container create efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 06:10:48 np0005634017 systemd[1]: Started libpod-conmon-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope.
Feb 28 06:10:48 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:10:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:48 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:10:48 np0005634017 podman[407923]: 2026-02-28 11:10:48.650227859 +0000 UTC m=+0.032640365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:10:48 np0005634017 podman[407923]: 2026-02-28 11:10:48.756334094 +0000 UTC m=+0.138746570 container init efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:10:48 np0005634017 podman[407923]: 2026-02-28 11:10:48.76255219 +0000 UTC m=+0.144964636 container start efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:10:48 np0005634017 podman[407923]: 2026-02-28 11:10:48.76571359 +0000 UTC m=+0.148126036 container attach efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 06:10:49 np0005634017 nova_compute[243452]: 2026-02-28 11:10:49.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:49 np0005634017 lvm[408018]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:10:49 np0005634017 lvm[408018]: VG ceph_vg1 finished
Feb 28 06:10:49 np0005634017 lvm[408017]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:10:49 np0005634017 lvm[408017]: VG ceph_vg0 finished
Feb 28 06:10:49 np0005634017 lvm[408020]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:10:49 np0005634017 lvm[408020]: VG ceph_vg2 finished
Feb 28 06:10:49 np0005634017 vibrant_rosalind[407939]: {}
Feb 28 06:10:49 np0005634017 systemd[1]: libpod-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope: Deactivated successfully.
Feb 28 06:10:49 np0005634017 podman[407923]: 2026-02-28 11:10:49.587953456 +0000 UTC m=+0.970365952 container died efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:10:49 np0005634017 systemd[1]: libpod-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope: Consumed 1.146s CPU time.
Feb 28 06:10:49 np0005634017 systemd[1]: var-lib-containers-storage-overlay-250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0-merged.mount: Deactivated successfully.
Feb 28 06:10:49 np0005634017 podman[407923]: 2026-02-28 11:10:49.638527409 +0000 UTC m=+1.020939895 container remove efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:10:49 np0005634017 systemd[1]: libpod-conmon-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope: Deactivated successfully.
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:10:49 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:10:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:54 np0005634017 nova_compute[243452]: 2026-02-28 11:10:54.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:10:54 np0005634017 nova_compute[243452]: 2026-02-28 11:10:54.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:10:54 np0005634017 nova_compute[243452]: 2026-02-28 11:10:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:10:54 np0005634017 nova_compute[243452]: 2026-02-28 11:10:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:10:54 np0005634017 nova_compute[243452]: 2026-02-28 11:10:54.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:10:54 np0005634017 nova_compute[243452]: 2026-02-28 11:10:54.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:10:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:10:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:10:57.918 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:10:57.918 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:10:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:10:57.919 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:10:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:10:59 np0005634017 nova_compute[243452]: 2026-02-28 11:10:59.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:10:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:11:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:11:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:04 np0005634017 nova_compute[243452]: 2026-02-28 11:11:04.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.857489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064857520, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3485006, "memory_usage": 3546992, "flush_reason": "Manual Compaction"}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064874238, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3418278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66039, "largest_seqno": 68090, "table_properties": {"data_size": 3408833, "index_size": 6002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18683, "raw_average_key_size": 20, "raw_value_size": 3390193, "raw_average_value_size": 3645, "num_data_blocks": 266, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276835, "oldest_key_time": 1772276835, "file_creation_time": 1772277064, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 16846 microseconds, and 8761 cpu microseconds.
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.874324) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3418278 bytes OK
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.874353) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.876729) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.876760) EVENT_LOG_v1 {"time_micros": 1772277064876749, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.876793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3476415, prev total WAL file size 3476415, number of live WAL files 2.
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.878406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(3338KB)], [158(9217KB)]
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064878466, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12857447, "oldest_snapshot_seqno": -1}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8707 keys, 11098323 bytes, temperature: kUnknown
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064933929, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11098323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11042259, "index_size": 33167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 228037, "raw_average_key_size": 26, "raw_value_size": 10888926, "raw_average_value_size": 1250, "num_data_blocks": 1284, "num_entries": 8707, "num_filter_entries": 8707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772277064, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.934461) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11098323 bytes
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.936802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.9 rd, 199.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 9221, records dropped: 514 output_compression: NoCompression
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.936836) EVENT_LOG_v1 {"time_micros": 1772277064936820, "job": 98, "event": "compaction_finished", "compaction_time_micros": 55693, "compaction_time_cpu_micros": 35531, "output_level": 6, "num_output_files": 1, "total_output_size": 11098323, "num_input_records": 9221, "num_output_records": 8707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064937554, "job": 98, "event": "table_file_deletion", "file_number": 160}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064938944, "job": 98, "event": "table_file_deletion", "file_number": 158}
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.878196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:11:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:11:05 np0005634017 podman[408062]: 2026-02-28 11:11:05.145374206 +0000 UTC m=+0.072975788 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 06:11:05 np0005634017 podman[408061]: 2026-02-28 11:11:05.239877093 +0000 UTC m=+0.167519906 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 06:11:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:09 np0005634017 nova_compute[243452]: 2026-02-28 11:11:09.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:10 np0005634017 nova_compute[243452]: 2026-02-28 11:11:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:13 np0005634017 nova_compute[243452]: 2026-02-28 11:11:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:14 np0005634017 nova_compute[243452]: 2026-02-28 11:11:14.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:11:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:15 np0005634017 nova_compute[243452]: 2026-02-28 11:11:15.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:15 np0005634017 nova_compute[243452]: 2026-02-28 11:11:15.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:11:15 np0005634017 nova_compute[243452]: 2026-02-28 11:11:15.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:11:15 np0005634017 nova_compute[243452]: 2026-02-28 11:11:15.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:11:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:17 np0005634017 nova_compute[243452]: 2026-02-28 11:11:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:19 np0005634017 nova_compute[243452]: 2026-02-28 11:11:19.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.345 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:11:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:11:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420158980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:11:23 np0005634017 nova_compute[243452]: 2026-02-28 11:11:23.887 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.101 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.103 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3554MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.104 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.104 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.165 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.166 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.182 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.221 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:11:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497188344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.749 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.754 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.771 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.773 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:11:24 np0005634017 nova_compute[243452]: 2026-02-28 11:11:24.774 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:11:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:27 np0005634017 nova_compute[243452]: 2026-02-28 11:11:27.774 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:11:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:29 np0005634017 nova_compute[243452]: 2026-02-28 11:11:29.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:11:29
Feb 28 06:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb 28 06:11:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:11:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:11:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:11:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:11:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:34 np0005634017 nova_compute[243452]: 2026-02-28 11:11:34.261 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:36 np0005634017 podman[408150]: 2026-02-28 11:11:36.14458872 +0000 UTC m=+0.068148871 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 06:11:36 np0005634017 podman[408149]: 2026-02-28 11:11:36.172763108 +0000 UTC m=+0.106854148 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 06:11:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:39 np0005634017 nova_compute[243452]: 2026-02-28 11:11:39.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:11:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:11:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:44 np0005634017 nova_compute[243452]: 2026-02-28 11:11:44.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:44 np0005634017 nova_compute[243452]: 2026-02-28 11:11:44.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:44 np0005634017 nova_compute[243452]: 2026-02-28 11:11:44.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:11:44 np0005634017 nova_compute[243452]: 2026-02-28 11:11:44.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:44 np0005634017 nova_compute[243452]: 2026-02-28 11:11:44.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:44 np0005634017 nova_compute[243452]: 2026-02-28 11:11:44.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:11:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658666051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:11:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:11:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658666051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:11:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:49 np0005634017 nova_compute[243452]: 2026-02-28 11:11:49.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:49 np0005634017 nova_compute[243452]: 2026-02-28 11:11:49.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:11:49 np0005634017 nova_compute[243452]: 2026-02-28 11:11:49.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:11:49 np0005634017 nova_compute[243452]: 2026-02-28 11:11:49.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:49 np0005634017 nova_compute[243452]: 2026-02-28 11:11:49.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:49 np0005634017 nova_compute[243452]: 2026-02-28 11:11:49.307 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:11:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:11:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:50 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:11:50 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:51 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.585574753 +0000 UTC m=+0.051217961 container create ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 06:11:51 np0005634017 systemd[1]: Started libpod-conmon-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope.
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.5579269 +0000 UTC m=+0.023570138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:11:51 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.681670045 +0000 UTC m=+0.147313293 container init ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.688838448 +0000 UTC m=+0.154481616 container start ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.692519132 +0000 UTC m=+0.158162340 container attach ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:11:51 np0005634017 systemd[1]: libpod-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope: Deactivated successfully.
Feb 28 06:11:51 np0005634017 hopeful_einstein[408426]: 167 167
Feb 28 06:11:51 np0005634017 conmon[408426]: conmon ff7df16d44cbf8dd1030 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope/container/memory.events
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.703202855 +0000 UTC m=+0.168846023 container died ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:11:51 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c9e14ff1e17183cf20fa903daf8ab29f77ef91c2b45f93275a4eb0f1e9701284-merged.mount: Deactivated successfully.
Feb 28 06:11:51 np0005634017 podman[408410]: 2026-02-28 11:11:51.750735191 +0000 UTC m=+0.216378359 container remove ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 06:11:51 np0005634017 systemd[1]: libpod-conmon-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope: Deactivated successfully.
Feb 28 06:11:51 np0005634017 podman[408450]: 2026-02-28 11:11:51.928036662 +0000 UTC m=+0.073074180 container create 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:11:51 np0005634017 systemd[1]: Started libpod-conmon-9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a.scope.
Feb 28 06:11:51 np0005634017 podman[408450]: 2026-02-28 11:11:51.895056908 +0000 UTC m=+0.040094486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:11:52 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:11:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:52 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:52 np0005634017 podman[408450]: 2026-02-28 11:11:52.038103499 +0000 UTC m=+0.183141087 container init 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:11:52 np0005634017 podman[408450]: 2026-02-28 11:11:52.059854576 +0000 UTC m=+0.204892104 container start 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 06:11:52 np0005634017 podman[408450]: 2026-02-28 11:11:52.064232319 +0000 UTC m=+0.209269847 container attach 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:11:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:52 np0005634017 gifted_lewin[408466]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:11:52 np0005634017 gifted_lewin[408466]: --> All data devices are unavailable
Feb 28 06:11:52 np0005634017 systemd[1]: libpod-9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a.scope: Deactivated successfully.
Feb 28 06:11:52 np0005634017 podman[408450]: 2026-02-28 11:11:52.523903298 +0000 UTC m=+0.668940786 container died 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 06:11:52 np0005634017 systemd[1]: var-lib-containers-storage-overlay-9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532-merged.mount: Deactivated successfully.
Feb 28 06:11:52 np0005634017 podman[408450]: 2026-02-28 11:11:52.567173343 +0000 UTC m=+0.712210841 container remove 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 06:11:52 np0005634017 systemd[1]: libpod-conmon-9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a.scope: Deactivated successfully.
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:53.010655522 +0000 UTC m=+0.059249639 container create 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:11:53 np0005634017 systemd[1]: Started libpod-conmon-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope.
Feb 28 06:11:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:52.986352714 +0000 UTC m=+0.034946921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:53.093220221 +0000 UTC m=+0.141814388 container init 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:53.101893086 +0000 UTC m=+0.150487243 container start 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:53.106674392 +0000 UTC m=+0.155268519 container attach 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:11:53 np0005634017 affectionate_montalcini[408577]: 167 167
Feb 28 06:11:53 np0005634017 systemd[1]: libpod-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope: Deactivated successfully.
Feb 28 06:11:53 np0005634017 conmon[408577]: conmon 2a1ad97cd0f0917d9bcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope/container/memory.events
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:53.110916302 +0000 UTC m=+0.159510419 container died 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 06:11:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-b70bd2530ee200474ebb0aa8bc16b79ffb9da2e6eb59cc00e6e8a6f5ddfb7b05-merged.mount: Deactivated successfully.
Feb 28 06:11:53 np0005634017 podman[408560]: 2026-02-28 11:11:53.159301712 +0000 UTC m=+0.207895889 container remove 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:11:53 np0005634017 systemd[1]: libpod-conmon-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope: Deactivated successfully.
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.342966994 +0000 UTC m=+0.059065404 container create e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 06:11:53 np0005634017 systemd[1]: Started libpod-conmon-e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59.scope.
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.317441901 +0000 UTC m=+0.033540291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:11:53 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:11:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:53 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.442168303 +0000 UTC m=+0.158266693 container init e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.448211724 +0000 UTC m=+0.164310134 container start e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.451486777 +0000 UTC m=+0.167585167 container attach e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 06:11:53 np0005634017 elated_thompson[408615]: {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:    "0": [
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:        {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "devices": [
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "/dev/loop3"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            ],
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_name": "ceph_lv0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_size": "21470642176",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "name": "ceph_lv0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "tags": {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cluster_name": "ceph",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.crush_device_class": "",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.encrypted": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.objectstore": "bluestore",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osd_id": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.type": "block",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.vdo": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.with_tpm": "0"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            },
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "type": "block",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "vg_name": "ceph_vg0"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:        }
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:    ],
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:    "1": [
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:        {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "devices": [
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "/dev/loop4"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            ],
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_name": "ceph_lv1",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_size": "21470642176",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "name": "ceph_lv1",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "tags": {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cluster_name": "ceph",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.crush_device_class": "",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.encrypted": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.objectstore": "bluestore",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osd_id": "1",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.type": "block",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.vdo": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.with_tpm": "0"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            },
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "type": "block",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "vg_name": "ceph_vg1"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:        }
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:    ],
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:    "2": [
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:        {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "devices": [
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "/dev/loop5"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            ],
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_name": "ceph_lv2",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_size": "21470642176",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "name": "ceph_lv2",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "tags": {
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.cluster_name": "ceph",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.crush_device_class": "",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.encrypted": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.objectstore": "bluestore",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osd_id": "2",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.type": "block",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.vdo": "0",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:                "ceph.with_tpm": "0"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            },
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "type": "block",
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:            "vg_name": "ceph_vg2"
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:        }
Feb 28 06:11:53 np0005634017 elated_thompson[408615]:    ]
Feb 28 06:11:53 np0005634017 elated_thompson[408615]: }
Feb 28 06:11:53 np0005634017 systemd[1]: libpod-e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59.scope: Deactivated successfully.
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.804018921 +0000 UTC m=+0.520117341 container died e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 06:11:53 np0005634017 systemd[1]: var-lib-containers-storage-overlay-3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77-merged.mount: Deactivated successfully.
Feb 28 06:11:53 np0005634017 podman[408599]: 2026-02-28 11:11:53.853454701 +0000 UTC m=+0.569553071 container remove e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 06:11:53 np0005634017 systemd[1]: libpod-conmon-e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59.scope: Deactivated successfully.
Feb 28 06:11:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:54 np0005634017 nova_compute[243452]: 2026-02-28 11:11:54.307 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:54 np0005634017 nova_compute[243452]: 2026-02-28 11:11:54.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.35939225 +0000 UTC m=+0.052171619 container create 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:11:54 np0005634017 systemd[1]: Started libpod-conmon-6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7.scope.
Feb 28 06:11:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.335336759 +0000 UTC m=+0.028116148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:11:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.442001249 +0000 UTC m=+0.134780638 container init 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.448891565 +0000 UTC m=+0.141670924 container start 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.452000903 +0000 UTC m=+0.144780292 container attach 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:11:54 np0005634017 adoring_beaver[408715]: 167 167
Feb 28 06:11:54 np0005634017 systemd[1]: libpod-6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7.scope: Deactivated successfully.
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.455961045 +0000 UTC m=+0.148740454 container died 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:11:54 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ce1e73f44e00d7ee204dc7c919f4dd0b32f065aeb19846f9c14ccfa0f82d16a7-merged.mount: Deactivated successfully.
Feb 28 06:11:54 np0005634017 podman[408699]: 2026-02-28 11:11:54.503689127 +0000 UTC m=+0.196468486 container remove 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 06:11:54 np0005634017 systemd[1]: libpod-conmon-6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7.scope: Deactivated successfully.
Feb 28 06:11:54 np0005634017 podman[408742]: 2026-02-28 11:11:54.704686469 +0000 UTC m=+0.059923658 container create db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 06:11:54 np0005634017 systemd[1]: Started libpod-conmon-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope.
Feb 28 06:11:54 np0005634017 podman[408742]: 2026-02-28 11:11:54.679557387 +0000 UTC m=+0.034794666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:11:54 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:11:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:54 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:11:54 np0005634017 podman[408742]: 2026-02-28 11:11:54.810479385 +0000 UTC m=+0.165716594 container init db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 28 06:11:54 np0005634017 podman[408742]: 2026-02-28 11:11:54.820551621 +0000 UTC m=+0.175788810 container start db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 06:11:54 np0005634017 podman[408742]: 2026-02-28 11:11:54.824627456 +0000 UTC m=+0.179864685 container attach db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 06:11:55 np0005634017 lvm[408838]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:11:55 np0005634017 lvm[408838]: VG ceph_vg1 finished
Feb 28 06:11:55 np0005634017 lvm[408837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:11:55 np0005634017 lvm[408837]: VG ceph_vg0 finished
Feb 28 06:11:55 np0005634017 lvm[408840]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:11:55 np0005634017 lvm[408840]: VG ceph_vg2 finished
Feb 28 06:11:55 np0005634017 boring_sanderson[408759]: {}
Feb 28 06:11:55 np0005634017 systemd[1]: libpod-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope: Deactivated successfully.
Feb 28 06:11:55 np0005634017 podman[408742]: 2026-02-28 11:11:55.660790317 +0000 UTC m=+1.016027506 container died db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:11:55 np0005634017 systemd[1]: libpod-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope: Consumed 1.230s CPU time.
Feb 28 06:11:55 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3-merged.mount: Deactivated successfully.
Feb 28 06:11:55 np0005634017 podman[408742]: 2026-02-28 11:11:55.69336221 +0000 UTC m=+1.048599399 container remove db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:11:55 np0005634017 systemd[1]: libpod-conmon-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope: Deactivated successfully.
Feb 28 06:11:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:11:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:55 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:11:55 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:11:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:11:57.919 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:11:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:11:57.921 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:11:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:11:57.922 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:11:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:11:59 np0005634017 nova_compute[243452]: 2026-02-28 11:11:59.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:11:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:12:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:12:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:04 np0005634017 nova_compute[243452]: 2026-02-28 11:12:04.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:07 np0005634017 podman[408882]: 2026-02-28 11:12:07.147989342 +0000 UTC m=+0.068953874 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 06:12:07 np0005634017 podman[408881]: 2026-02-28 11:12:07.17757929 +0000 UTC m=+0.104877541 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 06:12:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:09 np0005634017 nova_compute[243452]: 2026-02-28 11:12:09.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:12 np0005634017 nova_compute[243452]: 2026-02-28 11:12:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:13 np0005634017 nova_compute[243452]: 2026-02-28 11:12:13.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:14 np0005634017 nova_compute[243452]: 2026-02-28 11:12:14.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:14 np0005634017 nova_compute[243452]: 2026-02-28 11:12:14.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:14 np0005634017 nova_compute[243452]: 2026-02-28 11:12:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:14 np0005634017 nova_compute[243452]: 2026-02-28 11:12:14.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:12:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:16 np0005634017 nova_compute[243452]: 2026-02-28 11:12:16.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:16 np0005634017 nova_compute[243452]: 2026-02-28 11:12:16.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:12:16 np0005634017 nova_compute[243452]: 2026-02-28 11:12:16.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:12:16 np0005634017 nova_compute[243452]: 2026-02-28 11:12:16.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:12:16 np0005634017 nova_compute[243452]: 2026-02-28 11:12:16.337 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:18 np0005634017 nova_compute[243452]: 2026-02-28 11:12:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:19 np0005634017 nova_compute[243452]: 2026-02-28 11:12:19.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.357 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.357 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.358 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.358 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.359 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:12:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:12:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/430315698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:12:23 np0005634017 nova_compute[243452]: 2026-02-28 11:12:23.956 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.138 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.140 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3553MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.141 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.141 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.211 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.212 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.227 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:12:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.319 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:12:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:12:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325925670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.797 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.804 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.823 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.825 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:12:24 np0005634017 nova_compute[243452]: 2026-02-28 11:12:24.825 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:12:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:26 np0005634017 nova_compute[243452]: 2026-02-28 11:12:26.822 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:27 np0005634017 nova_compute[243452]: 2026-02-28 11:12:27.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:12:29
Feb 28 06:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Feb 28 06:12:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:12:29 np0005634017 nova_compute[243452]: 2026-02-28 11:12:29.320 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:29 np0005634017 nova_compute[243452]: 2026-02-28 11:12:29.324 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:12:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:12:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:12:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:34 np0005634017 nova_compute[243452]: 2026-02-28 11:12:34.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:34 np0005634017 nova_compute[243452]: 2026-02-28 11:12:34.326 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.340 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.349 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.349 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.349 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.350 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Removable base files: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.350 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.350 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Feb 28 06:12:36 np0005634017 nova_compute[243452]: 2026-02-28 11:12:36.352 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Feb 28 06:12:38 np0005634017 podman[408971]: 2026-02-28 11:12:38.135835363 +0000 UTC m=+0.060171705 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 06:12:38 np0005634017 podman[408970]: 2026-02-28 11:12:38.175933038 +0000 UTC m=+0.105025064 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 06:12:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:39 np0005634017 nova_compute[243452]: 2026-02-28 11:12:39.289 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:39 np0005634017 nova_compute[243452]: 2026-02-28 11:12:39.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:39 np0005634017 nova_compute[243452]: 2026-02-28 11:12:39.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:41 np0005634017 nova_compute[243452]: 2026-02-28 11:12:41.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:12:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:12:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:44 np0005634017 nova_compute[243452]: 2026-02-28 11:12:44.326 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:12:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1003014709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:12:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:12:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1003014709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:12:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:49 np0005634017 nova_compute[243452]: 2026-02-28 11:12:49.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:49 np0005634017 nova_compute[243452]: 2026-02-28 11:12:49.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:50 np0005634017 nova_compute[243452]: 2026-02-28 11:12:50.339 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:50 np0005634017 nova_compute[243452]: 2026-02-28 11:12:50.340 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 28 06:12:50 np0005634017 nova_compute[243452]: 2026-02-28 11:12:50.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 28 06:12:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:54 np0005634017 nova_compute[243452]: 2026-02-28 11:12:54.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:56 np0005634017 nova_compute[243452]: 2026-02-28 11:12:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:12:56 np0005634017 nova_compute[243452]: 2026-02-28 11:12:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:12:56 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:12:56 np0005634017 podman[409157]: 2026-02-28 11:12:56.864623055 +0000 UTC m=+0.041233938 container create d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:12:56 np0005634017 systemd[1]: Started libpod-conmon-d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f.scope.
Feb 28 06:12:56 np0005634017 podman[409157]: 2026-02-28 11:12:56.848916361 +0000 UTC m=+0.025527274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:12:56 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:12:56 np0005634017 podman[409157]: 2026-02-28 11:12:56.966688035 +0000 UTC m=+0.143298948 container init d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:12:56 np0005634017 podman[409157]: 2026-02-28 11:12:56.976483862 +0000 UTC m=+0.153094765 container start d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:12:56 np0005634017 exciting_gates[409173]: 167 167
Feb 28 06:12:56 np0005634017 systemd[1]: libpod-d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f.scope: Deactivated successfully.
Feb 28 06:12:56 np0005634017 podman[409157]: 2026-02-28 11:12:56.980103195 +0000 UTC m=+0.156714078 container attach d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 06:12:56 np0005634017 podman[409157]: 2026-02-28 11:12:56.9831115 +0000 UTC m=+0.159722383 container died d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:12:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c93496747ecb86d9f387e0cf12243837368608e4609e6884e42c8e4f1b8bc946-merged.mount: Deactivated successfully.
Feb 28 06:12:57 np0005634017 podman[409157]: 2026-02-28 11:12:57.029983817 +0000 UTC m=+0.206594710 container remove d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 06:12:57 np0005634017 systemd[1]: libpod-conmon-d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f.scope: Deactivated successfully.
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.218382389 +0000 UTC m=+0.061636815 container create 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:12:57 np0005634017 systemd[1]: Started libpod-conmon-88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4.scope.
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.192526857 +0000 UTC m=+0.035781343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:12:57 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:12:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:57 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.325009098 +0000 UTC m=+0.168263594 container init 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.338282274 +0000 UTC m=+0.181536670 container start 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.342265937 +0000 UTC m=+0.185520443 container attach 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 06:12:57 np0005634017 wizardly_payne[409214]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:12:57 np0005634017 wizardly_payne[409214]: --> All data devices are unavailable
Feb 28 06:12:57 np0005634017 systemd[1]: libpod-88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4.scope: Deactivated successfully.
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.797961947 +0000 UTC m=+0.641216403 container died 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:12:57 np0005634017 systemd[1]: var-lib-containers-storage-overlay-8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0-merged.mount: Deactivated successfully.
Feb 28 06:12:57 np0005634017 podman[409197]: 2026-02-28 11:12:57.840923374 +0000 UTC m=+0.684177770 container remove 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 06:12:57 np0005634017 systemd[1]: libpod-conmon-88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4.scope: Deactivated successfully.
Feb 28 06:12:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:12:57.921 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:12:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:12:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:12:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:12:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:12:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.353237708 +0000 UTC m=+0.054792373 container create b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 06:12:58 np0005634017 systemd[1]: Started libpod-conmon-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope.
Feb 28 06:12:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.33038459 +0000 UTC m=+0.031939275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.43598422 +0000 UTC m=+0.137538885 container init b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.444492361 +0000 UTC m=+0.146047036 container start b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.448892946 +0000 UTC m=+0.150447611 container attach b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:12:58 np0005634017 systemd[1]: libpod-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope: Deactivated successfully.
Feb 28 06:12:58 np0005634017 tender_solomon[409326]: 167 167
Feb 28 06:12:58 np0005634017 conmon[409326]: conmon b17e4dab1da1839f1ba0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope/container/memory.events
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.452463657 +0000 UTC m=+0.154018332 container died b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 28 06:12:58 np0005634017 systemd[1]: var-lib-containers-storage-overlay-19a6163ba3ce9e025b39d1d489c045055f1a3675d8a9a4869fca33aedca3b910-merged.mount: Deactivated successfully.
Feb 28 06:12:58 np0005634017 podman[409310]: 2026-02-28 11:12:58.497754409 +0000 UTC m=+0.199309084 container remove b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:12:58 np0005634017 systemd[1]: libpod-conmon-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope: Deactivated successfully.
Feb 28 06:12:58 np0005634017 podman[409350]: 2026-02-28 11:12:58.683425525 +0000 UTC m=+0.051761646 container create f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 06:12:58 np0005634017 systemd[1]: Started libpod-conmon-f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89.scope.
Feb 28 06:12:58 np0005634017 podman[409350]: 2026-02-28 11:12:58.657221663 +0000 UTC m=+0.025557874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:12:58 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:12:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:58 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:58 np0005634017 podman[409350]: 2026-02-28 11:12:58.800125709 +0000 UTC m=+0.168461900 container init f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 06:12:58 np0005634017 podman[409350]: 2026-02-28 11:12:58.812969543 +0000 UTC m=+0.181305664 container start f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 06:12:58 np0005634017 podman[409350]: 2026-02-28 11:12:58.816861643 +0000 UTC m=+0.185197844 container attach f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]: {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:    "0": [
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:        {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "devices": [
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "/dev/loop3"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            ],
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_name": "ceph_lv0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_size": "21470642176",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "name": "ceph_lv0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "tags": {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cluster_name": "ceph",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.crush_device_class": "",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.encrypted": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.objectstore": "bluestore",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osd_id": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.type": "block",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.vdo": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.with_tpm": "0"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            },
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "type": "block",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "vg_name": "ceph_vg0"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:        }
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:    ],
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:    "1": [
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:        {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "devices": [
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "/dev/loop4"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            ],
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_name": "ceph_lv1",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_size": "21470642176",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "name": "ceph_lv1",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "tags": {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cluster_name": "ceph",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.crush_device_class": "",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.encrypted": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.objectstore": "bluestore",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osd_id": "1",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.type": "block",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.vdo": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.with_tpm": "0"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            },
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "type": "block",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "vg_name": "ceph_vg1"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:        }
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:    ],
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:    "2": [
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:        {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "devices": [
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "/dev/loop5"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            ],
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_name": "ceph_lv2",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_size": "21470642176",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "name": "ceph_lv2",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "tags": {
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.cluster_name": "ceph",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.crush_device_class": "",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.encrypted": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.objectstore": "bluestore",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osd_id": "2",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.type": "block",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.vdo": "0",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:                "ceph.with_tpm": "0"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            },
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "type": "block",
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:            "vg_name": "ceph_vg2"
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:        }
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]:    ]
Feb 28 06:12:59 np0005634017 nostalgic_leavitt[409367]: }
Feb 28 06:12:59 np0005634017 systemd[1]: libpod-f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89.scope: Deactivated successfully.
Feb 28 06:12:59 np0005634017 podman[409350]: 2026-02-28 11:12:59.145706753 +0000 UTC m=+0.514042904 container died f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:12:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b-merged.mount: Deactivated successfully.
Feb 28 06:12:59 np0005634017 podman[409350]: 2026-02-28 11:12:59.178377317 +0000 UTC m=+0.546713428 container remove f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 06:12:59 np0005634017 systemd[1]: libpod-conmon-f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89.scope: Deactivated successfully.
Feb 28 06:12:59 np0005634017 nova_compute[243452]: 2026-02-28 11:12:59.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:12:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.599939042 +0000 UTC m=+0.056444319 container create 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:12:59 np0005634017 systemd[1]: Started libpod-conmon-89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae.scope.
Feb 28 06:12:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.575264713 +0000 UTC m=+0.031770080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.679254927 +0000 UTC m=+0.135760234 container init 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.684602899 +0000 UTC m=+0.141108186 container start 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.687305725 +0000 UTC m=+0.143811032 container attach 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb 28 06:12:59 np0005634017 vigilant_kilby[409469]: 167 167
Feb 28 06:12:59 np0005634017 systemd[1]: libpod-89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae.scope: Deactivated successfully.
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.690339991 +0000 UTC m=+0.146845278 container died 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 06:12:59 np0005634017 systemd[1]: var-lib-containers-storage-overlay-f9a28792cc7adbd2368211bc991a0451b088fa2d776c3029a19cd8e54ab40744-merged.mount: Deactivated successfully.
Feb 28 06:12:59 np0005634017 podman[409452]: 2026-02-28 11:12:59.72313305 +0000 UTC m=+0.179638357 container remove 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 06:12:59 np0005634017 systemd[1]: libpod-conmon-89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae.scope: Deactivated successfully.
Feb 28 06:12:59 np0005634017 podman[409492]: 2026-02-28 11:12:59.853094759 +0000 UTC m=+0.054183545 container create a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:12:59 np0005634017 systemd[1]: Started libpod-conmon-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope.
Feb 28 06:12:59 np0005634017 podman[409492]: 2026-02-28 11:12:59.823287075 +0000 UTC m=+0.024375921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:12:59 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:12:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:59 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:12:59 np0005634017 podman[409492]: 2026-02-28 11:12:59.974687901 +0000 UTC m=+0.175776747 container init a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 06:12:59 np0005634017 podman[409492]: 2026-02-28 11:12:59.984074467 +0000 UTC m=+0.185163263 container start a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 06:12:59 np0005634017 podman[409492]: 2026-02-28 11:12:59.987880735 +0000 UTC m=+0.188969631 container attach a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:13:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:13:00 np0005634017 lvm[409584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:13:00 np0005634017 lvm[409584]: VG ceph_vg0 finished
Feb 28 06:13:00 np0005634017 lvm[409587]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:13:00 np0005634017 lvm[409587]: VG ceph_vg1 finished
Feb 28 06:13:00 np0005634017 lvm[409588]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:13:00 np0005634017 lvm[409588]: VG ceph_vg2 finished
Feb 28 06:13:00 np0005634017 happy_liskov[409508]: {}
Feb 28 06:13:00 np0005634017 systemd[1]: libpod-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope: Deactivated successfully.
Feb 28 06:13:00 np0005634017 systemd[1]: libpod-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope: Consumed 1.128s CPU time.
Feb 28 06:13:00 np0005634017 podman[409492]: 2026-02-28 11:13:00.79593568 +0000 UTC m=+0.997024486 container died a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 06:13:00 np0005634017 systemd[1]: var-lib-containers-storage-overlay-c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066-merged.mount: Deactivated successfully.
Feb 28 06:13:00 np0005634017 podman[409492]: 2026-02-28 11:13:00.840859252 +0000 UTC m=+1.041948048 container remove a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:13:00 np0005634017 systemd[1]: libpod-conmon-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope: Deactivated successfully.
Feb 28 06:13:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:13:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:13:00 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:13:00 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:13:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:13:01 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:13:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:04 np0005634017 nova_compute[243452]: 2026-02-28 11:13:04.334 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:13:04 np0005634017 nova_compute[243452]: 2026-02-28 11:13:04.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:13:04 np0005634017 nova_compute[243452]: 2026-02-28 11:13:04.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:13:04 np0005634017 nova_compute[243452]: 2026-02-28 11:13:04.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:13:04 np0005634017 nova_compute[243452]: 2026-02-28 11:13:04.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:04 np0005634017 nova_compute[243452]: 2026-02-28 11:13:04.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:13:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:09 np0005634017 podman[409630]: 2026-02-28 11:13:09.173186051 +0000 UTC m=+0.102487292 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 06:13:09 np0005634017 podman[409629]: 2026-02-28 11:13:09.182374021 +0000 UTC m=+0.113659068 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 06:13:09 np0005634017 nova_compute[243452]: 2026-02-28 11:13:09.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:13:10 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1358 writes, 6219 keys, 1358 commit groups, 1.0 writes per commit group, ingest: 8.88 MB, 0.01 MB/s#012Interval WAL: 1358 writes, 1358 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     88.7      0.95              0.23        49    0.019       0      0       0.0       0.0#012  L6      1/0   10.58 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1    171.6    146.6      2.93              1.23        48    0.061    323K    25K       0.0       0.0#012 Sum      1/0   10.58 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1    129.7    132.5      3.88              1.46        97    0.040    323K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6    175.4    179.6      0.33              0.20        10    0.033     44K   2523       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    171.6    146.6      2.93              1.23        48    0.061    323K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     89.1      0.94              0.23        48    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.082, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.50 GB write, 0.09 MB/s write, 0.49 GB read, 0.08 MB/s read, 3.9 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 58.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000559 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3937,56.25 MB,18.5046%) FilterBlock(98,903.48 KB,0.290233%) IndexBlock(98,1.44 MB,0.474804%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 28 06:13:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:12 np0005634017 nova_compute[243452]: 2026-02-28 11:13:12.359 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:13 np0005634017 nova_compute[243452]: 2026-02-28 11:13:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:14 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:14 np0005634017 nova_compute[243452]: 2026-02-28 11:13:14.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:14 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 28 06:13:16 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.375 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.377 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:16 np0005634017 nova_compute[243452]: 2026-02-28 11:13:16.377 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 28 06:13:17 np0005634017 nova_compute[243452]: 2026-02-28 11:13:17.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:18 np0005634017 nova_compute[243452]: 2026-02-28 11:13:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:18 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:19 np0005634017 nova_compute[243452]: 2026-02-28 11:13:19.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:19 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:20 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:22 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.348 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.350 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.350 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:13:23 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:13:23 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655902496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:13:23 np0005634017 nova_compute[243452]: 2026-02-28 11:13:23.911 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.112 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.114 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.114 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.114 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.185 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.185 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.204 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 28 06:13:24 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.370 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:24 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 06:13:24 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033646151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.762 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.769 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.789 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.791 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 28 06:13:24 np0005634017 nova_compute[243452]: 2026-02-28 11:13:24.792 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:13:26 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:28 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:13:29
Feb 28 06:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 06:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 06:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'vms', 'volumes', 'cephfs.cephfs.data']
Feb 28 06:13:29 np0005634017 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 06:13:29 np0005634017 nova_compute[243452]: 2026-02-28 11:13:29.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:29 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:29 np0005634017 nova_compute[243452]: 2026-02-28 11:13:29.792 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:13:30 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:13:31 np0005634017 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 06:13:32 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:34 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:34 np0005634017 nova_compute[243452]: 2026-02-28 11:13:34.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:34 np0005634017 nova_compute[243452]: 2026-02-28 11:13:34.375 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:34 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:36 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:38 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:39 np0005634017 nova_compute[243452]: 2026-02-28 11:13:39.375 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:39 np0005634017 nova_compute[243452]: 2026-02-28 11:13:39.376 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:39 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:40 np0005634017 podman[409722]: 2026-02-28 11:13:40.168562064 +0000 UTC m=+0.106008292 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 28 06:13:40 np0005634017 podman[409723]: 2026-02-28 11:13:40.169268844 +0000 UTC m=+0.097135960 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 06:13:40 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 06:13:41 np0005634017 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 06:13:42 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:44 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:44 np0005634017 nova_compute[243452]: 2026-02-28 11:13:44.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:13:44 np0005634017 nova_compute[243452]: 2026-02-28 11:13:44.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:13:44 np0005634017 nova_compute[243452]: 2026-02-28 11:13:44.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:13:44 np0005634017 nova_compute[243452]: 2026-02-28 11:13:44.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:13:44 np0005634017 nova_compute[243452]: 2026-02-28 11:13:44.405 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:44 np0005634017 nova_compute[243452]: 2026-02-28 11:13:44.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:13:44 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 06:13:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3613126888' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 06:13:45 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 06:13:45 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3613126888' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 06:13:46 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:48 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:49 np0005634017 nova_compute[243452]: 2026-02-28 11:13:49.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:49 np0005634017 nova_compute[243452]: 2026-02-28 11:13:49.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:49 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:50 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:52 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:53 np0005634017 systemd-logind[815]: New session 58 of user zuul.
Feb 28 06:13:53 np0005634017 systemd[1]: Started Session 58 of User zuul.
Feb 28 06:13:54 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:54 np0005634017 nova_compute[243452]: 2026-02-28 11:13:54.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:13:54 np0005634017 nova_compute[243452]: 2026-02-28 11:13:54.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 28 06:13:54 np0005634017 nova_compute[243452]: 2026-02-28 11:13:54.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 28 06:13:54 np0005634017 nova_compute[243452]: 2026-02-28 11:13:54.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:13:54 np0005634017 nova_compute[243452]: 2026-02-28 11:13:54.440 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:54 np0005634017 nova_compute[243452]: 2026-02-28 11:13:54.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 28 06:13:54 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:13:56 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23270 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:13:56 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:56 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23272 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:13:57 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 06:13:57 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471020115' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 06:13:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:13:57.922 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 28 06:13:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:13:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 28 06:13:57 np0005634017 ovn_metadata_agent[156634]: 2026-02-28 11:13:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 28 06:13:58 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:13:59 np0005634017 nova_compute[243452]: 2026-02-28 11:13:59.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:59 np0005634017 nova_compute[243452]: 2026-02-28 11:13:59.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:13:59 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:14:00 np0005634017 ovs-vsctl[410053]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 06:14:00 np0005634017 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 06:14:01 np0005634017 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 28 06:14:01 np0005634017 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 28 06:14:01 np0005634017 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 28 06:14:01 np0005634017 podman[410371]: 2026-02-28 11:14:01.632146868 +0000 UTC m=+0.086340794 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 06:14:01 np0005634017 podman[410371]: 2026-02-28 11:14:01.709621301 +0000 UTC m=+0.163815187 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 06:14:01 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: cache status {prefix=cache status} (starting...)
Feb 28 06:14:01 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: client ls {prefix=client ls} (starting...)
Feb 28 06:14:01 np0005634017 lvm[410538]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:14:01 np0005634017 lvm[410538]: VG ceph_vg2 finished
Feb 28 06:14:01 np0005634017 lvm[410553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:14:01 np0005634017 lvm[410553]: VG ceph_vg0 finished
Feb 28 06:14:02 np0005634017 lvm[410558]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:14:02 np0005634017 lvm[410558]: VG ceph_vg1 finished
Feb 28 06:14:02 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:02 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:14:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:02 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:14:02 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:02 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: damage ls {prefix=damage ls} (starting...)
Feb 28 06:14:02 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump loads {prefix=dump loads} (starting...)
Feb 28 06:14:02 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 28 06:14:02 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23278 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:02 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 28 06:14:03 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526184937' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23282 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:03 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 06:14:03 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572024949' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.592021803 +0000 UTC m=+0.038835601 container create 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 06:14:03 np0005634017 systemd[1]: Started libpod-conmon-20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981.scope.
Feb 28 06:14:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.573454577 +0000 UTC m=+0.020268405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.675837306 +0000 UTC m=+0.122651114 container init 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.687764203 +0000 UTC m=+0.134577991 container start 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.691028766 +0000 UTC m=+0.137842584 container attach 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 06:14:03 np0005634017 stoic_cannon[411029]: 167 167
Feb 28 06:14:03 np0005634017 systemd[1]: libpod-20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981.scope: Deactivated successfully.
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.698200859 +0000 UTC m=+0.145014647 container died 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 06:14:03 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23286 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:03 np0005634017 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:14:03 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:14:03.724+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:14:03 np0005634017 systemd[1]: var-lib-containers-storage-overlay-2ffa6d541f37c065a567ddb7f42ff6104dbab56a72ba0ca407558db2efe2565a-merged.mount: Deactivated successfully.
Feb 28 06:14:03 np0005634017 podman[411009]: 2026-02-28 11:14:03.75015465 +0000 UTC m=+0.196968438 container remove 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:14:03 np0005634017 systemd[1]: libpod-conmon-20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981.scope: Deactivated successfully.
Feb 28 06:14:03 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: ops {prefix=ops} (starting...)
Feb 28 06:14:03 np0005634017 podman[411083]: 2026-02-28 11:14:03.900906907 +0000 UTC m=+0.046738364 container create 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 28 06:14:03 np0005634017 systemd[1]: Started libpod-conmon-9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93.scope.
Feb 28 06:14:03 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024216684' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 28 06:14:03 np0005634017 podman[411083]: 2026-02-28 11:14:03.878117282 +0000 UTC m=+0.023948759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:14:03 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:14:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:03 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:04 np0005634017 podman[411083]: 2026-02-28 11:14:04.016720226 +0000 UTC m=+0.162551733 container init 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 06:14:04 np0005634017 podman[411083]: 2026-02-28 11:14:04.023285312 +0000 UTC m=+0.169116769 container start 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 06:14:04 np0005634017 podman[411083]: 2026-02-28 11:14:04.026923095 +0000 UTC m=+0.172754582 container attach 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2485113866' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 28 06:14:04 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:04 np0005634017 vibrant_dubinsky[411120]: --> passed data devices: 0 physical, 3 LVM
Feb 28 06:14:04 np0005634017 vibrant_dubinsky[411120]: --> All data devices are unavailable
Feb 28 06:14:04 np0005634017 nova_compute[243452]: 2026-02-28 11:14:04.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 06:14:04 np0005634017 nova_compute[243452]: 2026-02-28 11:14:04.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 06:14:04 np0005634017 systemd[1]: libpod-9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93.scope: Deactivated successfully.
Feb 28 06:14:04 np0005634017 podman[411083]: 2026-02-28 11:14:04.463947767 +0000 UTC m=+0.609779224 container died 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.472636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244472676, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1705, "num_deletes": 250, "total_data_size": 2782655, "memory_usage": 2827376, "flush_reason": "Manual Compaction"}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244479279, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1605140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68091, "largest_seqno": 69795, "table_properties": {"data_size": 1599457, "index_size": 2758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14980, "raw_average_key_size": 20, "raw_value_size": 1586801, "raw_average_value_size": 2197, "num_data_blocks": 127, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772277065, "oldest_key_time": 1772277065, "file_creation_time": 1772277244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 6714 microseconds, and 3645 cpu microseconds.
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.479329) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1605140 bytes OK
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.479375) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481136) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481154) EVENT_LOG_v1 {"time_micros": 1772277244481149, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481181) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2775318, prev total WAL file size 2775318, number of live WAL files 2.
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373532' seq:72057594037927935, type:22 .. '6D6772737461740033303033' seq:0, type:0; will stop at (end)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1567KB)], [161(10MB)]
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244481950, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12703463, "oldest_snapshot_seqno": -1}
Feb 28 06:14:04 np0005634017 systemd[1]: var-lib-containers-storage-overlay-ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69-merged.mount: Deactivated successfully.
Feb 28 06:14:04 np0005634017 podman[411083]: 2026-02-28 11:14:04.505059101 +0000 UTC m=+0.650890558 container remove 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:14:04 np0005634017 systemd[1]: libpod-conmon-9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93.scope: Deactivated successfully.
Feb 28 06:14:04 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: session ls {prefix=session ls} (starting...)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9001 keys, 10477556 bytes, temperature: kUnknown
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244547941, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10477556, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10421921, "index_size": 32034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 234304, "raw_average_key_size": 26, "raw_value_size": 10265858, "raw_average_value_size": 1140, "num_data_blocks": 1243, "num_entries": 9001, "num_filter_entries": 9001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772277244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.548237) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10477556 bytes
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.549442) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.3 rd, 158.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(14.4) write-amplify(6.5) OK, records in: 9429, records dropped: 428 output_compression: NoCompression
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.549458) EVENT_LOG_v1 {"time_micros": 1772277244549450, "job": 100, "event": "compaction_finished", "compaction_time_micros": 66066, "compaction_time_cpu_micros": 30862, "output_level": 6, "num_output_files": 1, "total_output_size": 10477556, "num_input_records": 9429, "num_output_records": 9001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244549676, "job": 100, "event": "table_file_deletion", "file_number": 163}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244550997, "job": 100, "event": "table_file_deletion", "file_number": 161}
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332890894' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 28 06:14:04 np0005634017 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: status {prefix=status} (starting...)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 06:14:04 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512350153' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 06:14:04 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23296 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:05.007252788 +0000 UTC m=+0.045968722 container create 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 06:14:05 np0005634017 systemd[1]: Started libpod-conmon-5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180.scope.
Feb 28 06:14:05 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:04.990773822 +0000 UTC m=+0.029489846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:05.086324927 +0000 UTC m=+0.125040921 container init 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:05.093628284 +0000 UTC m=+0.132344218 container start 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:05.096865965 +0000 UTC m=+0.135581949 container attach 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:14:05 np0005634017 thirsty_jackson[411342]: 167 167
Feb 28 06:14:05 np0005634017 systemd[1]: libpod-5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180.scope: Deactivated successfully.
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:05.099442298 +0000 UTC m=+0.138158232 container died 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:14:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-62afddbcf13730c1bae4f3be31bbae3a697e7c1b0ced1fa7b08d641ac6fd38d1-merged.mount: Deactivated successfully.
Feb 28 06:14:05 np0005634017 podman[411320]: 2026-02-28 11:14:05.138629358 +0000 UTC m=+0.177345302 container remove 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:14:05 np0005634017 systemd[1]: libpod-conmon-5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180.scope: Deactivated successfully.
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.270581402 +0000 UTC m=+0.037552434 container create 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:14:05 np0005634017 systemd[1]: Started libpod-conmon-103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8.scope.
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.251893133 +0000 UTC m=+0.018864205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:14:05 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:14:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 06:14:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241801687' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 06:14:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:05 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.370226883 +0000 UTC m=+0.137197985 container init 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.377231752 +0000 UTC m=+0.144202784 container start 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.3799875 +0000 UTC m=+0.146958532 container attach 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 06:14:05 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23300 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]: {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:    "0": [
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:        {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "devices": [
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "/dev/loop3"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            ],
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_name": "ceph_lv0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_size": "21470642176",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "name": "ceph_lv0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "tags": {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cluster_name": "ceph",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.crush_device_class": "",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.encrypted": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.objectstore": "bluestore",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osd_id": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.type": "block",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.vdo": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.with_tpm": "0"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            },
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "type": "block",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "vg_name": "ceph_vg0"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:        }
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:    ],
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:    "1": [
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:        {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "devices": [
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "/dev/loop4"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            ],
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_name": "ceph_lv1",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_size": "21470642176",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "name": "ceph_lv1",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "tags": {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cluster_name": "ceph",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.crush_device_class": "",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.encrypted": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.objectstore": "bluestore",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osd_id": "1",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.type": "block",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.vdo": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.with_tpm": "0"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            },
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "type": "block",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "vg_name": "ceph_vg1"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:        }
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:    ],
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:    "2": [
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:        {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "devices": [
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "/dev/loop5"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            ],
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_name": "ceph_lv2",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_size": "21470642176",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "name": "ceph_lv2",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "tags": {
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cephx_lockbox_secret": "",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.cluster_name": "ceph",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.crush_device_class": "",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.encrypted": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.objectstore": "bluestore",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osd_id": "2",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.type": "block",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.vdo": "0",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:                "ceph.with_tpm": "0"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            },
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "type": "block",
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:            "vg_name": "ceph_vg2"
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:        }
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]:    ]
Feb 28 06:14:05 np0005634017 recursing_cannon[411406]: }
Feb 28 06:14:05 np0005634017 systemd[1]: libpod-103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8.scope: Deactivated successfully.
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.697058526 +0000 UTC m=+0.464029588 container died 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 28 06:14:05 np0005634017 systemd[1]: var-lib-containers-storage-overlay-49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493-merged.mount: Deactivated successfully.
Feb 28 06:14:05 np0005634017 podman[411388]: 2026-02-28 11:14:05.750830138 +0000 UTC m=+0.517801170 container remove 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 06:14:05 np0005634017 systemd[1]: libpod-conmon-103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8.scope: Deactivated successfully.
Feb 28 06:14:05 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 06:14:05 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634838279' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590331172' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.25492937 +0000 UTC m=+0.038689777 container create a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 06:14:06 np0005634017 systemd[1]: Started libpod-conmon-a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448.scope.
Feb 28 06:14:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322899888' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.237600479 +0000 UTC m=+0.021360926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.342866659 +0000 UTC m=+0.126627116 container init a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.351051511 +0000 UTC m=+0.134811938 container start a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.354629262 +0000 UTC m=+0.138389689 container attach a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 06:14:06 np0005634017 keen_grothendieck[411614]: 167 167
Feb 28 06:14:06 np0005634017 systemd[1]: libpod-a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448.scope: Deactivated successfully.
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.358053069 +0000 UTC m=+0.141813506 container died a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 06:14:06 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:06 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5c6de90612406524c019bc44d2479d972f7f93b8d52e4c19cd86b5a21be7105e-merged.mount: Deactivated successfully.
Feb 28 06:14:06 np0005634017 podman[411596]: 2026-02-28 11:14:06.391040553 +0000 UTC m=+0.174800970 container remove a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 06:14:06 np0005634017 systemd[1]: libpod-conmon-a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448.scope: Deactivated successfully.
Feb 28 06:14:06 np0005634017 podman[411663]: 2026-02-28 11:14:06.52938528 +0000 UTC m=+0.041805305 container create 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812442866' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 28 06:14:06 np0005634017 systemd[1]: Started libpod-conmon-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope.
Feb 28 06:14:06 np0005634017 systemd[1]: Started libcrun container.
Feb 28 06:14:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:06 np0005634017 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 06:14:06 np0005634017 podman[411663]: 2026-02-28 11:14:06.511219165 +0000 UTC m=+0.023639230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 06:14:06 np0005634017 podman[411663]: 2026-02-28 11:14:06.614622013 +0000 UTC m=+0.127042048 container init 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:14:06 np0005634017 podman[411663]: 2026-02-28 11:14:06.62018008 +0000 UTC m=+0.132600105 container start 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 06:14:06 np0005634017 podman[411663]: 2026-02-28 11:14:06.624363798 +0000 UTC m=+0.136783853 container attach 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 28 06:14:06 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399654361' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 06:14:07 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23312 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:07 np0005634017 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 06:14:07 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:14:07.044+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 06:14:07 np0005634017 lvm[411836]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 06:14:07 np0005634017 lvm[411836]: VG ceph_vg0 finished
Feb 28 06:14:07 np0005634017 lvm[411837]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 06:14:07 np0005634017 lvm[411837]: VG ceph_vg1 finished
Feb 28 06:14:07 np0005634017 lvm[411839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 06:14:07 np0005634017 lvm[411839]: VG ceph_vg2 finished
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772552681' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 06:14:07 np0005634017 awesome_bassi[411683]: {}
Feb 28 06:14:07 np0005634017 podman[411663]: 2026-02-28 11:14:07.401295664 +0000 UTC m=+0.913715679 container died 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 06:14:07 np0005634017 systemd[1]: libpod-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope: Deactivated successfully.
Feb 28 06:14:07 np0005634017 systemd[1]: libpod-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope: Consumed 1.156s CPU time.
Feb 28 06:14:07 np0005634017 systemd[1]: var-lib-containers-storage-overlay-5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26-merged.mount: Deactivated successfully.
Feb 28 06:14:07 np0005634017 podman[411663]: 2026-02-28 11:14:07.48065289 +0000 UTC m=+0.993072905 container remove 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 06:14:07 np0005634017 systemd[1]: libpod-conmon-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope: Deactivated successfully.
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 28 06:14:07 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407995255' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 28 06:14:07 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/759175547' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 28 06:14:08 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23322 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768becfa40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557685cc41c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x55768bece1c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576885be540
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768800ddc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419612 data_alloc: 218103808 data_used: 19256325
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x4b8a9b1/0x4d1c000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686d4ec40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be8c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.198194504s of 12.342607498s, submitted: 38
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768c315880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861d3c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557689a016c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be540
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686c20c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395284 data_alloc: 218103808 data_used: 18715653
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.364843369s of 14.394786835s, submitted: 12
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310943744 unmapped: 52248576 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9930000/0x0/0x4ffc00000, data 0x503b98e/0x51cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476420 data_alloc: 218103808 data_used: 21910498
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476692 data_alloc: 218103808 data_used: 21918690
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576940e1180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768800d500
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.993345261s of 14.355683327s, submitted: 90
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385972 data_alloc: 218103808 data_used: 18719714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x5576886b2c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9974000/0x0/0x4ffc00000, data 0x464e98e/0x47df000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383976 data_alloc: 218103808 data_used: 18719714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311476224 unmapped: 51716096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557687b436c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686197180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576885bfa40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768c315a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557687aa1340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.629673004s of 19.488275528s, submitted: 12
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d5180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428832 data_alloc: 218103808 data_used: 18719714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.102824211s of 13.107093811s, submitted: 1
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.839479446s of 14.852085114s, submitted: 2
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768afca380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576886b3880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886e16c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x5576869f3340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x557687aa0a80
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3501258 data_alloc: 218103808 data_used: 26991586
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e99aa000/0x0/0x4ffc00000, data 0x4fd198e/0x5162000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576887228c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504923 data_alloc: 218103808 data_used: 26992098
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311304192 unmapped: 51888128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 51200000 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.325185776s of 19.358276367s, submitted: 15
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313982976 unmapped: 49209344 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e934f000/0x0/0x4ffc00000, data 0x56249b1/0x57b6000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3563869 data_alloc: 218103808 data_used: 27722722
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9314000/0x0/0x4ffc00000, data 0x56579b1/0x57e9000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3558653 data_alloc: 218103808 data_used: 27722722
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686111500
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x55768457b340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165890694s of 11.501284599s, submitted: 68
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686d01c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b46000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3495179 data_alloc: 218103808 data_used: 26991586
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768d090000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768becfdc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886576c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768bc58380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768a539dc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3f880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x55768a5381c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.512207031s of 37.553077698s, submitted: 22
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768d75ddc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d5a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b000 session 0x5576861bea80
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576880f7340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576880f7a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441190 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557686d00000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x55768a538380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557688a11340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886b2fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4200.2 total, 600.0 interval
                                              Cumulative writes: 36K writes, 142K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                              Cumulative WAL: 36K writes, 13K syncs, 2.71 writes per sync, written: 0.14 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 3550 writes, 14K keys, 3550 commit groups, 1.0 writes per commit group, ingest: 18.05 MB, 0.03 MB/s
                                              Interval WAL: 3550 writes, 1394 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.287157059s of 17.387201309s, submitted: 32
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312320000 unmapped: 50872320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519878 data_alloc: 218103808 data_used: 23937983
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f4000/0x0/0x4ffc00000, data 0x53859f0/0x5517000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.962099075s of 13.174759865s, submitted: 92
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x55768becf340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d41c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686cadc00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71c00 session 0x55768bc58000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4700
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a10fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538984 data_alloc: 218103808 data_used: 23942079
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x5576861a88c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x557688a11500
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x55768becf180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312688640 unmapped: 50503680 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.755399704s of 16.796745300s, submitted: 7
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314982400 unmapped: 48209920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e909c000/0x0/0x4ffc00000, data 0x58dda00/0x5a70000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314310656 unmapped: 48881664 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580546 data_alloc: 218103808 data_used: 26796991
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580562 data_alloc: 218103808 data_used: 26796991
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314335232 unmapped: 48857088 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557686c20fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768a539c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580082 data_alloc: 218103808 data_used: 26858431
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200057983s of 12.832662582s, submitted: 39
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886f8000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x5388a00/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314359808 unmapped: 48832512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314408960 unmapped: 48783360 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526151 data_alloc: 218103808 data_used: 24003519
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686197c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557688656c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313155584 unmapped: 50036736 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557689a016c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576940e01c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768c315a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688723180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557687b43340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.065399170s of 39.311141968s, submitted: 108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc4afc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886b2540
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557687aa1c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886e1c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576885be700
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea31b000/0x0/0x4ffc00000, data 0x466098e/0x47f1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a108c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686cac8c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 49864704 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688656a80
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313647104 unmapped: 49545216 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.853551865s of 17.863443375s, submitted: 3
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 49528832 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1fc000/0x0/0x4ffc00000, data 0x477e99e/0x4910000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442903 data_alloc: 218103808 data_used: 18856797
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4800 session 0x55768bece1c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686111500
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576869f3180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576886e16c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.000139236s of 17.048984528s, submitted: 20
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314818560 unmapped: 48373760 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886b2e00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576886b2c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576885bfa40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768c315180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576861d3c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686d4ea80
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313761792 unmapped: 51044352 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.758279800s of 17.839538574s, submitted: 13
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316399616 unmapped: 48406528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 46219264 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3559463 data_alloc: 218103808 data_used: 25188701
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9747000/0x0/0x4ffc00000, data 0x522d99e/0x53bf000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3556031 data_alloc: 218103808 data_used: 25188701
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x5576880f7340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.849447250s of 12.123903275s, submitted: 71
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576861be8c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452972 data_alloc: 218103808 data_used: 18856797
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x55768d090380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x5576869f3340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688656c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a108c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686c21880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688a116c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688722c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.027751923s of 42.075328827s, submitted: 18
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a10000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686cacfc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688657a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688a10540
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686d00fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768d090700
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x55768800d6c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x5576886e0c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.067780495s of 10.190921783s, submitted: 22
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.957121849s of 10.000660896s, submitted: 20
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [0,1,0,0,0,8])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318406656 unmapped: 46399488 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599328 data_alloc: 218103808 data_used: 25293661
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3595064 data_alloc: 218103808 data_used: 25293661
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.003738 took=0.000053s
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949523926s of 10.245022774s, submitted: 88
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x55768cc3fc00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576861a81c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3596956 data_alloc: 218103808 data_used: 25334621
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557685cc4c40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557686155400 session 0x557687b42700
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768c073800 session 0x55768cc3f880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x5576869e1400 session 0x55768c3148c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 323772416 unmapped: 41033728 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768da43400 session 0x55768800ddc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557685ffa800 session 0x55768457b340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 43384832 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 285 handle_osd_map epochs [286,287], i have 285, src has [1,287]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557686155400 session 0x557686cac8c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 43343872 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x5576869e1400 session 0x5576886d4000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768c073800 session 0x5576886f8e00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768d0e9400 session 0x5576861116c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3761780 data_alloc: 234881024 data_used: 35955647
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 43261952 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.589632988s of 10.994457245s, submitted: 34
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557685ffa800 session 0x557687aa1340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574579 data_alloc: 218103808 data_used: 18723775
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [287,288], i have 288, src has [1,288]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fde000/0x0/0x4ffc00000, data 0x59957b3/0x5b2c000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576885bfa40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x557686196a80
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c035800 session 0x5576940e1340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578137 data_alloc: 218103808 data_used: 18727934
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x5576886d5a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x5576886d41c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155000 session 0x55768a5388c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x55768d090e00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 47890432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x557689a00fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768a538380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x55768cc4b180
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557687e8bc00 session 0x557686d01880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.653596878s of 11.167081833s, submitted: 54
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557685cc5dc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e73eb000/0x0/0x4ffc00000, data 0x75887e6/0x7721000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789035 data_alloc: 234881024 data_used: 29690303
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 289 ms_handle_reset con 0x55768c073800 session 0x55768d090380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687671 data_alloc: 234881024 data_used: 21415871
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8730000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.615905762s of 11.692889214s, submitted: 45
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699421 data_alloc: 234881024 data_used: 22761407
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318464000 unmapped: 78028800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3716445 data_alloc: 234881024 data_used: 24364991
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686110000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x557688a108c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768672c000 session 0x5576886f8000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.560314178s of 47.608993530s, submitted: 31
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x5576886e1c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310362112 unmapped: 86130688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686cac1c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768bece380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x5576880f7a40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc ms_handle_reset ms_handle_reset con 0x55768bb5f000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc handle_mgr_configure stats_period=5
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576886f8000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x557687aa1340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523018 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310378496 unmapped: 86114304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e9a78000/0x0/0x4ffc00000, data 0x4efadd0/0x5094000, compress 0x0/0x0/0x0, omap 0x4fe69, meta 0x110a0197), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557686d00380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576886b2fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478714 data_alloc: 218103808 data_used: 9620380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea302000/0x0/0x4ffc00000, data 0x4670dd0/0x480a000, compress 0x0/0x0/0x0, omap 0x4fed4, meta 0x110a012c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576861116c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.864528656s of 39.085803986s, submitted: 59
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.823881149s of 30.865018845s, submitted: 24
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 291 ms_handle_reset con 0x5576869e1400 session 0x55768cc4a540
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 ms_handle_reset con 0x55768d0abc00 session 0x557686d4f880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315342848 unmapped: 81149952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327555 data_alloc: 234881024 data_used: 17875868
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 128.169219971s of 128.219329834s, submitted: 33
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315392000 unmapped: 81100800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 293 ms_handle_reset con 0x557685ffa800 session 0x557688a10540
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217201 data_alloc: 218103808 data_used: 4309916
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed31f000/0x0/0x4ffc00000, data 0x1651ff9/0x17ed000, compress 0x0/0x0/0x0, omap 0x50d73, meta 0x1109f28d), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 294 ms_handle_reset con 0x557686155400 session 0x5576861a81c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.580028534s of 12.689705849s, submitted: 49
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102328 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 93306880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 93290496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 ms_handle_reset con 0x557686258000 session 0x55768d75c380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.2 total, 600.0 interval
Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1533 writes, 6281 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s
Interval WAL: 1533 writes, 657 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 114.041397095s of 114.333473206s, submitted: 46
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302129152 unmapped: 94363648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 297 ms_handle_reset con 0x5576869e1400 session 0x55768bc58380
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302153728 unmapped: 94339072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 297 heartbeat osd_stat(store_statfs(0x4eaf80000/0x0/0x4ffc00000, data 0x39e8dc6/0x3b8a000, compress 0x0/0x0/0x0, omap 0x51b9b, meta 0x1109e465), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 94314496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 ms_handle_reset con 0x55768b3c4400 session 0x5576886b28c0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.552754402s of 13.607363701s, submitted: 19
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432744 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302268416 unmapped: 94224384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.988111496s of 22.149505615s, submitted: 90
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 299 ms_handle_reset con 0x557685ffa800 session 0x5576886f8fc0
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eab09000/0x0/0x4ffc00000, data 0x3e5c562/0x4001000, compress 0x0/0x0/0x0, omap 0x51d59, meta 0x1109e2a7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ee309000/0x0/0x4ffc00000, data 0x65c53c/0x800000, compress 0x0/0x0/0x0, omap 0x51dc7, meta 0x1109e239), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299532288 unmapped: 96960512 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299679744 unmapped: 96813056 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 96763904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 96616448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 96526336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 96411648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300097536 unmapped: 96395264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 366.777954102s of 366.843353271s, submitted: 49
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 301 ms_handle_reset con 0x557686155400 session 0x55768c90da40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee306000/0x0/0x4ffc00000, data 0x65fb78/0x804000, compress 0x0/0x0/0x0, omap 0x52b13, meta 0x1109d4ed), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3160433 data_alloc: 218103808 data_used: 189267
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 302 ms_handle_reset con 0x557686258000 session 0x557686cacc40
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182875 data_alloc: 218103808 data_used: 189523
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 303 heartbeat osd_stat(store_statfs(0x4edf72000/0x0/0x4ffc00000, data 0x9f3216/0xb9a000, compress 0x0/0x0/0x0, omap 0x52da5, meta 0x1109d25b), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 304 ms_handle_reset con 0x5576869e1400 session 0x557692b07340
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.334288597s of 11.534142494s, submitted: 81
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.789546967s of 47.797679901s, submitted: 15
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 ms_handle_reset con 0x557687f7fc00 session 0x55768b427880
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151061 data_alloc: 218103808 data_used: 190108
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 96141312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 96133120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 96083968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 95911936 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config show' '{prefix=config show}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'perf dump' '{prefix=perf dump}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'perf schema' '{prefix=perf schema}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 95846400 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 95821824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 95821824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 95821824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 542 writes, 1588 keys, 542 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s#012Interval WAL: 542 writes, 244 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 95789056 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 95780864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc ms_handle_reset ms_handle_reset con 0x55768672c000
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: mgrc handle_mgr_configure stats_period=5
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 95764480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 95764480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 151.716339111s of 151.783905029s, submitted: 44
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 95510528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 95510528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 95477760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301096960 unmapped: 95395840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301096960 unmapped: 95395840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301211648 unmapped: 95281152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301211648 unmapped: 95281152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301211648 unmapped: 95281152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 95764480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 95731712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 95731712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 95731712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 95657984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 95657984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 95657984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300875776 unmapped: 95617024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 95600640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 95600640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 95510528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 95477760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 95477760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 95363072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 95363072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 95363072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301228032 unmapped: 95264768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301228032 unmapped: 95264768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301228032 unmapped: 95264768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301268992 unmapped: 95223808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301277184 unmapped: 95215616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301277184 unmapped: 95215616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 95207424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 95207424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 95207424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301293568 unmapped: 95199232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301293568 unmapped: 95199232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301293568 unmapped: 95199232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301301760 unmapped: 95191040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301301760 unmapped: 95191040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301318144 unmapped: 95174656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301318144 unmapped: 95174656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301318144 unmapped: 95174656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301342720 unmapped: 95150080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301342720 unmapped: 95150080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301342720 unmapped: 95150080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 95141888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 95141888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 95141888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301359104 unmapped: 95133696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301359104 unmapped: 95133696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301375488 unmapped: 95117312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301375488 unmapped: 95117312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301375488 unmapped: 95117312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301400064 unmapped: 95092736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301400064 unmapped: 95092736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301400064 unmapped: 95092736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 95084544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 95076352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 95076352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 95068160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 95059968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 95059968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 95051776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config show' '{prefix=config show}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301326336 unmapped: 95166464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 06:14:08 np0005634017 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}'
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316234082' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 06:14:08 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23326 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 06:14:08 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938010990' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 06:14:09 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23330 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 06:14:09 np0005634017 nova_compute[243452]: 2026-02-28 11:14:09.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:14:09 np0005634017 nova_compute[243452]: 2026-02-28 11:14:09.450 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 06:14:09 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23334 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 06:14:09 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829037130' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 06:14:10 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23336 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 06:14:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827080679' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 06:14:10 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:10 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 06:14:10 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 06:14:10 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1777725137' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 06:14:10 np0005634017 podman[412232]: 2026-02-28 11:14:10.788851825 +0000 UTC m=+0.068825339 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 06:14:10 np0005634017 podman[412231]: 2026-02-28 11:14:10.830928927 +0000 UTC m=+0.116715896 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 06:14:11 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23344 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:14:11 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 28 06:14:11 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481473638' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 28 06:14:11 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23348 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:14:12 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23352 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:14:12 np0005634017 nova_compute[243452]: 2026-02-28 11:14:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 28 06:14:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 28 06:14:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948105503' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 28 06:14:12 np0005634017 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 06:14:12 np0005634017 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 06:14:12 np0005634017 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:14:12 np0005634017 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:14:12.533+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 06:14:12 np0005634017 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 28 06:14:12 np0005634017 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868391455' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.755073547s of 20.982717514s, submitted: 105
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000521500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x56300273ec40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911900 data_alloc: 234881024 data_used: 18535366
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffca0000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000340700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563002574700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8f000 session 0x563002c34c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffba1340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e705e000/0x0/0x4ffc00000, data 0x4439c98/0x45ce000, compress 0x0/0x0/0x0, omap 0x6c44f, meta 0x14563bb1), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857330 data_alloc: 218103808 data_used: 12335972
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630008b28c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x563001d9ae00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 60399616 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630005208c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x562fffb868c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630022f2a80
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 59351040 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x563001d9a380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x562fffce6000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83ab000/0x0/0x4ffc00000, data 0x30e8c26/0x327b000, compress 0x0/0x0/0x0, omap 0x6c897, meta 0x14563769), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698486 data_alloc: 218103808 data_used: 4918002
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 59367424 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.583734512s of 21.009777069s, submitted: 135
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 57458688 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dfd000/0x0/0x4ffc00000, data 0x3699c59/0x382e000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799496 data_alloc: 234881024 data_used: 14492402
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796744 data_alloc: 234881024 data_used: 14496498
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36fbc59/0x3890000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.710806847s of 13.025801659s, submitted: 82
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x5630022a9c00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x56300222da40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 56131584 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646460 data_alloc: 218103808 data_used: 4917490
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x563002574fc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644559 data_alloc: 218103808 data_used: 4913394
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630008c5500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x562fffc01c00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x562fffc016c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x562fffba0000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630004ff6c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x56300273efc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de0000 session 0x562fffce7340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.425588608s of 20.792432785s, submitted: 85
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x563000520380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691067 data_alloc: 218103808 data_used: 4921469
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.685478210s of 11.705242157s, submitted: 10
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 58195968 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804967 data_alloc: 218103808 data_used: 11819133
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804983 data_alloc: 218103808 data_used: 11819133
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805239 data_alloc: 218103808 data_used: 11827325
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.995227814s of 16.248806000s, submitted: 101
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6c00 session 0x56300238a380
Feb 28 06:14:12 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de8c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f000 session 0x56300222c700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x562fffce6380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x56300222da40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3859568 data_alloc: 218103808 data_used: 11827325
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf8400 session 0x562fffce7340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861245 data_alloc: 218103808 data_used: 11827325
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [1])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916669 data_alloc: 234881024 data_used: 17889846
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.772537231s of 15.883464813s, submitted: 28
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916885 data_alloc: 234881024 data_used: 17889846
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 65011712 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,0,0,0,1])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60211200 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970057 data_alloc: 234881024 data_used: 19615286
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204720497s of 10.448961258s, submitted: 70
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970329 data_alloc: 234881024 data_used: 19623478
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300645f000 session 0x563002539340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004299800 session 0x563002c421c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,2])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x563002c43c00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800321 data_alloc: 218103808 data_used: 8706614
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f74000/0x0/0x4ffc00000, data 0x3524bf7/0x36b8000, compress 0x0/0x0/0x0, omap 0x6dfed, meta 0x14562013), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c42e00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630008c4c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x562fffce6540
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630015e6380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98000 session 0x562fffce6fc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x56300222c540
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563000520700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.949302673s of 42.108650208s, submitted: 99
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 63913984 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x5630026df6c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630004fe000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015e9400 session 0x562fffba0c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630007fec40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c348c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721561 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001f02000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 67919872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.6 total, 600.0 interval#012Cumulative writes: 46K writes, 180K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5448 writes, 20K keys, 5448 commit groups, 1.0 writes per commit group, ingest: 23.91 MB, 0.04 MB/s#012Interval WAL: 5448 writes, 2224 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.338871002s of 17.436998367s, submitted: 22
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 64675840 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832685 data_alloc: 218103808 data_used: 12786707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d82000/0x0/0x4ffc00000, data 0x3716bf7/0x38aa000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563000520380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689800 session 0x56300205c1c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563002c42a80
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x5630008b3340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.907081604s of 13.095699310s, submitted: 66
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x563001d9ba40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x562fffb87340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x56300238b180
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001d9afc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563002574000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906016 data_alloc: 218103808 data_used: 12786707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300b960c00 session 0x563001f02fc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.627857208s of 16.728017807s, submitted: 31
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 56852480 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708907127s of 12.569359779s, submitted: 90
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x562fffb861c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da9c00 session 0x5630029ad340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4039350 data_alloc: 234881024 data_used: 25961491
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x56300273f340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 58261504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d76000/0x0/0x4ffc00000, data 0x3722bf7/0x38b6000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 58253312 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3840646 data_alloc: 218103808 data_used: 12786707
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d6f000/0x0/0x4ffc00000, data 0x3729bf7/0x38bd000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba16c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x5630022a9dc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 58277888 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630008c4c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 63307776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006905400 session 0x562fffb86700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300205c1c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x5630007fec40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba0000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.295139313s of 39.636463165s, submitted: 162
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 62242816 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630022f2c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x563002c34c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffb87c00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x5630029ad6c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300273f500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x563002473340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8c00 session 0x563002c42380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffba0c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x563002c42a80
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353583104 unmapped: 63348736 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 63676416 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.666275024s of 17.794780731s, submitted: 43
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 57753600 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e628e000/0x0/0x4ffc00000, data 0x4063c58/0x41f8000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 57647104 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932965 data_alloc: 234881024 data_used: 16882159
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933117 data_alloc: 234881024 data_used: 16882159
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933373 data_alloc: 234881024 data_used: 16890351
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.938798904s of 17.121707916s, submitted: 144
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002538c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b000 session 0x563000520c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x562fffce6fc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001da1c00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a99800 session 0x563000520380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630045ad000 session 0x5630008b3340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x563000340700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002147a40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967924 data_alloc: 234881024 data_used: 18733551
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.814036369s of 14.911996841s, submitted: 29
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3968068 data_alloc: 234881024 data_used: 18733551
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [0,0,0,0,0,3,6])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 54755328 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.440196991s of 14.719416618s, submitted: 60
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3941854 data_alloc: 234881024 data_used: 16951791
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x563002574700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x562fffba1340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4098c58/0x422d000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a41000 session 0x563002575500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.221851349s of 42.453300476s, submitted: 113
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x562fffadfa40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.046377182s of 20.163766861s, submitted: 13
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 61046784 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6aee000/0x0/0x4ffc00000, data 0x3804bc4/0x3996000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a74000/0x0/0x4ffc00000, data 0x387ebc4/0x3a10000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3904073 data_alloc: 234881024 data_used: 16845112
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900481 data_alloc: 234881024 data_used: 16845112
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.004393 took=0.000057s
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a5b000/0x0/0x4ffc00000, data 0x389fbc4/0x3a31000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.036879539s of 10.267202377s, submitted: 60
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a3c000/0x0/0x4ffc00000, data 0x38bebc4/0x3a50000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98400 session 0x562fffb86700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3901945 data_alloc: 234881024 data_used: 16853304
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563004b6a400 session 0x563001f02e00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563002a41000 session 0x5630008c41c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007a98400 session 0x5630020116c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 382451712 unmapped: 47079424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 285 heartbeat osd_stat(store_statfs(0x4e5986000/0x0/0x4ffc00000, data 0x496f824/0x4b04000, compress 0x0/0x0/0x0, omap 0x6feaa, meta 0x15700156), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007aaac00 session 0x5630015e68c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 286 ms_handle_reset con 0x56300334ec00 session 0x562fffb87a40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371097600 unmapped: 58433536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x562fffc0dc00 session 0x562fffba1a40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563002a41000 session 0x562fffce76c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x56300334ec00 session 0x563002c42380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007a98400 session 0x563000341880
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4041853 data_alloc: 234881024 data_used: 23828792
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e5974000/0x0/0x4ffc00000, data 0x497df6a/0x4b14000, compress 0x0/0x0/0x0, omap 0x70a87, meta 0x156ff579), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529747963s of 10.955096245s, submitted: 94
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007aaac00 session 0x56300273ec40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844847 data_alloc: 218103808 data_used: 4905272
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6b84000/0x0/0x4ffc00000, data 0x3771f6a/0x3908000, compress 0x0/0x0/0x0, omap 0x70ebf, meta 0x156ff141), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6b7f000/0x0/0x4ffc00000, data 0x37739e9/0x390b000, compress 0x0/0x0/0x0, omap 0x70f46, meta 0x156ff0ba), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a46000 session 0x563002c42a80
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x5630022a88c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x5630022a9340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849155 data_alloc: 218103808 data_used: 4905272
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007a98400 session 0x562fffba0000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007aaac00 session 0x5630007ffc00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x562fffd93800 session 0x563002c43dc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x56300222c700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369156096 unmapped: 60375040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x562fffba0e00
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e673b000/0x0/0x4ffc00000, data 0x3bb99e9/0x3d51000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734997749s of 12.027949333s, submitted: 52
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369434624 unmapped: 60096512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6431000/0x0/0x4ffc00000, data 0x3ec39e9/0x405b000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968002 data_alloc: 234881024 data_used: 25452989
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 289 ms_handle_reset con 0x56300645f400 session 0x563002a81340
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0b577/0x2fa3000, compress 0x0/0x0/0x0, omap 0x712fd, meta 0x156fed03), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845228 data_alloc: 218103808 data_used: 9718717
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.752949715s of 10.863764763s, submitted: 72
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3853182 data_alloc: 218103808 data_used: 9947483
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854974 data_alloc: 218103808 data_used: 10385755
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630040a7000 session 0x562fffc00a80
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300157f000 session 0x562fffc01500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x29c6ff6/0x2b60000, compress 0x0/0x0/0x0, omap 0x71519, meta 0x156feae7), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: mgrc ms_handle_reset ms_handle_reset con 0x563002a3f400
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: mgrc handle_mgr_configure stats_period=5
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300dcfd000 session 0x5630008c5a40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a3ec00 session 0x563002c43500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563006914800 session 0x562fffc00000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x563001f03500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x5630029ac8c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630007ff500
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x563002146380
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.592243195s of 47.662048340s, submitted: 52
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [0,0,0,2])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630045adc00 session 0x563002a6b6c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630029ac8c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71ff000/0x0/0x4ffc00000, data 0x30f3ff6/0x328d000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x562fffba1dc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x562fffc00000
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x5630008c5a40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836513 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362455040 unmapped: 67076096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71fe000/0x0/0x4ffc00000, data 0x30f4019/0x328e000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300329f400 session 0x563002c43dc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x562fffba0540
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300b961400 session 0x56300273fdc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.875541687s of 39.054084778s, submitted: 43
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.903411865s of 30.936098099s, submitted: 22
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 291 ms_handle_reset con 0x5630040a7c00 session 0x562fffa6b6c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 118.484336853s of 118.528816223s, submitted: 35
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x563006915000 session 0x5630022a9880
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621740 data_alloc: 218103808 data_used: 4901211
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x562fffb12c00 session 0x5630008c4c40
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621608 data_alloc: 218103808 data_used: 4901211
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 293 ms_handle_reset con 0x563003da8400 session 0x56300238afc0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592192 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 293 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.014619827s of 13.094200134s, submitted: 43
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 294 ms_handle_reset con 0x563002a3c400 session 0x5630003e6700
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09a000/0x0/0x4ffc00000, data 0x253e12/0x3f0000, compress 0x0/0x0/0x0, omap 0x75c00, meta 0x156fa400), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104069710s of 12.140682220s, submitted: 28
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598801 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8099000/0x0/0x4ffc00000, data 0x2255891/0x23f3000, compress 0x0/0x0/0x0, omap 0x7595f, meta 0x156fa6a1), peers [0,2] op hist [0,0,0,1])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 ms_handle_reset con 0x5630040a6c00 session 0x562fffba01c0
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:12 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.6 total, 600.0 interval#012Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 48K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2185 writes, 8173 keys, 2185 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s#012Interval WAL: 2185 writes, 915 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.617111206s of 111.771911621s, submitted: 17
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359440384 unmapped: 70090752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 297 ms_handle_reset con 0x5630007a8800 session 0x563002c356c0
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359464960 unmapped: 70066176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 70057984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 ms_handle_reset con 0x563006907c00 session 0x563002472380
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e7420000/0x0/0x4ffc00000, data 0x2ec8fec/0x306a000, compress 0x0/0x0/0x0, omap 0x76256, meta 0x156f9daa), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.149941444s of 13.237030983s, submitted: 29
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.955919266s of 22.110708237s, submitted: 90
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 69885952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 299 ms_handle_reset con 0x56300212a000 session 0x56300238a000
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683971 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 69763072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 69713920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 69713920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 69681152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 69664768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 69664768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 69599232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 69591040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360038400 unmapped: 69492736 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 69402624 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 69337088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 69337088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 69206016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 69206016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 69181440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 69181440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 69156864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 69156864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 69140480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 69140480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 69091328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 69066752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 366.986419678s of 367.067871094s, submitted: 52
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 301 ms_handle_reset con 0x5630035d8800 session 0x5630022256c0
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 69009408 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360529920 unmapped: 69001216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 301 heartbeat osd_stat(store_statfs(0x4ea084000/0x0/0x4ffc00000, data 0x25fde7/0x406000, compress 0x0/0x0/0x0, omap 0x77a8c, meta 0x156f8574), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360529920 unmapped: 69001216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628525 data_alloc: 218103808 data_used: 248120
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 302 ms_handle_reset con 0x563006c59c00 session 0x563000520c40
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 302 heartbeat osd_stat(store_statfs(0x4ea081000/0x0/0x4ffc00000, data 0x2619b4/0x408000, compress 0x0/0x0/0x0, omap 0x778e2, meta 0x156f871e), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 68943872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 68927488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 303 heartbeat osd_stat(store_statfs(0x4ea07f000/0x0/0x4ffc00000, data 0x26344f/0x40b000, compress 0x0/0x0/0x0, omap 0x7850a, meta 0x156f7af6), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 68927488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700002 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 304 ms_handle_reset con 0x563007a99400 session 0x5630004ff180
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e940b000/0x0/0x4ffc00000, data 0xed5017/0x107f000, compress 0x0/0x0/0x0, omap 0x787f6, meta 0x156f780a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702408 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.496150970s of 13.664681435s, submitted: 92
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e940b000/0x0/0x4ffc00000, data 0xed5017/0x107f000, compress 0x0/0x0/0x0, omap 0x787f6, meta 0x156f780a), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 68820992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 68820992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 68780032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 68780032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.422519684s of 45.444473267s, submitted: 13
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 68771840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 306 ms_handle_reset con 0x563002a3c400 session 0x563002574380
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 306 heartbeat osd_stat(store_statfs(0x4ea076000/0x0/0x4ffc00000, data 0x268676/0x414000, compress 0x0/0x0/0x0, omap 0x78ccf, meta 0x156f7331), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643995 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 306 heartbeat osd_stat(store_statfs(0x4ea076000/0x0/0x4ffc00000, data 0x268676/0x414000, compress 0x0/0x0/0x0, omap 0x78ccf, meta 0x156f7331), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361857024 unmapped: 67674112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361857024 unmapped: 67674112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361889792 unmapped: 67641344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361889792 unmapped: 67641344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361930752 unmapped: 67600384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361930752 unmapped: 67600384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361955328 unmapped: 67575808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361955328 unmapped: 67575808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361971712 unmapped: 67559424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361971712 unmapped: 67559424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 67493888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'config show' '{prefix=config show}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362217472 unmapped: 67313664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 67338240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'log dump' '{prefix=log dump}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 373260288 unmapped: 56270848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'perf dump' '{prefix=perf dump}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'perf schema' '{prefix=perf schema}'
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 67133440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 67133440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 67133440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 67125248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5401.6 total, 600.0 interval
Cumulative writes: 49K writes, 189K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 49K writes, 18K syncs, 2.66 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 609 writes, 1700 keys, 609 commit groups, 1.0 writes per commit group, ingest: 0.70 MB, 0.00 MB/s
Interval WAL: 609 writes, 282 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 67125248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362438656 unmapped: 67092480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x56300645e800 session 0x5630003e61c0
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x563004b6cc00 session 0x5630029ac540
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x5630015b7000 session 0x563000521c00
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361381888 unmapped: 68149248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361381888 unmapped: 68149248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361381888 unmapped: 68149248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361406464 unmapped: 68124672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361414656 unmapped: 68116480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361439232 unmapped: 68091904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x56300212cc00 session 0x563002c34380
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 68067328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 68067328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 68067328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 151.738861084s of 151.820251465s, submitted: 51
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 66945024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 06:14:13 np0005634017 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
